• 2 Posts
  • 15 Comments
Joined 11 months ago
Cake day: October 27th, 2023




  • High Paying, Highly Innovative, Highly Hyped = a recipe for an oversaturation of students studying it, an oversaturation of mid-career folks switching their careers into it, and an oversaturation of non-tech folks completing every LLM and DL certificate to post on their LinkedIn.

    What happens with this oversaturation? You raise the bar to entry - just like what leetcode culture did.

    Yay toxic elitism 🤸‍♂️






  • mofoss to Machine Learning@academy.garden · [D] Naive bayes
    10 months ago

    P(K=1) = 1/2

    P(a=1|K=1) = P(a=1,K=1)/P(K=1) = (1/4)/(1/2)=1/2

    P(b=1|K=1) = P(b=1,K=1)/P(K=1) = (1/8)/(1/2)=1/4

    P(c=0|K=1) = P(c=0,K=1)/P(K=1) = (1/4)/(1/2)=1/2

    P(a=1, b=1, c=0, K=1) = 0

    P(a=1, b=1, c=0, K=0) = 1/8

    [0.5 * 0.25 * 0.5] / (0 + 1/8) = (1/16) / (1/8) = 1/2

    For conditionals, convert them into joints and priors first, and THEN use the table to count instances out of the N samples.

    P(X|Y) = P(X,Y)/P(Y)

    :)
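
    If it helps, here's the same recipe as a small Python sketch. The sample table below is hypothetical (I constructed it to be consistent with the probabilities quoted above, since the thread's actual table isn't shown here); the point is the mechanics: joints and priors come straight from counts, and a conditional is just their ratio.

    # Hypothetical sample table, chosen to match the probabilities above.
    # Each row is one observed sample: (a, b, c, K).
    samples = [
        (1, 0, 0, 1),
        (1, 1, 1, 1),
        (0, 0, 0, 1),
        (0, 0, 1, 1),
        (1, 1, 0, 0),
        (0, 0, 1, 0),
        (1, 0, 1, 0),
        (0, 1, 1, 0),
    ]
    N = len(samples)
    NAMES = ("a", "b", "c", "K")

    def p(**conditions):
        # Joint probability: count the rows matching every condition, out of N.
        hits = sum(
            all(row[NAMES.index(name)] == value for name, value in conditions.items())
            for row in samples
        )
        return hits / N

    # P(X|Y) = P(X,Y)/P(Y): every conditional is a joint over a prior.
    print(p(K=1))                # 0.5
    print(p(a=1, K=1) / p(K=1))  # 0.5
    print(p(b=1, K=1) / p(K=1))  # 0.25
    print(p(c=0, K=1) / p(K=1))  # 0.5

    # The combination step from the comment:
    numer = (p(a=1, K=1) / p(K=1)) * (p(b=1, K=1) / p(K=1)) * (p(c=0, K=1) / p(K=1))
    denom = p(a=1, b=1, c=0, K=1) + p(a=1, b=1, c=0, K=0)
    print(numer / denom)         # 0.5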




  • mofoss to Machine Learning@academy.garden · [D] How do you keep up ?
    10 months ago

    I don’t. I’d rather focus on drilling every nook and cranny of the attention mechanism, so that reading any of these papers becomes easier.

    I’d say if you truly understand Transformers both theoretically and intuitively, you’re already in the top 10% of MLEs. Though I’d imagine most PhDs understand it.
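
    For anyone doing that drilling: the core of the mechanism is genuinely small. Here's a minimal NumPy sketch of scaled dot-product attention, softmax(QKᵀ/√d)·V, for a single head with no masking and no learned projections (the function names are mine, not from any particular library):

    import numpy as np

    def softmax(x, axis=-1):
        # Subtract the max for numerical stability before exponentiating.
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def attention(Q, K, V):
        # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)     # (n_q, n_k) similarity matrix
        weights = softmax(scores, axis=-1)  # each row sums to 1
        return weights @ V                  # weighted average of the values

    # Tiny example: 3 query tokens attending over 4 key/value tokens, d = 8.
    rng = np.random.default_rng(0)
    Q = rng.normal(size=(3, 8))
    K = rng.normal(size=(4, 8))
    V = rng.normal(size=(4, 8))
    print(attention(Q, K, V).shape)  # (3, 8)

    Everything else in a Transformer block (multiple heads, the Q/K/V projections, masking, residuals) is layered on top of those few lines.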