According to this tweet,

when gpt4 first finished training it didn’t actually work very well and the whole team thought it’s over, scaling is dead…until greg went into a cave for weeks and somehow magically made it work

So GPT-4 was apparently kind of broken at first. Then Greg spent a few weeks trying to fix it, and somehow it worked.

So why did it not work at first, and how did they fix it?
I think this is an important question for the OSS community.

  • troposferB
    10 months ago

    These are just stories: one man solves it all. Convenient timing, by the way, with all this OpenAI saga.