I'm running an M1 with 16 GB. I'd like to get the speed and understanding that Claude AI provides. I can throw it some code and documentation and it writes back very good advice.

What kind of models and extra hardware do I need to replicate the experience locally? I am using Mistral 7B right now.

  • AutomataManifoldB · 11 months ago

    Claude has two big things: very long context length and high understanding (or whatever we want to call it).

    The context length is the hardest part to replicate locally at the moment, I think, though "understanding" is hard to measure in the first place.
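
    For a rough sense of why long context is hard on 16 GB, here's a back-of-the-envelope sketch. The architecture numbers are assumed from Mistral 7B's published config (32 layers, 8 KV heads via grouped-query attention, head dim 128) with an fp16 KV cache, and "~4 GiB" for 4-bit quantized 7B weights is an approximation, not an exact figure:

    ```python
    def kv_cache_bytes(ctx_len, n_layers=32, n_kv_heads=8, head_dim=128, dtype_bytes=2):
        """RAM for the attention KV cache: keys + values for every layer,
        at every token position in the context window."""
        return 2 * n_layers * n_kv_heads * head_dim * dtype_bytes * ctx_len

    weights_gib = 4.0  # assumed: 4-bit quantized 7B weights, roughly 4 GiB

    for ctx in (4096, 8192, 32768):
        cache_gib = kv_cache_bytes(ctx) / 2**30
        print(f"ctx {ctx:>6}: KV cache ~{cache_gib:.2f} GiB, "
              f"total ~{weights_gib + cache_gib:.1f} GiB")
    ```

    The KV cache grows linearly with context length, so on a 16 GB machine (where the OS and GPU share the same unified memory) a Claude-scale context window eats your headroom fast even with a small model.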