I'm running an M1 with 16 GB of RAM. I'd like to get the speed and understanding that Claude AI provides. I can throw it some code and documentation and it writes back very good advice.
What kind of models and extra hardware do I need to replicate the experience locally? I'm using Mistral 7B right now.
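For a sense of what fits in 16 GB, a rough back-of-envelope estimate helps: weights take roughly params × bits-per-param / 8 bytes, plus some headroom for the KV cache and runtime. The numbers below are hypothetical round figures for illustration, not benchmarks.

```python
def est_ram_gb(params_b: float, bits_per_param: float, overhead_gb: float = 2.0) -> float:
    """Approximate RAM in GB: model weights plus a flat overhead allowance
    (hypothetical ~2 GB for KV cache and runtime)."""
    weights_gb = params_b * bits_per_param / 8  # 1B params at 8 bits ~= 1 GB
    return weights_gb + overhead_gb

# A 7B model at 4-bit quantization: ~3.5 GB of weights plus overhead,
# comfortably within 16 GB of unified memory.
print(round(est_ram_gb(7, 4), 1))
```

By this estimate a quantized 7B model is easy on 16 GB, a 13B at 4-bit is still feasible, and 30B+ starts getting tight.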
There is no way it has “undiluted” 100k context. https://news.ycombinator.com/item?id=36374936
But yeah, it IS impressive.