• 3 Posts
  • 7 Comments
Joined 10 months ago
Cake day: November 13th, 2023

  • You cannot. Not because you don’t have money, but because you’re asking this question. I have a similar problem with my brother: we’d often discuss this or that idea, and he’d often kill the conversation with “they have money/their dad’s money, we don’t”. There is probably a DNA explanation for this, even though my extended family turned out to be quite entrepreneurial; I’d say half of them spent decades, if not their whole lives, running corner shops, restaurants, and car dealerships. And yeah, we were people “without any money”. Maybe the car dealership was built on capital earned on the food side, but otherwise at no point did anyone in my extended family have “any money”.

    So, on the face of it, my brother could not find any inspiration, or any contradiction of his excuse, among the dozen or two uncles and cousins who became entrepreneurs “without any money”. That’s probably you too.

  • GermanK20B to LocalLLaMA@poweruser.forum · disappointed by trainers · 10 months ago

    Maybe you overbought (like most of us) the “AI” idea. The models have, in somewhat random ways, compressed the internet, more or less, and then try to decompress it. As their own warnings say, you’re most likely to get out what was most often put in, so you’re only guaranteed the basics that are repeated a million times; everything else is a game of chances. Now, the reason they have their various benchmarks is that they cannot really evaluate the way you’re trying to evaluate, with your brain. Nor can they predict what will make their models better, not even on their own benchmarks. I’d say it is common knowledge that the kind of “thinking” you’re looking for has only just started to appear, with tools built on top of LLMs.
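    To picture the “game of chances” point, here’s a throwaway sketch. Every number in it is invented for illustration (the recall_probability curve is not how any real model works); it just shows the shape of the claim: things repeated a million times come out nearly every time, rare things come out almost never.

    ```python
    import random

    random.seed(0)

    # Invented toy curve: recall gets more likely the more often a
    # "fact" appeared in training, saturating toward certainty.
    def recall_probability(occurrences: int) -> float:
        return occurrences / (occurrences + 1000)

    for fact, seen in [("basic fact", 1_000_000), ("niche fact", 500), ("rare fact", 5)]:
        # 1000 simulated queries; count how often the fact is "recalled"
        hits = sum(random.random() < recall_probability(seen) for _ in range(1000))
        print(f"{fact:>10} (seen {seen:>9,}x): recalled {hits / 10:.1f}% of the time")
    ```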

    And one last thing the average consumer has not understood about the benchmarks: when a vendor’s own tests move from, let’s say, 74% to 75%, there’s often no real pattern to how they got there; maybe they ran the test 10 different times, 9 runs landed at 73%, and they only show us the one attempt that got lucky. So when they climb higher and higher on their own tests, they’re also committing the ancient sin of “overfitting”: this train-and-finetune, rinse-and-repeat process ends up answering questions “for the wrong reasons”, but they don’t care as long as they can show their boss, or the press, a better percentage. So the models might move from 75% to 85% on their benchmarks and you might get even less of what you’re looking for. Implied in what I wrote is that we need better tools to look into explainable models, and to weed out the bad explanations with our own brains!
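    The 74%-to-75% story can even be put in numbers. A minimal simulation, assuming a fixed “true” accuracy of 74% and about ±1 point of run-to-run noise (both numbers invented to match the example above): reporting only the best of 10 runs inflates the headline score by more than a point, with zero real improvement.

    ```python
    import random
    import statistics

    random.seed(42)

    TRUE_SCORE = 74.0  # assumed "real" benchmark accuracy, in percent
    NOISE_SD = 1.0     # assumed run-to-run noise (sampling, seeds, prompt order)

    def one_run() -> float:
        # One noisy benchmark evaluation of the same, unchanged model.
        return random.gauss(TRUE_SCORE, NOISE_SD)

    # A lab that runs the benchmark 10 times and reports only its best run,
    # repeated 10,000 times to see the average inflation.
    reported = [max(one_run() for _ in range(10)) for _ in range(10_000)]

    print(f"true score:                {TRUE_SCORE:.1f}%")
    print(f"'best of 10' reported avg: {statistics.mean(reported):.1f}%")
    ```

    Under these assumptions the reported “best run” averages around 75.5%, so the move from 74% to 75% needs no actual progress at all.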