Look at this: apart from Llama 1, all the other “base” models will likely answer “language” after “As an AI”. That means Meta, Mistral AI and 01-ai (the company that made Yi) likely trained their “base” models on GPT instruct datasets to inflate benchmark scores and make it look like the “base” models had a lot of potential. We got duped hard on that one.
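The probe described above can be sketched in a few lines: feed a base model the prompt “As an AI” and check whether its single most likely next token is “language”, the telltale continuation of ChatGPT’s “As an AI language model” boilerplate. This is a minimal sketch assuming a Hugging Face–style model/tokenizer interface; the checkpoint name at the bottom is just an example, not an endorsement of any specific model from the thread.

```python
import torch

PROMPT = "As an AI"
TELLTALE = "language"  # continuation of ChatGPT's "As an AI language model"


def next_token(model, tok, prompt: str = PROMPT) -> str:
    """Return the model's single most likely next token after `prompt`."""
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Greedy pick at the last position = the model's top continuation.
    return tok.decode(logits[0, -1].argmax().item()).strip()


def looks_contaminated(model, tok) -> bool:
    """True if a *base* model parrots the ChatGPT boilerplate continuation."""
    return next_token(model, tok) == TELLTALE


if __name__ == "__main__":
    # Example usage with transformers (any base checkpoint you want to probe).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "mistralai/Mistral-7B-v0.1"  # example name, swap in your own
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)
    print(name, "contaminated?", looks_contaminated(model, tok))
```

A greedy single-token check is a blunt instrument; looking at the full next-token distribution (e.g. the rank or probability of “ language”) would give a softer signal, but the one-token version matches the test the thread describes.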
The problem is trusting these common benchmarks in the first place… and VCs making investment decisions based on them.
It’s insane. It’s like a years-old, published SAT test being the only factor for getting a job or an investment, and no one bothering to check whether you’re just blatantly cheating instead of cleverly cheating.
I know, right? Getting that much investment on something you can cheat this easily makes me sick.