Look at this: apart from Llama 1, all the other “base” models will likely answer “language” after “As an AI”. That suggests Meta, Mistral AI, and 01-ai (the company behind Yi) likely trained their “base” models on GPT instruct datasets to inflate benchmark scores and make it look like the “base” models had a lot of potential. We got duped hard on that one.

https://preview.redd.it/vqtjkw1vdyzb1.png?width=653&format=png&auto=webp&s=91652053bcbc8a7b50bced9bbf8638fa417387bb
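In case anyone wants to reproduce this, here is a minimal sketch of the probe, assuming the Hugging Face transformers library; the checkpoint name is just an example, not necessarily one of the exact models in the screenshot.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example checkpoint for illustration; swap in any "base" model you want to probe.
model_name = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

# The probe: feed the bare prefix and inspect what the model wants to say next.
# A web-text-only base model has no special reason to continue "As an AI" with
# " language", while a model tuned on GPT-style instruct data tends to complete
# the stock phrase "As an AI language model...".
prompt = "As an AI"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # logits for the next token

# Print the top-5 next-token candidates with their probabilities.
probs = logits.softmax(dim=-1)
top = torch.topk(probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}: {prob.item():.3f}")
```

If “ language” dominates the top of that list on a supposed base model, that is the tell the post is pointing at.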

  • mcmoose1900 · 10 months ago

    The problem is trusting these common benchmarks in the first place… and VCs making investment decisions based on them.

    It’s insane. It’s like a years-old, published SAT test is the only factor for getting a job or an investment, and no one bothered to check whether you’re blatantly cheating instead of cleverly cheating.

    • Wonderful_Ad_5134 (OP) · 10 months ago

      I know, right? Getting that much investment for something you can so easily cheat on makes me sick.