• its_just_andyB · 1 year ago

    If you’re interested in running your own models for any reason, you really should build your own evaluation dataset for the scenarios you care about.

    At this point, all the public benchmarks are such a mess. Do you really care whether the model you select has the highest MMLU score, or only whether it performs best on the scenarios you actually need? Something like the sketch below is enough to get started.

  • ambient_temp_xenoB · 1 year ago

    To be fair, it’s pretty clear that OpenAI updates their models with every kind of test people throw at them as well.

  • DreamGenXB · 1 year ago

    It’s inevitable that people will game the system when doing so is this easy and the payoff can be this large. Not so long ago, people could still get huge VC checks just by showing off GitHub stars or benchmark numbers.

  • Monkey_1505B · 1 year ago

    The problem isn’t the training data; it’s the benchmarks.