Announcing Llama-rephraser: 13B models reaching GPT-4 performance in major benchmarks (MMLU/GSM-8K/HumanEval)!
To ensure result validity, we followed Open...
If you're interested in running your own models for any reason, you really should build your own evaluation dataset for the scenarios you care about (see the sketch below).

At this point, all the public benchmarks are such a mess. Do you really care whether the model you select has the highest MMLU score, or only that it's the best-performing model for the scenarios you actually need?
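A minimal sketch of what that might look like, assuming a JSONL file of your own prompt/expected pairs and a hypothetical query_model function standing in for whatever model call you actually use (local or API):

```python
import json


def query_model(prompt: str) -> str:
    """Placeholder: swap in your actual model call."""
    raise NotImplementedError


def run_eval(path: str) -> float:
    """Score a JSONL file of {"prompt": ..., "expected": ...} cases."""
    correct = total = 0
    with open(path) as f:
        for line in f:
            case = json.loads(line)
            answer = query_model(case["prompt"])
            # Exact-match scoring; replace with whatever fits your task
            # (substring match, regex, an LLM judge, etc.).
            correct += int(answer.strip() == case["expected"].strip())
            total += 1
    return correct / total if total else 0.0


if __name__ == "__main__":
    print(f"accuracy: {run_eval('my_eval.jsonl'):.1%}")
```

Even a few dozen hand-written cases drawn from your real workload will tell you more than a leaderboard delta, and unlike public benchmarks, nobody has trained on your private eval set.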