Look at this: apart from Llama 1, all the other “base” models will likely answer “language” after “As an AI”. That suggests Meta, Mistral AI and 01-ai (the company behind Yi) trained their “base” models on GPT-derived instruct data, inflating benchmark scores and making the “base” models look like they had more potential than they do. We got duped hard on that one.
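If you want to run this probe yourself, here's a minimal sketch using the Hugging Face `transformers` library. It feeds a base model the prompt “As an AI” and prints the single most likely next token; GPT-2 is used below only as a small stand-in checkpoint, so swap in whatever base model you want to test. A model that completes with “ language” is showing the ChatGPT-contamination tell described above.

```python
# Sketch: probe a base model's most likely continuation of "As an AI".
# GPT-2 is a placeholder checkpoint; substitute any causal LM from the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def top_next_token(model_name: str, prompt: str = "As an AI") -> str:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Logits at the last position score the token that would come next.
    next_id = int(logits[0, -1].argmax())
    return tokenizer.decode(next_id)

if __name__ == "__main__":
    # If this prints " language", the model has likely seen GPT-style
    # "As an AI language model..." boilerplate during training.
    print(repr(top_next_token("gpt2")))
```

Greedy argmax is enough here since we only care about the top-1 token, not a full sampled completion.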
Llama 2 was pre-trained on old data (from before ChatGPT output had significantly poisoned web scrapes):
https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md
“Data Freshness The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.”
“Model Dates Llama 2 was trained between January 2023 and July 2023.”
StableLM-3B was trained on more recent data (cutoff of March 2023), yet it doesn’t show this level of ChatGPT poisoning:
https://huggingface.co/stabilityai/stablelm-base-alpha-3b-v2
https://preview.redd.it/gl46fo50n10c1.png?width=518&format=png&auto=webp&s=c7cae52b292dcba45dee735a4ca7efac5630a4ae