We’ve seen pretty amazing performance from Mistral 7B compared with Llama 34B and Llama 2 13B. I’m curious: theoretically, would it be possible to build an SLM with 7–8B parameters that outperforms GPT-4 on all tasks? If so, what are the potential difficulties and problems to solve? And when would you expect such an SLM to arrive?

PS: sorry for the typo. This is my real question.

Is it possible for an SLM to outperform GPT-4 in all tasks?

  • FPhamB
    1 year ago

    Short answer: Nope.

    Long answer: Nooooooooope