If I have multiple 7B models, each trained on one specific topic (e.g. roleplay, math, coding, history, politics…), and an interface that decides, depending on the context, which model to use, could this outperform bigger models while being faster?
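
A minimal sketch of the routing idea described in the question: classify the incoming prompt, then dispatch it to the matching specialist. The topic labels, keyword lists, and `generate()` stub are illustrative assumptions, not a real library API; a production router would more likely be a small classifier or embedding model.

```python
# Hypothetical mapping of topics to specialist 7B models (illustrative names).
SPECIALISTS = {
    "math": "math-7b",
    "coding": "code-7b",
    "history": "history-7b",
}

# Toy keyword lists standing in for a real topic classifier.
KEYWORDS = {
    "math": ("integral", "solve", "equation"),
    "coding": ("python", "function", "bug"),
    "history": ("war", "century", "empire"),
}

def route(prompt: str) -> str:
    """Pick the specialist whose keywords best match the prompt."""
    text = prompt.lower()
    scores = {topic: sum(kw in text for kw in kws)
              for topic, kws in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    # Fall back to a generalist model when nothing matches.
    return SPECIALISTS[best] if scores[best] > 0 else "general-7b"

def generate(model: str, prompt: str) -> str:
    # Placeholder: in practice this would call the chosen model's API.
    return f"[{model}] response to: {prompt}"

if __name__ == "__main__":
    prompt = "Solve this equation: x^2 = 9"
    print(generate(route(prompt), prompt))
```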

  • vasileerB
    1 year ago

    The question was whether multiple small models can beat a single big model while also keeping the speed advantage, and the answer is yes. An example of that is MoE (Mixture of Experts), which is effectively a collection of small expert networks inside a single big model, with a router deciding which expert handles each token.

    https://huggingface.co/google/switch-c-2048 (the Switch Transformer, with 2048 experts) is one such example.
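
A minimal sketch of the top-1 ("switch") routing used in models like the Switch Transformer: each token is sent to exactly one expert, so only a small fraction of the total parameters is active per token, which is where the speed advantage comes from. Layer sizes and weights here are illustrative toy values, not the real model's.

```python
import numpy as np

rng = np.random.default_rng(0)

D, N_EXPERTS, HIDDEN = 16, 4, 32  # toy sizes, not the real model's

# Each "expert" is a tiny two-layer MLP; the router is a linear layer.
experts = [
    (rng.standard_normal((D, HIDDEN)) * 0.02,
     rng.standard_normal((HIDDEN, D)) * 0.02)
    for _ in range(N_EXPERTS)
]
router_w = rng.standard_normal((D, N_EXPERTS)) * 0.02

def switch_layer(tokens: np.ndarray) -> np.ndarray:
    """Top-1 routing: each token goes to exactly one expert, so per-token
    compute stays roughly one expert's worth regardless of expert count."""
    logits = tokens @ router_w                       # (T, N_EXPERTS)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)            # softmax over experts
    choice = probs.argmax(-1)                        # one expert per token
    out = np.empty_like(tokens)
    for e, (w1, w2) in enumerate(experts):
        mask = choice == e
        if mask.any():
            h = np.maximum(tokens[mask] @ w1, 0.0)   # ReLU MLP
            # Scale the output by the gate probability, as in Switch Transformer.
            out[mask] = (h @ w2) * probs[mask, e:e + 1]
    return out

tokens = rng.standard_normal((8, D))
print(switch_layer(tokens).shape)  # (8, 16)
```

The difference from the question's setup is granularity: the router picks an expert per token inside one network, rather than picking a whole model per conversation, but the underlying trade-off (more total parameters, constant active compute) is the same.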