So Mistral-7b is a pretty impressive 7B param model … but why is it so capable? Do we have any insights into its dataset? Was it trained very far beyond the scaling limit? Any attempts at open reproductions or merges to scale up # of params?
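
(For context on the "scaling limit" part of the question: the usual reference point is the Chinchilla compute-optimal heuristic from Hoffmann et al. 2022, commonly quoted as roughly 20 training tokens per parameter. A rough back-of-envelope for a 7B model, assuming that heuristic:

```latex
% Chinchilla rule of thumb: D_opt ≈ 20 N
% Mistral-7B has N ≈ 7.3e9 parameters, so
D_{\mathrm{opt}} \approx 20 \times 7.3 \times 10^{9} \approx 1.5 \times 10^{11}\ \text{tokens}
```

Mistral never disclosed its training token count, so "trained far beyond the scaling limit" would mean going well past ~150B tokens; for comparison, LLaMA-2 reportedly trained on ~2T.)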

  • DorialexandreB · 1 year ago

    My current hunch is that they use a lot of online resources that aren't easily accessible (including a specific archive owned by someone named Anna).