So Mistral-7B is a pretty impressive 7B-param model … but why is it so capable? Do we have any insights into its dataset? Was it trained far beyond the Chinchilla-optimal token count? Any attempts at open reproductions, or merges to scale up the number of params?
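
On the merge point, what I have in mind is something like the "depth upscaling" frankenmerge trick: duplicate a band of decoder layers to grow the param count, then (ideally) continue pretraining. Here's a rough, untested sketch using the Hugging Face transformers API; the layer ranges are arbitrary examples, not any released model's recipe:

```python
# Rough sketch of depth upscaling: grow a 7B model by duplicating
# a band of its decoder layers. Layer ranges are illustrative only.
import copy

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16
)

layers = model.model.layers  # nn.ModuleList of 32 decoder blocks
# Keep layers 0-23, then append copies of layers 8-31 -> 48 layers total.
merged = list(layers[:24]) + [copy.deepcopy(l) for l in layers[8:]]
model.model.layers = torch.nn.ModuleList(merged)
model.config.num_hidden_layers = len(merged)

# Recent transformers versions track a per-layer index for the KV cache;
# re-number the copied layers so generation doesn't reuse cache slots.
for i, layer in enumerate(model.model.layers):
    layer.self_attn.layer_idx = i
```

Stacking 48 layers like that lands you somewhere around 10–11B params from a 7B base, but whether it actually helps without further training is part of what I'm asking.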

  • CharuruB · 1 year ago
    The results are okay, but I’m hard-pressed to call it “very capable”. My perspective is that other, bigger models are making mistakes they shouldn’t be making because they were “trained wrong”.