• 3 Posts
  • 7 Comments
Joined 1 year ago
Cake day: October 30th, 2023




  • Not for the kind of merging I’ve seen. But I remember a paper back in the day that suggested you could find high-dimensional axes within different models, and if you rotated the weights so those axes lined up, you could merge the models to your advantage and keep knowledge from both seed models. That included models trained from different initializations.

    I think the only reason this franken-merging works is that people are mostly just merging finetunes of the same base, so those high-dimensional axes are already aligned well enough that the merges work. (A rough sketch of the rotate-then-merge idea is below.)
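
    Purely as an illustration of that alignment idea, not the paper’s actual method: a minimal numpy sketch that aligns one weight matrix to another with an orthogonal Procrustes rotation before interpolating. The name rotate_and_merge and the shapes are made up for the example.

    ```python
    import numpy as np

    def rotate_and_merge(w_a: np.ndarray, w_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
        # Orthogonal Procrustes: find the rotation R that best maps w_b onto w_a,
        # i.e. R = argmin ||w_b @ R - w_a||_F over orthogonal R.
        u, _, vt = np.linalg.svd(w_b.T @ w_a)
        r = u @ vt
        # Element-wise interpolation between w_a and the rotated w_b.
        return alpha * w_a + (1.0 - alpha) * (w_b @ r)

    # Toy check: w_b is w_a expressed in a rotated basis; alignment recovers it.
    rng = np.random.default_rng(0)
    w_a = rng.normal(size=(256, 256))
    q, _ = np.linalg.qr(rng.normal(size=(256, 256)))  # random orthogonal matrix
    w_b = w_a @ q
    merged = rotate_and_merge(w_a, w_b)
    print(np.linalg.norm(merged - w_a))  # ~0: the rotation was undone before merging
    ```

    In a real network you’d also have to apply consistent rotations or permutations across adjacent layers, which is where the cost comes in.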



  • This doesn’t seem cost-effective for what you’d get.

    I agree, which is why I’m bearish on model merges unless you’re mixing model families (i.e., Mistral + Llama).

    These franken-merges are just interleaving finetunes of the same base model; it’d make more sense to me if they collapsed all the params into a same-sized model via element-wise interpolation. So merging weights makes sense, but running the params in parallel like these X-120B models has no payoff I can see beyond what you’d get by just collapsing the weights (see the interpolation sketch below).
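
    For contrast, the “collapse into a same-sized model” option is just element-wise interpolation of two checkpoints that share an architecture. A hedged sketch, assuming PyTorch state dicts from two finetunes of the same base (the merge_state_dicts helper and the toy tensors are hypothetical):

    ```python
    import torch

    def merge_state_dicts(state_a: dict, state_b: dict, alpha: float = 0.5) -> dict:
        # Only valid when both checkpoints have identical keys and tensor shapes,
        # i.e. they are finetunes of the same base model.
        assert state_a.keys() == state_b.keys()
        return {
            name: alpha * state_a[name] + (1.0 - alpha) * state_b[name]
            for name in state_a
        }

    # Toy demo with random tensors standing in for two finetuned checkpoints.
    a = {"layer.weight": torch.randn(4, 4), "layer.bias": torch.zeros(4)}
    b = {"layer.weight": torch.randn(4, 4), "layer.bias": torch.ones(4)}
    merged = merge_state_dicts(a, b, alpha=0.5)
    print(merged["layer.bias"])  # tensor of 0.5s: a plain element-wise average
    ```

    The result is the same size as either input, unlike the stacked X-120B style merges.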