https://x.com/kylemarieb/status/1728281581306233036
New DeepMind paper just dropped.
Background: Direct Preference Optimization (DPO) is the simpler, more robust, higher-performing successor to RLHF, used in Zephyr, Intel’s new model, and others.
Identity-PO (IPO) simplifies DPO, removing its reliance on Elo-style (Bradley–Terry) preference modeling and the mathematical assumptions that come with it. The authors claim this solves overfitting, which is huge if true.
The trend towards simpler solutions and sounder mathematical grounding in alignment is fun to watch. These inscrutable matrices are looking awfully controllable, and the failure modes of the old methods were things like wedding party collapse.
So that means that we can get even better finetunes in the future? Noice!
It’s better than that, imo, when you look at it in context.
Particularly in light of Intel’s finding the other day that DPO works well (probably better) without preference data.
“Alignment” methods are getting simpler, easier, and more effective.
RLHF was a huge pain: there were a ton of hyperparameters to tweak, and it’s expensive to get human data.
Constitutional AI (RLAIF) dealt with some of the cost and difficulty by using AI-generated preference data, but it still left the need to collect preference data, and all the hyperparameter tweaking, intact.
DPO eliminated the superfluous reward model, simplifying things greatly, and making overfitting less pernicious.
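To make that concrete, here’s a minimal sketch of the DPO objective (my own illustrative PyTorch, not code from any of these papers): the “reward” is implicit in the policy-vs-reference log-prob ratios, so no separate reward model ever gets trained.

```python
import torch
import torch.nn.functional as F

def dpo_loss(pi_chosen_logps, pi_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO: the implicit reward is beta * log(pi/ref), so no reward model is needed."""
    chosen_margin = pi_chosen_logps - ref_chosen_logps        # log-ratio on the preferred completion
    rejected_margin = pi_rejected_logps - ref_rejected_logps  # log-ratio on the rejected completion
    logits = beta * (chosen_margin - rejected_margin)
    return -F.logsigmoid(logits).mean()

# Sanity check: when policy == reference, every margin is 0 and the loss is log(2)
loss = dpo_loss(torch.tensor([-5.0]), torch.tensor([-6.0]),
                torch.tensor([-5.0]), torch.tensor([-6.0]))
```

The whole pipeline collapses to computing per-sequence log-probs under two frozen/trainable copies of the model and taking a binary-cross-entropy-style loss on their difference — that’s the simplification.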
Intel got rid of preference data altogether.
IPO claims to fix overfitting entirely, while simplifying further.
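The change IPO makes is small enough to sketch (again my own illustrative code, with my parameter names): regress the same log-ratio margin toward a fixed target of 1/(2τ) with a squared loss, instead of pushing it through DPO’s log-sigmoid.

```python
import torch

def ipo_loss(pi_chosen_logps, pi_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, tau=0.1):
    """IPO: squared-error regression of the log-ratio margin onto 1/(2*tau).

    No Bradley-Terry assumption is needed, and the loss is minimized at a
    finite margin rather than at infinity -- the claimed overfitting fix.
    """
    margin = ((pi_chosen_logps - ref_chosen_logps)
              - (pi_rejected_logps - ref_rejected_logps))
    return ((margin - 1.0 / (2.0 * tau)) ** 2).mean()
```

Unlike DPO’s log-sigmoid, which always rewards making the margin bigger, this objective stops caring once the margin hits 1/(2τ) — that finite optimum is the intuition behind the no-overfitting claim.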
I figure within a month, Axolotl will grow a flag that means, “and also IPO this,” with no additional cognitive overhead or hyperparameter tuning required, and —yes— the waterline for model quality is going to go up.