ORPO: Preference Optimization without the Supervised Fine-tuning (SFT) Step


A cheaper alignment method performing as well as DPO

Generated with DALL-E

There are now many methods to align large language models (LLMs) with human preferences. Reinforcement learning with human feedback (RLHF) was one of the first and brought us ChatGPT, but RLHF is very costly. DPO, IPO, and KTO are notably cheaper than RLHF as they don't need a reward model.

While DPO and IPO are cheaper, they still require training two different models: one model for the supervised fine-tuning (SFT) step, i.e., training the model to answer instructions, and then the model aligned with human preferences, using the SFT model for initialization and as a reference.

ORPO is yet another new method for LLM alignment, but this one doesn't even need the SFT model. With ORPO, the LLM jointly learns to answer instructions and to align with human preferences.
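Concretely, as described in the ORPO paper cited below, a single training objective adds an odds-ratio penalty on the rejected response to the usual supervised fine-tuning loss (notation follows the paper; λ weights the preference term):

$$
\mathcal{L}_{ORPO} = \mathbb{E}_{(x, y_w, y_l)}\big[\mathcal{L}_{SFT} + \lambda \cdot \mathcal{L}_{OR}\big],
\qquad
\mathcal{L}_{OR} = -\log \sigma\!\left(\log \frac{\mathrm{odds}_\theta(y_w \mid x)}{\mathrm{odds}_\theta(y_l \mid x)}\right),
\qquad
\mathrm{odds}_\theta(y \mid x) = \frac{P_\theta(y \mid x)}{1 - P_\theta(y \mid x)}
$$

where $\mathcal{L}_{SFT}$ is the negative log-likelihood of the chosen response $y_w$, $y_l$ is the rejected response, and $\sigma$ is the sigmoid function. Because the odds ratio only involves the model being trained, no frozen reference model is needed.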

In this article, I explain ORPO and review its performance. I show how to use it to turn Mistral 7B into a chat model using consumer hardware.
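As a preview, here is a minimal sketch of what such a training run can look like with Hugging Face TRL's ORPOTrainer, combined with a QLoRA-style setup (4-bit quantization plus LoRA adapters) so that Mistral 7B fits on a single consumer GPU. The toy dataset, hyperparameters, and exact argument names are illustrative and may differ across library versions; this is not the article's exact recipe.

```python
# A minimal sketch (not the article's exact recipe): ORPO training of Mistral 7B
# with TRL's ORPOTrainer, quantized to 4-bit and trained with LoRA adapters so
# it fits on a consumer GPU. Dataset and hyperparameters are placeholders.
import torch
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import ORPOConfig, ORPOTrainer

model_name = "mistralai/Mistral-7B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Mistral has no pad token by default

# Load the base model in 4-bit (QLoRA-style) to reduce memory usage.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)

# LoRA adapters: only a small set of added weights is trained.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# ORPO expects preference triples: a prompt, a chosen (preferred) answer,
# and a rejected answer. A toy two-example dataset stands in here for a real
# preference dataset.
train_dataset = Dataset.from_dict({
    "prompt": ["What is the capital of France?", "Explain gravity briefly."],
    "chosen": ["The capital of France is Paris.",
               "Gravity is the attractive force between masses."],
    "rejected": ["France has no capital.", "Gravity pushes objects apart."],
})

orpo_args = ORPOConfig(
    output_dir="./mistral-7b-orpo",
    beta=0.1,                       # weight of the odds-ratio (preference) term
    max_length=1024,
    max_prompt_length=512,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=8e-6,
    num_train_epochs=1,
    logging_steps=10,
)

trainer = ORPOTrainer(
    model=model,
    args=orpo_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,            # named processing_class in recent TRL versions
    peft_config=peft_config,
)
trainer.train()
```

Note that, unlike a DPO run, there is no separate SFT checkpoint and no frozen reference model to keep in memory: a single pass over the preference data trains the base model directly.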

ORPO is presented in this paper:

ORPO: Monolithic Preference Optimization without Reference Model
