
Google DeepMind Alignment with Online AI Feedback (OAIF)

source link: https://arxiv.org/abs/2402.04792


[Submitted on 7 Feb 2024]

Direct Language Model Alignment from Online AI Feedback


Direct alignment from preferences (DAP) methods, such as DPO, have recently emerged as efficient alternatives to reinforcement learning from human feedback (RLHF) that do not require a separate reward model. However, the preference datasets used in DAP methods are usually collected ahead of training and never updated, so the feedback is purely offline. Moreover, responses in these datasets are often sampled from a language model distinct from the one being aligned, and since the model evolves over training, the alignment phase is inevitably off-policy. In this study, we posit that online feedback is key and improves DAP methods. Our method, online AI feedback (OAIF), uses an LLM as annotator: on each training iteration, we sample two responses from the current model and prompt the LLM annotator to choose which one is preferred, thus providing online feedback. Despite its simplicity, we demonstrate via human evaluation on several tasks that OAIF outperforms both offline DAP and RLHF methods. We further show that the feedback leveraged in OAIF is easily controllable via instruction prompts to the LLM annotator.
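The abstract describes a simple loop: at every step, sample two responses from the current policy, ask an LLM annotator which it prefers, and run a DAP update (e.g., DPO) on that freshly labeled pair. The Python sketch below illustrates one such step under assumed interfaces. It is not the authors' code: `policy.sample`, `policy.logprob`, and `annotator.prefers_first` are hypothetical placeholders, and the DPO loss is used as a representative DAP objective.

```python
# Minimal sketch of one OAIF training step (hypothetical interfaces, not the paper's code).
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO objective on sequence-level log-probabilities."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -F.logsigmoid(beta * margin).mean()

def oaif_step(policy, ref_policy, annotator, prompts, optimizer, beta=0.1):
    # 1. Sample two responses per prompt from the *current* policy (keeps training on-policy).
    y1 = policy.sample(prompts)
    y2 = policy.sample(prompts)

    # 2. Online AI feedback: the LLM annotator picks the preferred response for each prompt.
    prefer_first = annotator.prefers_first(prompts, y1, y2)
    chosen   = [a if p else b for a, b, p in zip(y1, y2, prefer_first)]
    rejected = [b if p else a for a, b, p in zip(y1, y2, prefer_first)]

    # 3. One DAP (here: DPO) update on the freshly annotated preference pair.
    logp_c = policy.logprob(prompts, chosen)
    logp_r = policy.logprob(prompts, rejected)
    with torch.no_grad():  # the frozen reference policy provides the KL anchor
        ref_c = ref_policy.logprob(prompts, chosen)
        ref_r = ref_policy.logprob(prompts, rejected)

    loss = dpo_loss(logp_c, logp_r, ref_c, ref_r, beta)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the annotator is itself prompted, the controllability claim in the abstract amounts to editing the instruction given to `annotator` (e.g., asking it to prefer shorter or more helpful responses) without changing the training loop.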
Comments: 18 pages, 8 figures, 4 tables
Subjects: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Human-Computer Interaction (cs.HC)
Cite as: arXiv:2402.04792 [cs.AI]
  (or arXiv:2402.04792v1 [cs.AI] for this version)

Submission history

From: Shangmin Guo
[v1] Wed, 7 Feb 2024 12:31:13 UTC (2,947 KB)
