arxiv:2408.15240

Generative Verifiers: Reward Modeling as Next-Token Prediction

Published on Aug 27

Abstract

Verifiers or reward models are often used to enhance the reasoning performance of large language models (LLMs). A common approach is the Best-of-N method, where N candidate solutions generated by the LLM are ranked by a verifier, and the best one is selected. While LLM-based verifiers are typically trained as discriminative classifiers to score solutions, they do not utilize the text generation capabilities of pretrained LLMs. To overcome this limitation, we instead propose training verifiers using the ubiquitous next-token prediction objective, jointly on verification and solution generation. Compared to standard verifiers, such generative verifiers (GenRM) can benefit from several advantages of LLMs: they integrate seamlessly with instruction tuning, enable chain-of-thought reasoning, and can utilize additional inference-time compute via majority voting for better verification. We demonstrate that when using Gemma-based verifiers on algorithmic and grade-school math reasoning tasks, GenRM outperforms discriminative verifiers and LLM-as-a-Judge, showing a 16-64% improvement in the percentage of problems solved with Best-of-N. Furthermore, we show that GenRM scales favorably across dataset size, model capacity, and inference-time compute.
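As a concrete illustration of the Best-of-N setup the abstract describes, here is a minimal sketch; `generate_solution` and `verifier_score` are hypothetical stand-ins for the solution sampler and the verifier, not names from the paper:

```python
from typing import Callable, List

def best_of_n(problem: str, n: int,
              generate_solution: Callable[[str], str],
              verifier_score: Callable[[str, str], float]) -> str:
    """Sample N candidate solutions and return the one the verifier ranks highest."""
    candidates: List[str] = [generate_solution(problem) for _ in range(n)]
    return max(candidates, key=lambda c: verifier_score(problem, c))
```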

Community

Interesting new paper from Google DeepMind showing that fine-tuned, task-specific LLMs used as reward models (GenRM) outperform classification-style reward models. This seems very similar to OpenAI's CriticGPT approach 👀

Implementation:

  1. Train an LLM (GenRM) as a verifier on synthetic CoT data, where the LLM analyzes every “section” of a response to a prompt, labels it “correct” or “incorrect”, and ends with an overall “Yes” or “No” indicating whether the final answer is correct (an illustrative training example is shown after this list).

  2. During RLHF or synthetic-data creation, score candidates with the GenRM model using maj@K: sample K CoT verification responses and read off the token probability of “Yes”/“No” at the verdict position. The overall “score” is the average probability of “Yes” across the K samples (see the scoring sketch below).
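To make step 1 concrete, here is a hypothetical training example; the field names, prompt template, and verification wording are assumptions for illustration, not the paper's exact schema:

```python
# Illustrative GenRM training example (schema and wording are assumptions).
# The target is the full verification CoT ending in a Yes/No verdict, and
# the model is trained on it with the ordinary next-token prediction loss.
train_example = {
    "input": (
        "Q: Natalia sold 48 clips in April and half as many in May. "
        "How many clips did she sell in total?\n"
        "Solution: In May she sold 48 / 2 = 24 clips. Total: 48 + 24 = 72.\n"
        "Let's verify the solution step by step."
    ),
    "target": (
        "Step 1: 48 / 2 = 24 is correct.\n"
        "Step 2: 48 + 24 = 72 is correct.\n"
        "Is the answer correct (Yes/No)? Yes"
    ),
}
```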
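And a minimal sketch of the step 2 scoring, assuming a Hugging Face causal LM; the model name, prompt template, and the `sample_cot` helper are illustrative assumptions, not the paper's exact setup:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "google/gemma-2b"  # assumption: any causal LM checkpoint works here
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def p_yes(prompt: str, solution: str, verification_cot: str) -> float:
    """Probability the model assigns to 'Yes' at the verdict position."""
    text = (f"{prompt}\nSolution: {solution}\n{verification_cot}\n"
            "Is the answer correct (Yes/No)?")
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids=ids).logits[0, -1]  # next-token logits
    probs = torch.softmax(logits, dim=-1)
    # Note: depending on the tokenizer, the verdict token may be " Yes"
    # rather than "Yes"; check how your checkpoint tokenizes it.
    yes_id = tok.encode("Yes", add_special_tokens=False)[0]
    return probs[yes_id].item()

def genrm_maj_at_k(prompt: str, solution: str, sample_cot, k: int = 8) -> float:
    """maj@K: average P('Yes') over K sampled verification rationales.
    `sample_cot` is a hypothetical helper that samples one CoT verification."""
    return sum(
        p_yes(prompt, solution, sample_cot(prompt, solution)) for _ in range(k)
    ) / k
```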

Hidden details:
💡 Includes hyperparameters for successful Gemma2 fine-tuning
🏆 Fine-tuned small GenRMs outperform a big LLM-as-a-Judge model
📈 Keep scaling data to improve performance (up to ~160,000 examples for GenRM)
https://www.linkedin.com/feed/update/urn:li:activity:7235211136025919489/

Hope we get a version of this model out sometime soon
