---
license: cc-by-nc-4.0
library_name: nemo
language:
- en
pipeline_tag: text-generation
inference: false
fine-tuning: true
tags:
- nvidia
- llama2
datasets:
- Anthropic/hh-rlhf
- nvidia/sft_datablend_v1
---

# Llama2-13B-RLHF-RM

## Description:
Llama2-13B-RLHF-RM is a 13-billion-parameter language model (with a context length of up to 4,096 tokens) used as the reward model in training [NV-Llama2-70B-RLHF-Chat](https://huggingface.co/nvidia/NV-Llama2-70B-RLHF-Chat), which achieves 7.59 on MT-Bench and demonstrates strong performance on academic benchmarks.

Starting from the [Llama2-13B base model](https://huggingface.co/meta-llama/Llama-2-13b), it is first instruction-tuned on [NVIDIA SFT Datablend v1](https://huggingface.co/datasets/nvidia/sft_datablend_v1) [^1] and then trained on the [HH-RLHF dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf) with a reward modeling objective. Given a conversation with multiple turns between a user and an assistant, it assigns a single scalar preference score to the last assistant turn.
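As a rough illustration (not part of this repository's API), the sketch below shows the scoring contract: the whole conversation is flattened into one prompt and the model returns a single scalar for the final assistant turn. The chat template and the `score_fn` callable are placeholder assumptions; the actual formatting and inference call come from the NeMo / NeMo-Aligner tooling described below.

```python
from typing import Callable, Dict, List

Turn = Dict[str, str]  # e.g. {"role": "user", "content": "..."}

def format_conversation(turns: List[Turn]) -> str:
    """Flatten user/assistant turns into one prompt string.

    The template here is illustrative only; in practice it must match the
    chat template the reward model was trained with.
    """
    return "\n".join(f"{t['role'].capitalize()}: {t['content']}" for t in turns)

def score_last_turn(turns: List[Turn], score_fn: Callable[[str], float]) -> float:
    """Return a scalar preference score for the final assistant turn.

    `score_fn` stands in for however the deployed checkpoint is queried
    (e.g. a NeMo-Aligner inference server); it is not an API of this model card.
    """
    assert turns[-1]["role"] == "assistant", "the score applies to the last assistant turn"
    return score_fn(format_conversation(turns))

if __name__ == "__main__":
    conversation = [
        {"role": "user", "content": "How do I sort a list of numbers in Python?"},
        {"role": "assistant", "content": "Use the built-in sorted() function, e.g. sorted([3, 1, 2])."},
    ]
    # Stub scorer so the sketch runs standalone; replace with a real inference call.
    print(score_last_turn(conversation, score_fn=lambda prompt: 0.0))
```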

Llama2-13B-RLHF-RM is trained with NVIDIA [NeMo-Aligner](https://github.com/NVIDIA/NeMo-Aligner), a scalable toolkit for performant and efficient model alignment. NeMo-Aligner is built on the [NeMo Framework](https://github.com/NVIDIA/NeMo), which allows training to scale to thousands of GPUs using tensor, data, and pipeline parallelism for all components of alignment. All of our checkpoints are cross-compatible with the NeMo ecosystem, allowing for inference deployment and further customization.


[^1]: As well as ~5k proprietary data points that we are unable to release due to data vendor restrictions.
## Usage:

Training a reward model is an essential component of Reinforcement Learning from Human Feedback (RLHF). By developing a strong reward model, we can mitigate the risks of reward hacking and ensure that the actor is incentivized to produce helpful responses. We are open-sourcing this reward model so that users can seamlessly integrate it with Proximal Policy Optimization (PPO) training using [NeMo-Aligner](https://github.com/NVIDIA/NeMo-Aligner). For detailed instructions on how to conduct the training, please refer to our [RLHF training user guide](https://github.com/NVIDIA/NeMo-Aligner/blob/main/docs/user-guide/RLHF.rst).
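For context on how the scalar reward is consumed during PPO, the sketch below shows the standard RLHF reward shaping used in this kind of pipeline: the reward model's score is applied at the final token of each sampled response, while every token pays a KL penalty that keeps the actor close to the SFT reference policy. Tensor names, shapes, and the KL coefficient are illustrative assumptions, not NeMo-Aligner internals; see the user guide above for the actual configuration.

```python
import torch

def shape_rewards(
    rm_score: torch.Tensor,         # (batch,) scalar reward-model score per sampled response
    logprobs_policy: torch.Tensor,  # (batch, seq) token log-probs under the current actor
    logprobs_ref: torch.Tensor,     # (batch, seq) token log-probs under the frozen SFT reference
    kl_coef: float = 0.02,          # illustrative value; tune per the RLHF user guide
) -> torch.Tensor:
    """Combine the RM score with a per-token KL penalty before PPO.

    This is the generic RLHF formulation, written here only to explain the
    role of the reward model; it is not NeMo-Aligner source code.
    """
    per_token_kl = logprobs_policy - logprobs_ref   # pointwise KL estimate per token
    rewards = -kl_coef * per_token_kl               # penalty at every generated token
    rewards[:, -1] = rewards[:, -1] + rm_score      # RM score granted at the final token
    return rewards

# Dummy example: batch of 2 responses, 5 generated tokens each.
rewards = shape_rewards(
    rm_score=torch.tensor([1.3, -0.4]),
    logprobs_policy=torch.randn(2, 5),
    logprobs_ref=torch.randn(2, 5),
)
print(rewards.shape)  # torch.Size([2, 5])
```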