RuiyangSun committed on
Commit 588a9a4
1 Parent(s): 0e42156

docs: update readme

Files changed (1)
  1. README.md +84 -1
README.md CHANGED
@@ -1,3 +1,86 @@
  ---
- license: apache-2.0
+ datasets:
+ - PKU-Alignment/PKU-SafeRLHF
+ language:
+ - en
+ tags:
+ - reinforcement-learning-from-human-feedback
+ - reinforcement-learning
+ - beaver
+ - safety
+ - llama
+ - ai-safety
+ - deepspeed
+ - rlhf
+ - alpaca
+ library_name: safe-rlhf
  ---
+
+ # 🦫 Beaver's Cost Model
+
+ ## Model Details
+
+ The Beaver Cost Model is a preference model trained on the [PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset.
+ It provides the safety (cost) signal in the Safe RLHF algorithm, helping the Beaver model become safer and more harmless (see the illustrative sketch below).
+
+ - **Developed by:** the [PKU-Alignment](https://github.com/PKU-Alignment) Team.
+ - **Model Type:** An auto-regressive language model based on the transformer architecture.
+ - **License:** Non-commercial license.
+ - **Fine-tuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca).
+
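+ Concretely, the cost model maps a prompt–response pair to a scalar cost, and the pairwise annotations in PKU-SafeRLHF let training push the more harmful response of each pair toward the higher cost. The sketch below only illustrates that pairwise objective; the function and variable names are ours for illustration, not part of the safe-rlhf codebase, and the exact training loss is specified in the forthcoming paper.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def pairwise_cost_loss(cost_safer: torch.Tensor, cost_harmful: torch.Tensor) -> torch.Tensor:
+     """Illustrative pairwise comparison loss: -log sigmoid(cost_harmful - cost_safer).
+
+     Minimizing it pushes the harmful response's cost above the safe one's.
+     """
+     return -F.logsigmoid(cost_harmful - cost_safer).mean()
+
+ # Toy end-of-sequence costs for two annotated response pairs.
+ cost_safer = torch.tensor([-11.3, -6.5])
+ cost_harmful = torch.tensor([4.2, 9.1])
+ print(pairwise_cost_loss(cost_safer, cost_harmful))  # small, since both pairs are already correctly ordered
+ ```
+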
+ ## Model Sources
+
+ - **Repository:** <https://github.com/PKU-Alignment/safe-rlhf>
+ - **Beaver:** <https://huggingface.co/PKU-Alignment/beaver-7b-v1.0>
+ - **Dataset:** <https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF>
+ - **Reward Model:** <https://huggingface.co/PKU-Alignment/beaver-7b-v1.0-reward>
+ - **Cost Model:** <https://huggingface.co/PKU-Alignment/beaver-7b-v1.0-cost>
+ - **Paper:** *Coming soon...*
+
+ ## How to Use the Cost Model
+
+ ```python
+ from transformers import AutoTokenizer
+ from safe_rlhf.models import AutoModelForScore
+
+ # Load the score-head model and tokenizer from the Hugging Face Hub.
+ model = AutoModelForScore.from_pretrained('PKU-Alignment/beaver-7b-v1.0-cost', device_map='auto')
+ tokenizer = AutoTokenizer.from_pretrained('PKU-Alignment/beaver-7b-v1.0-cost', use_fast=False)
+
+ # Score a conversation formatted with the template used during training.
+ input_text = 'BEGINNING OF CONVERSATION: USER: hello ASSISTANT:Hello! How can I help you today?'
+
+ input_ids = tokenizer(input_text, return_tensors='pt').to(model.device)
+ output = model(**input_ids)
+ print(output)
+
+ # ScoreModelOutput(
+ #     scores=tensor([[[-19.6476],
+ #                     [-20.2238],
+ #                     [-21.4228],
+ #                     [-19.2506],
+ #                     [-20.2728],
+ #                     [-23.8799],
+ #                     [-22.6898],
+ #                     [-21.5825],
+ #                     [-21.0855],
+ #                     [-20.2068],
+ #                     [-23.8296],
+ #                     [-21.4940],
+ #                     [-21.9484],
+ #                     [-13.1220],
+ #                     [ -6.4499],
+ #                     [ -8.1982],
+ #                     [ -7.2492],
+ #                     [ -9.3377],
+ #                     [-13.5010],
+ #                     [-10.4932],
+ #                     [ -9.7837],
+ #                     [ -6.4540],
+ #                     [ -6.0084],
+ #                     [ -5.8093],
+ #                     [ -6.6134],
+ #                     [ -5.8995],
+ #                     [ -9.1505],
+ #                     [-11.3254]]], grad_fn=<ToCopyBackward0>),
+ #     end_scores=tensor([[-11.3254]], grad_fn=<ToCopyBackward0>)
+ # )
+ ```
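+
+ The per-token `scores` trace the model's running cost estimate across the sequence, while `end_scores` is the single cost assigned at the final token. Under the Safe RLHF convention, a higher cost indicates a more harmful response, so when comparing candidate responses the lower-cost one is the safer choice. The helper below is our own convenience wrapper, not part of the safe-rlhf API:
+
+ ```python
+ import torch
+
+ def get_cost(conversation: str) -> float:
+     """Return the scalar cost the model assigns to a formatted conversation string."""
+     encoded = tokenizer(conversation, return_tensors='pt').to(model.device)
+     with torch.no_grad():  # inference only; no gradients needed
+         return model(**encoded).end_scores.item()
+
+ print(get_cost('BEGINNING OF CONVERSATION: USER: hello '
+                'ASSISTANT:Hello! How can I help you today?'))  # ≈ -11.33, matching end_scores above
+ ```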