---
datasets:
metrics:
- sacrebleu
pipeline_tag: text2text-generation
---

### Large-Scale Pre-Training for Goal-Directed Dialog (GODEL)

GODEL is a large-scale pre-trained model for goal-directed dialogs. It is parameterized with a Transformer-based encoder-decoder model and trained for response generation grounded in external text, which allows more effective fine-tuning on dialog tasks that require conditioning the response on information external to the current conversation (e.g., a retrieved document). The pre-trained model can be efficiently fine-tuned and adapted to accomplish a new dialog task with a handful of task-specific dialogs. The v1.1 model is trained on 551M multi-turn dialogs from Reddit discussion threads and 5M instruction- and knowledge-grounded dialogs.

##### Multi-turn generation examples from an interactive environment:
Chitchat example:
> Instruction: given a dialog context, you need to response empathically. <br>
> User: Does money buy happiness? <br>
> Agent: It is a question. Money buys you a lot of things, but not enough to buy happiness. <br>
> User: What is the best way to buy happiness? <br>
> Agent: Happiness is bought through your experience and not money. <br>

Grounded response generation example:
> Instruction: given a dialog context and related knowledge, you need to response safely based on the knowledge. <br>
> Knowledge: The best Stardew Valley mods PCGamesN_0 / About SMAPI <br>
> User: My favorite game is stardew valley. stardew valley is very fun. <br>
> Agent: I love Stardew Valley mods, like PCGamesN_0 / About SMAPI. <br>
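
Both examples are driven by the same flat input format: the instruction string, a `[CONTEXT]` tag followed by the dialog history joined with ` EOS `, and, when grounding is used, a passage prefixed with `[KNOWLEDGE]`. As a minimal sketch, this is the flattened query for the chitchat example above; the `generate` helper in the "How to use" section below builds exactly this string:

```python
# Flattened model input for the chitchat example above, mirroring the
# construction inside the generate() helper shown later in this card.
instruction = 'Instruction: given a dialog context, you need to response empathically.'
dialog = [
    'Does money buy happiness?',
    'It is a question. Money buys you a lot of things, but not enough to buy happiness.',
    'What is the best way to buy happiness?'
]
knowledge = ''  # chitchat: no grounding passage
query = f"{instruction} [CONTEXT] {' EOS '.join(dialog)} {knowledge}"
print(query)
```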

Please find information about preprocessing, training, and full details of GODEL on the [project webpage](https://aka.ms/GODEL).

ArXiv paper: [https://arxiv.org/abs/2206.11309](https://arxiv.org/abs/2206.11309)

### How to use

Now we are ready to try out how the model works as a chatting partner!

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/GODEL-v1_1-base-seq2seq")
model = AutoModelForSeq2SeqLM.from_pretrained("microsoft/GODEL-v1_1-base-seq2seq")

def generate(instruction, knowledge, dialog):
    # Tag the grounding passage so the model can tell it apart from the dialog.
    if knowledge != '':
        knowledge = '[KNOWLEDGE] ' + knowledge
    # Flatten the dialog turns into a single string separated by EOS markers.
    dialog = ' EOS '.join(dialog)
    query = f"{instruction} [CONTEXT] {dialog} {knowledge}"
    input_ids = tokenizer(query, return_tensors="pt").input_ids
    outputs = model.generate(input_ids, max_length=128, min_length=8, top_p=0.9, do_sample=True)
    output = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return output

# Instruction for a chitchat task
instruction = 'Instruction: given a dialog context, you need to response empathically.'
# Leave the knowledge empty
knowledge = ''
dialog = [
    'Does money buy happiness?',
    'It is a question. Money buys you a lot of things, but not enough to buy happiness.',
    'What is the best way to buy happiness?'
]
response = generate(instruction, knowledge, dialog)
print(response)
```
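
The same helper covers the grounded case: pass the grounding passage as `knowledge` and it is prepended with the `[KNOWLEDGE]` tag. Here is a minimal sketch reusing `generate` from the block above with the Stardew Valley example from earlier in this card (sampling is enabled, so the exact output will vary from run to run):

```python
# Grounded generation: reuse generate() from above with a knowledge passage.
instruction = 'Instruction: given a dialog context and related knowledge, you need to response safely based on the knowledge.'
knowledge = 'The best Stardew Valley mods PCGamesN_0 / About SMAPI'
dialog = [
    'My favorite game is stardew valley. stardew valley is very fun.'
]
response = generate(instruction, knowledge, dialog)
print(response)
```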