---
license: apache-2.0
datasets:
- databricks/databricks-dolly-15k
language:
- en
metrics:
- rouge
base_model:
- facebook/opt-1.3b
pipeline_tag: text-generation
---
# MiniLLM-OPT-1.3B

[paper](https://arxiv.org/abs/2306.08543) | [code](https://github.com/microsoft/LMOps/tree/main/minillm)

**MiniLLM-OPT-1.3B** is an OPT-1.3B model distilled from [OPT-13B](https://huggingface.co/MiniLLM/teacher-OPT-13B) on [databricks-dolly-15k](https://huggingface.co/datasets/aisquared/databricks-dolly-15k).

<p align='left'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/7hBWGZzYMJihCRQ70XoiQ.png" width="1000">
</p>

**Note**: MiniLLM requires an [SFT model](https://huggingface.co/MiniLLM/init-opt-1.3B) for initialization to perform the PPO optimization.
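Since this is a standard OPT-architecture causal LM, it should load with 🤗 Transformers. Below is a minimal generation sketch, not an official usage snippet: the repo ID `MiniLLM/MiniLLM-OPT-1.3B` (inferred from the baseline links below) and the decoding settings are assumptions.

```
# Minimal generation sketch; "MiniLLM/MiniLLM-OPT-1.3B" is an assumed repo ID.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniLLM/MiniLLM-OPT-1.3B"  # assumption, see lead-in
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Give three tips for staying healthy."
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a response; decoding hyperparameters here are illustrative defaults.
output_ids = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    top_p=0.9,
    temperature=0.7,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```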
## Evaluation

We ask GPT-4 to score the responses generated by MiniLLM. The evaluation prompts are taken from [databricks-dolly-15k](https://huggingface.co/datasets/aisquared/databricks-dolly-15k) (test set), [self-instruct](https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json), and [vicuna](https://github.com/lm-sys/vicuna-blog-eval).

<p align='left'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/rDXnaDbKH5mBYAmqGC-_a.png" width="1000">
</p>
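The front matter also lists ROUGE as a metric. As a sketch only, generated responses can be scored against dataset references roughly as follows; the `rouge_score` package and the example strings are assumptions, not the paper's evaluation harness.

```
# Sketch of ROUGE-L scoring with the rouge_score package (harness choice is
# an assumption; the card's metadata lists "rouge" as the metric).
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
reference = "Knowledge distillation transfers knowledge from a large teacher model to a small student."
prediction = "Distillation trains a small student model to imitate a large teacher."

# score() returns a dict of Score tuples with precision, recall, and F1.
score = scorer.score(reference, prediction)["rougeL"]
print(f"ROUGE-L F1: {score.fmeasure:.3f}")
```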
## Baseline Models
+ [SFT w/o KD](https://huggingface.co/MiniLLM/SFT-opt-1.3B)
+ [KD](https://huggingface.co/MiniLLM/KD-opt-1.3B)
+ [SeqKD](https://huggingface.co/MiniLLM/SeqKD-opt-1.3B)

## Citation
```
@inproceedings{minillm,
  title={MiniLLM: Knowledge Distillation of Large Language Models},
  author={Gu, Yuxian and Dong, Li and Wei, Furu and Huang, Minlie},
  booktitle={Proceedings of ICLR},
  year={2024}
}
```