---
language:
- zh
tags:
- generative language model
license: mit
datasets:
- 100GB Chinese corpus
---

# CPM-Generate

## Model description

CPM (Chinese Pre-trained Language Model) is a Transformer-based autoregressive language model with 2.6 billion parameters, trained on 100GB of Chinese data. To the best of our knowledge, CPM is the largest Chinese pre-trained language model, and it can facilitate downstream Chinese NLP tasks such as conversation, essay generation, cloze test, and language understanding. [[Project](https://cpm.baai.ac.cn)] [[Model](https://cpm.baai.ac.cn/download.html)] [[Paper](https://arxiv.org/abs/2012.00413)]

## Intended uses & limitations

#### How to use

```python
from transformers import TextGenerationPipeline, AutoTokenizer, AutoModelWithLMHead

# Load the CPM tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("TsinghuaAI/CPM-Generate")
model = AutoModelWithLMHead.from_pretrained("TsinghuaAI/CPM-Generate")

# Generate a continuation of a Chinese prompt with nucleus sampling
text_generator = TextGenerationPipeline(model, tokenizer)
text_generator('清华大学', max_length=50, do_sample=True, top_p=0.9)
```
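
If you prefer to call the model directly instead of going through the pipeline, the following sketch uses the standard `generate` API with the same decoding parameters as above. The explicit `add_special_tokens=False` and the extra `jieba`/`sentencepiece` packages the CPM tokenizer may require are our assumptions, not requirements documented in this card.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("TsinghuaAI/CPM-Generate")
model = AutoModelWithLMHead.from_pretrained("TsinghuaAI/CPM-Generate")

# Encode the prompt, sample a continuation, and decode it back to text
inputs = tokenizer('清华大学', return_tensors='pt', add_special_tokens=False)
outputs = model.generate(
    inputs['input_ids'],
    max_length=50,
    do_sample=True,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0]))
```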

#### Limitations and bias

The text generated by CPM is produced automatically by a neural network model trained on a large number of texts, and it does not represent our official attitudes or preferences. The generated text is to be used only for technical and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it; instead, contact us and we will deal with it promptly.

## Training data

We collect different kinds of texts for pre-training, including encyclopedia articles, news, novels, and Q&A. The composition of our training data (roughly 100GB in total) is shown below.

| Data Source | Encyclopedia | Webpage | Story | News  | Dialog |
| ----------- | ------------ | ------- | ----- | ----- | ------ |
| **Size**    | ~40GB        | ~39GB   | ~10GB | ~10GB | ~1GB   |

## Training procedure

Based on a hyper-parameter search over the learning rate and batch size, we set the learning rate to $1.5\times10^{-4}$ and the batch size to $3,072$, which makes model training more stable. In the first version we still adopt dense attention, with a maximum sequence length of $1,024$; we will implement sparse attention in the future. We pre-train our model for $20,000$ steps, the first $5,000$ of which are warm-up steps. The optimizer is Adam. Training our largest model takes two weeks on $64$ NVIDIA V100 GPUs.
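
For readers who want to reproduce a comparable schedule, the snippet below translates these hyper-parameters into a minimal PyTorch sketch, assuming a plain Adam optimizer with linear warm-up and a constant learning rate afterwards (the card does not specify the post-warm-up decay). It is an illustration only, not the authors' actual training code.

```python
import torch

# Hyper-parameters reported in this card; the model is a stand-in placeholder
LEARNING_RATE = 1.5e-4
BATCH_SIZE = 3072        # sequences per optimization step
MAX_SEQ_LEN = 1024       # dense attention in this first version
TOTAL_STEPS = 20_000
WARMUP_STEPS = 5_000

model = torch.nn.Linear(8, 8)  # placeholder for the real 2.6B-parameter model

optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)

# Linear warm-up over the first 5,000 steps, then a constant learning rate
# (constant is an assumption; the card does not state the decay schedule)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer,
    lr_lambda=lambda step: min(1.0, (step + 1) / WARMUP_STEPS),
)
```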

## Eval results

|            | $n_{\text{param}}$ | $n_{\text{layers}}$ | $d_{\text{model}}$ | $n_{\text{heads}}$ | $d_{\text{head}}$ |
|------------|-------------------:|--------------------:|-------------------:|-------------------:|------------------:|
| CPM-Small  | 109M               | 12                  | 768                | 12                 | 64                |
| CPM-Medium | 334M               | 24                  | 1,024              | 16                 | 64                |
| CPM-Large  | 2.6B               | 32                  | 2,560              | 32                 | 80                |
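
As a rough sanity check of the sizes above, the usual GPT-style estimate $12 \cdot n_{\text{layers}} \cdot d_{\text{model}}^2$ for the Transformer blocks plus a $|V| \cdot d_{\text{model}}$ embedding matrix comes close to the reported counts. The vocabulary size of 30,000 below is an assumed value for illustration, not a number taken from this card.

```python
# Rough GPT-style parameter estimate: 12 * n_layers * d_model^2 for the
# Transformer blocks, plus vocab_size * d_model for the token embeddings.
VOCAB_SIZE = 30_000  # assumed vocabulary size, for illustration only

configs = {
    "CPM-Small":  (12, 768),
    "CPM-Medium": (24, 1024),
    "CPM-Large":  (32, 2560),
}

for name, (n_layers, d_model) in configs.items():
    n_params = 12 * n_layers * d_model**2 + VOCAB_SIZE * d_model
    print(f"{name}: ~{n_params / 1e6:.0f}M parameters")
```

Under these assumptions the estimate gives roughly 108M, 333M, and 2.59B parameters, within a few percent of the reported 109M, 334M, and 2.6B.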

We evaluate CPM with different numbers of parameters (detailed above) on various Chinese NLP tasks in few-shot (and even zero-shot) settings. As the number of parameters grows, CPM performs better on most datasets, indicating that larger models are more proficient at language generation and language understanding. We report results for text classification, the Chinese idiom cloze test, and short-text conversation generation below; please refer to our [paper](https://arxiv.org/abs/2012.00413) for more detailed results.

### Zero-shot performance on text classification tasks

|            |     TNEWS      |    IFLYTEK     |     OCNLI      |
| ---------- | :------------: | :------------: | :------------: |
| CPM-Small  |     0.626      |     0.584      |     0.378      |
| CPM-Medium |     0.618      |     0.635      |     0.379      |
| CPM-Large  |   **0.703**    |   **0.708**    |   **0.442**    |
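
Zero-shot classification with a generative model is typically done by scoring candidate labels rather than by training a classifier head. The sketch below shows one common recipe: append each label to a prompt and pick the label whose tokens receive the highest average log-likelihood under the model. The prompt template and candidate labels are hypothetical, and this is not necessarily the exact evaluation protocol used in the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("TsinghuaAI/CPM-Generate")
model = AutoModelWithLMHead.from_pretrained("TsinghuaAI/CPM-Generate")
model.eval()

def label_score(text, label):
    """Average log-likelihood of `label` as a continuation of a simple prompt."""
    prompt_ids = tokenizer(f"{text}这篇文章的类别是", return_tensors="pt",
                           add_special_tokens=False)["input_ids"]
    label_ids = tokenizer(label, return_tensors="pt",
                          add_special_tokens=False)["input_ids"]
    input_ids = torch.cat([prompt_ids, label_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Log-probability of each label token, conditioned on everything before it
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    label_positions = range(prompt_ids.shape[1] - 1, input_ids.shape[1] - 1)
    token_scores = [log_probs[pos, input_ids[0, pos + 1]] for pos in label_positions]
    return torch.stack(token_scores).mean().item()

# Hypothetical labels and input, for illustration only
labels = ["体育", "科技", "财经"]
text = "清华大学的研究团队发布了新的预训练语言模型。"
print(max(labels, key=lambda l: label_score(text, l)))
```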

### Performance on Chinese Idiom Cloze (ChID) dataset

|            |   Supervised   |  Unsupervised  |
|------------|:--------------:|:--------------:|
| CPM-Small  |     0.657      |     0.433      |
| CPM-Medium |     0.695      |     0.524      |
| CPM-Large  |   **0.804**    |   **0.685**    |

### Performance on Short Text Conversation Generation (STC) dataset

|                           |    Average     |    Extrema     |     Greedy     |          Dist-1           |           Dist-2           |
|---------------------------|:--------------:|:--------------:|:--------------:|:-------------------------:|:--------------------------:|
| *Few-shot (Unsupervised)* |                |                |                |                           |                            |
| CDial-GPT                 |     0.899      |     0.797      |     0.810      |     1,963 / **0.011**     |       20,814 / 0.126       |
| CPM-Large                 |   **0.928**    |   **0.805**    |   **0.815**    |     **3,229** / 0.007     |   **68,008** / **0.154**   |
| *Supervised*              |                |                |                |                           |                            |
| CDial-GPT                 |     0.933      |   **0.814**    |   **0.826**    |       2,468 / 0.008       |       35,634 / 0.127       |
| CPM-Large                 |   **0.934**    |     0.810      |     0.819      |   **3,352** / **0.011**   |   **67,310** / **0.233**   |
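
The Dist-1 and Dist-2 cells above report a count of distinct n-grams together with a distinct-to-total ratio. The sketch below shows a standard way to compute both from tokenized responses; it is our illustration rather than the paper's evaluation script.

```python
def distinct_n(responses, n):
    """Return (number of distinct n-grams, distinct / total n-gram ratio)."""
    ngrams = []
    for tokens in responses:
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    if not ngrams:
        return 0, 0.0
    distinct = len(set(ngrams))
    return distinct, distinct / len(ngrams)

# Toy example with whitespace-tokenized responses
responses = [resp.split() for resp in ["我 喜欢 清华 大学", "我 喜欢 北京"]]
print(distinct_n(responses, 1))  # distinct unigrams and their ratio
print(distinct_n(responses, 2))  # distinct bigrams and their ratio
```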

### BibTeX entry and citation info

```bibtex
@article{cpm-v1,
  title={CPM: A Large-scale Generative Chinese Pre-trained Language Model},
  author={Zhang, Zhengyan and Han, Xu and Zhou, Hao and Ke, Pei and Gu, Yuxian and Ye, Deming and Qin, Yujia and Su, Yusheng and Ji, Haozhe and Guan, Jian and Qi, Fanchao and Wang, Xiaozhi and Zheng, Yanan and Zeng, Guoyang and Cao, Huanqi and Chen, Shengqi and Li, Daixuan and Sun, Zhenbo and Liu, Zhiyuan and Huang, Minlie and Han, Wentao and Tang, Jie and Li, Juanzi and Sun, Maosong},
  journal={arXiv preprint arXiv:2012.00413},
  year={2020}
}
```