uer committed
Commit e6b6048
1 parent: ae16132

Update README.md

Files changed (1): README.md (+60 −81)
@@ -7,41 +7,15 @@ widget:

 ---
-# Chinese Whole Word Masking RoBERTa Miniatures

 ## Model description

-This is the set of 6 Chinese Whole Word Masking RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658). The models can also be pre-trained with [TencentPretrain](https://github.com/Tencent/TencentPretrain), introduced in [this paper](https://arxiv.org/abs/2212.06385), which inherits UER-py to support models with over one billion parameters and extends it to a multimodal pre-training framework.

-[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 6 Chinese Whole Word Masking RoBERTa models. To make the results easy to reproduce, we used a publicly available corpus and word segmentation tool, and provide all training details.

-You can download the 6 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
-
-|            | Link                           |
-| ---------- | :----------------------------: |
-| **Tiny**   | [**2/128 (Tiny)**][2_128]      |
-| **Mini**   | [**4/256 (Mini)**][4_256]      |
-| **Small**  | [**4/512 (Small)**][4_512]     |
-| **Medium** | [**8/512 (Medium)**][8_512]    |
-| **Base**   | [**12/768 (Base)**][12_768]    |
-| **Large**  | [**24/1024 (Large)**][24_1024] |
-
-Here are the scores on the development set of six Chinese tasks:
-
-| Model              | Score | book_review | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
-| ------------------ | :---: | :---------: | :----------: | :---: | :---------: | :-----------: | :---------: |
-| RoBERTa-Tiny-WWM   | 72.2  | 83.7        | 91.8         | 81.8  | 62.1        | 55.4          | 58.6        |
-| RoBERTa-Mini-WWM   | 76.3  | 86.4        | 93.0         | 86.8  | 64.4        | 58.7          | 68.8        |
-| RoBERTa-Small-WWM  | 77.6  | 88.1        | 93.8         | 87.2  | 65.2        | 59.6          | 71.4        |
-| RoBERTa-Medium-WWM | 78.6  | 89.3        | 94.4         | 88.8  | 66.0        | 59.9          | 73.2        |
-| RoBERTa-Base-WWM   | 80.2  | 90.6        | 95.8         | 89.4  | 67.5        | 61.8          | 76.2        |
-| RoBERTa-Large-WWM  | 81.1  | 91.1        | 95.8         | 90.0  | 68.5        | 62.1          | 79.1        |
-
-For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with a sequence length of 128:
-
-- epochs: 3, 5, 8
-- batch sizes: 32, 64
-- learning rates: 3e-5, 1e-4, 3e-4

 ## How to use
@@ -49,41 +23,40 @@ You can use this model directly with a pipeline for masked language modeling:

 ```python
 >>> from transformers import pipeline
->>> unmasker = pipeline('fill-mask', model='uer/roberta-tiny-wwm-chinese-cluecorpussmall')
 >>> unmasker("北京是[MASK]国的首都。")
 [
-    {'score': 0.294228732585907,
-     'token': 704,
-     'token_str': '中',
-     'sequence': '北 京 是 中 国 的 首 都 。'},
-    {'score': 0.19691626727581024,
-     'token': 1266,
-     'token_str': '',
-     'sequence': '北 京 是 国 的 首 都 。'},
-    {'score': 0.1070084273815155,
-     'token': 7506,
-     'token_str': '',
-     'sequence': '北 京 是 国 的 首 都 。'},
-    {'score': 0.031527262181043625,
-     'token': 2769,
-     'token_str': '',
-     'sequence': '北 京 是 国 的 首 都 。'},
-    {'score': 0.023054633289575577,
-     'token': 1298,
-     'token_str': '',
-     'sequence': '北 京 是 国 的 首 都 。'}
 ]
 ```

 Here is how to use this model to get the features of a given text in PyTorch:

 ```python
 from transformers import BertTokenizer, BertModel
-tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall')
-model = BertModel.from_pretrained("uer/roberta-base-wwm-chinese-cluecorpussmall")
 text = "用你喜欢的任何文本替换我。"
 encoded_input = tokenizer(text, return_tensors='pt')
 output = model(**encoded_input)
@@ -93,8 +66,8 @@ and in TensorFlow:

 ```python
 from transformers import BertTokenizer, TFBertModel
-tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall')
-model = TFBertModel.from_pretrained("uer/roberta-base-wwm-chinese-cluecorpussmall")
 text = "用你喜欢的任何文本替换我。"
 encoded_input = tokenizer(text, return_tensors='tf')
 output = model(encoded_input)
@@ -106,12 +79,10 @@ output = model(encoded_input)

 ## Training procedure

-Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train for 1,000,000 steps with a sequence length of 128 and then pre-train for 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.

 [jieba](https://github.com/fxsjy/jieba) is used as the word segmentation tool.

-Taking Whole Word Masking RoBERTa-Medium as an example:
-
 Stage1:

 ```
@@ -123,17 +94,24 @@ python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
 ```

 ```
-python3 pretrain.py --dataset_path cluecorpussmall_word_seq128_dataset.pt \
 --vocab_path models/google_zh_vocab.txt \
---config_path models/bert/medium_config.json \
---output_model_path models/cluecorpussmall_wwm_roberta_medium_seq128_model.bin \
 --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
---total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
---learning_rate 1e-4 --batch_size 64 \
---whole_word_masking \
 --data_processor mlm --target mlm
 ```
 Stage2:

 ```
@@ -145,24 +123,31 @@ python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
 ```

 ```
-python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
 --vocab_path models/google_zh_vocab.txt \
---pretrained_model_path models/cluecorpussmall_wwm_roberta_medium_seq128_model.bin-1000000 \
---config_path models/bert/medium_config.json \
---output_model_path models/cluecorpussmall_wwm_roberta_medium_seq512_model.bin \
 --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
---total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
---learning_rate 5e-5 --batch_size 16 \
---whole_word_masking \
 --data_processor mlm --target mlm
 ```
 Finally, we convert the pre-trained model into Hugging Face's format:

 ```
-python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_wwm_roberta_medium_seq512_model.bin \
 --output_model_path pytorch_model.bin \
---layers_num 8 --type mlm
 ```

 ### BibTeX entry and citation info
@@ -182,11 +167,5 @@ python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path model
 journal={ACL 2023},
 pages={217},
 year={2023}
 ```
-
-[2_128]:https://huggingface.co/uer/roberta-tiny-wwm-chinese-cluecorpussmall
-[4_256]:https://huggingface.co/uer/roberta-mini-wwm-chinese-cluecorpussmall
-[4_512]:https://huggingface.co/uer/roberta-small-wwm-chinese-cluecorpussmall
-[8_512]:https://huggingface.co/uer/roberta-medium-wwm-chinese-cluecorpussmall
-[12_768]:https://huggingface.co/uer/roberta-base-wwm-chinese-cluecorpussmall
-[24_1024]:https://huggingface.co/uer/roberta-large-wwm-chinese-cluecorpussmall
 ---
+# Chinese Xlarge Whole Word Masking RoBERTa Model

 ## Model description

+This is an xlarge Chinese Whole Word Masking RoBERTa model pre-trained by [TencentPretrain](https://github.com/Tencent/TencentPretrain), introduced in [this paper](https://arxiv.org/abs/2212.06385), which inherits [UER-py](https://github.com/dbiir/UER-py/) to support models with over one billion parameters and extends it to a multimodal pre-training framework.

+[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the xlarge Chinese Whole Word Masking RoBERTa model. To make the results easy to reproduce, we used a publicly available corpus and word segmentation tool, and provide all training details.

+You can download the model either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the link [roberta-xlarge-wwm-chinese-cluecorpussmall](https://huggingface.co/uer/roberta-xlarge-wwm-chinese-cluecorpussmall).

 ## How to use

 You can use this model directly with a pipeline for masked language modeling:
 ```python
 >>> from transformers import pipeline
+>>> unmasker = pipeline('fill-mask', model='uer/roberta-xlarge-wwm-chinese-cluecorpussmall')
 >>> unmasker("北京是[MASK]国的首都。")
 [
+    {'score': 0.9298505783081055,
+     'token': 704,
+     'token_str': '中',
+     'sequence': '北 京 是 中 国 的 首 都 。'},
+    {'score': 0.05041525512933731,
+     'token': 2769,
+     'token_str': '',
+     'sequence': '北 京 是 国 的 首 都 。'},
+    {'score': 0.004921116400510073,
+     'token': 4862,
+     'token_str': '',
+     'sequence': '北 京 是 国 的 首 都 。'},
+    {'score': 0.0020684923510998487,
+     'token': 3696,
+     'token_str': '',
+     'sequence': '北 京 是 国 的 首 都 。'},
+    {'score': 0.0018144999630749226,
+     'token': 3926,
+     'token_str': '',
+     'sequence': '北 京 是 国 的 首 都 。'}
 ]
 ```
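The `score` field is the probability the model assigns to each candidate token at the `[MASK]` position, i.e. a softmax over the vocabulary logits. A minimal stdlib sketch of that normalization (the logit values below are invented for illustration, not taken from this model):

```python
import math

def softmax(logits):
    """Turn raw logits into probabilities that sum to 1."""
    m = max(logits)                          # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for five candidate tokens at the [MASK] position.
logits = [9.2, 6.3, 4.0, 3.1, 3.0]
probs = softmax(logits)

# The fill-mask pipeline reports candidates sorted by descending probability.
assert probs == sorted(probs, reverse=True)
```

A top-1 probability far above the rest, as in the 0.93 for 中 above, is typical of a large model on an easy cloze.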
 Here is how to use this model to get the features of a given text in PyTorch:

 ```python
 from transformers import BertTokenizer, BertModel
+tokenizer = BertTokenizer.from_pretrained('uer/roberta-xlarge-wwm-chinese-cluecorpussmall')
+model = BertModel.from_pretrained("uer/roberta-xlarge-wwm-chinese-cluecorpussmall")
 text = "用你喜欢的任何文本替换我。"
 encoded_input = tokenizer(text, return_tensors='pt')
 output = model(**encoded_input)
 ```

 and in TensorFlow:
 ```python
 from transformers import BertTokenizer, TFBertModel
+tokenizer = BertTokenizer.from_pretrained('uer/roberta-xlarge-wwm-chinese-cluecorpussmall')
+model = TFBertModel.from_pretrained("uer/roberta-xlarge-wwm-chinese-cluecorpussmall")
 text = "用你喜欢的任何文本替换我。"
 encoded_input = tokenizer(text, return_tensors='tf')
 output = model(encoded_input)
 ```
 ## Training procedure

+Models are pre-trained by [TencentPretrain](https://github.com/Tencent/TencentPretrain) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train for 500,000 steps with a sequence length of 128 and then pre-train for 250,000 additional steps with a sequence length of 512.

 [jieba](https://github.com/fxsjy/jieba) is used as the word segmentation tool.

 Stage1:

 ```
 python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
 ```
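The `--whole_word_masking` option used below changes how `[MASK]` targets are chosen: instead of masking individual characters, all sub-tokens of a word (as segmented by jieba) are masked together. A simplified pure-Python sketch of the idea (`whole_word_mask` is a hypothetical helper for illustration, not UER-py/TencentPretrain's actual implementation):

```python
import random

def whole_word_mask(words, mask_ratio=0.15, seed=0):
    """Pick whole words, then mask every sub-token of each picked word.

    `words` is a list of sub-token lists, e.g. the output of running a
    word segmenter (like jieba) and then a subword tokenizer per word.
    """
    rng = random.Random(seed)
    n_to_mask = max(1, int(len(words) * mask_ratio))
    picked = set(rng.sample(range(len(words)), n_to_mask))
    tokens = []
    for i, pieces in enumerate(words):
        if i in picked:
            tokens.extend(["[MASK]"] * len(pieces))  # mask all pieces of the word
        else:
            tokens.extend(pieces)
    return tokens

# Example: "北京 是 中国 的 首都" segmented into words of single-character pieces.
words = [["北", "京"], ["是"], ["中", "国"], ["的"], ["首", "都"]]
masked = whole_word_mask(words)
```

With plain character-level masking, 中 could be masked while 国 stays visible, which makes the cloze trivially easy; masking the whole word 中国 forces the model to use wider context.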
 ```
+deepspeed pretrain.py --deepspeed --deepspeed_config models/deepspeed_config.json --dataset_path cluecorpussmall_seq128_dataset.pt \
 --vocab_path models/google_zh_vocab.txt \
+--config_path models/bert/xlarge_config.json \
+--output_model_path models/cluecorpussmall_wwm_roberta_xlarge_seq128_model \
 --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
+--total_steps 500000 --save_checkpoint_steps 50000 --report_steps 500 \
+--learning_rate 2e-5 --batch_size 128 --deep_init \
+--whole_word_masking --deepspeed_checkpoint_activations \
 --data_processor mlm --target mlm
 ```
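The `models/deepspeed_config.json` passed via `--deepspeed_config` is not shown in this card. A hypothetical minimal example of the kind of settings such a file contains (the keys are standard DeepSpeed options, but the values here are illustrative, not the ones actually used):

```json
{
  "train_micro_batch_size_per_gpu": 16,
  "gradient_accumulation_steps": 1,
  "fp16": { "enabled": true },
  "zero_optimization": { "stage": 2 }
}
```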
+Before stage2, we extract the fp32 consolidated weights from the ZeRO-2/3 DeepSpeed checkpoints:
+
+```
+python3 models/cluecorpussmall_wwm_roberta_xlarge_seq128_model/zero_to_fp32.py models/cluecorpussmall_wwm_roberta_xlarge_seq128_model/ \
+    models/cluecorpussmall_wwm_roberta_xlarge_seq128_model.bin
+```
+
 Stage2:

 ```
 python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
 ```

 ```
+deepspeed pretrain.py --deepspeed --deepspeed_config models/deepspeed_config.json --dataset_path cluecorpussmall_seq512_dataset.pt \
 --vocab_path models/google_zh_vocab.txt \
+--config_path models/bert/xlarge_config.json \
+--pretrained_model_path models/cluecorpussmall_wwm_roberta_xlarge_seq128_model.bin \
+--output_model_path models/cluecorpussmall_wwm_roberta_xlarge_seq512_model \
 --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
+--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 500 \
+--learning_rate 5e-5 --batch_size 32 \
+--whole_word_masking --deepspeed_checkpoint_activations \
 --data_processor mlm --target mlm
 ```
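The flags above imply a sizable effective batch; a quick sanity check of the arithmetic (assuming `--batch_size` counts sequences per GPU process, an assumption about the launcher's convention rather than something stated in this card):

```python
# Stage-2 settings from the command above.
world_size = 8          # GPU processes
batch_size = 32         # sequences per process per step (assumed per-process)
total_steps = 250_000
seq_length = 512

global_batch = world_size * batch_size          # sequences per optimization step
sequences_seen = global_batch * total_steps     # total sequences over stage 2
tokens_seen = sequences_seen * seq_length       # total tokens processed
```

Under that assumption, stage 2 processes 256 sequences per step, about 64 million sequences and roughly 33 billion tokens in total.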
+Then, we extract the fp32 consolidated weights from the ZeRO-2/3 DeepSpeed checkpoints:
+
+```
+python3 models/cluecorpussmall_wwm_roberta_xlarge_seq512_model/zero_to_fp32.py models/cluecorpussmall_wwm_roberta_xlarge_seq512_model/ \
+    models/cluecorpussmall_wwm_roberta_xlarge_seq512_model.bin
+```
+
 Finally, we convert the pre-trained model into Hugging Face's format:

 ```
+python3 scripts/convert_bert_from_tencentpretrain_to_huggingface.py --input_model_path models/cluecorpussmall_wwm_roberta_xlarge_seq512_model.bin \
 --output_model_path pytorch_model.bin \
+--layers_num 36 --type mlm
 ```
 ### BibTeX entry and citation info

 journal={ACL 2023},
 pages={217},
 year={2023}
+}
 ```