uer committed on
Commit
0d0a80c
1 Parent(s): e6b6048

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -13,7 +13,7 @@ widget:
 
 This is an xlarge Chinese Whole Word Masking RoBERTa model pre-trained by [TencentPretrain](https://github.com/Tencent/TencentPretrain) introduced in [this paper](https://arxiv.org/abs/2212.06385), which inherits [UER-py](https://github.com/dbiir/UER-py/) to support models with parameters above one billion, and extends it to a multimodal pre-training framework.
 
-[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the xlarge Chinese Whole Word Masking RoBERTa model. In order to facilitate users in reproducing the results, we used a publicly available corpus and word segmentation tool, and provided all training details.
+In order to facilitate users in reproducing the results, we used a publicly available corpus and word segmentation tool, and provided all training details.
 
 You can download the model either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the link [roberta-xlarge-wwm-chinese-cluecorpussmall](https://huggingface.co/uer/roberta-xlarge-wwm-chinese-cluecorpussmall):
 
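
The download link in the context line above points to a standard Hugging Face model repository, so the checkpoint can be loaded directly with the `transformers` library. The usage snippet that follows the colon in the README is not included in this diff; below is a minimal sketch using the standard `fill-mask` pipeline, with an illustrative example sentence that is not taken from the commit.

```python
# Minimal sketch (not part of this commit's diff): load the model via the
# standard transformers fill-mask pipeline. The pipeline resolves the
# tokenizer and model classes from the repository's config.
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="uer/roberta-xlarge-wwm-chinese-cluecorpussmall",
)

# Illustrative input: predict the masked Chinese character.
print(unmasker("北京是[MASK]国的首都。"))
```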