uer committed
Commit 9b02143
Parent: fb1b714

Update README.md

Files changed (1)
  1. README.md +7 -7
README.md CHANGED
@@ -37,18 +37,18 @@ Training data comes from three sources: [cmrc2018](https://github.com/ymcui/cmrc
 The model is fine-tuned by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We fine-tune for three epochs with a sequence length of 512 on the basis of the pre-trained model [chinese_roberta_L-12_H-768](https://huggingface.co/uer/chinese_roberta_L-12_H-768). At the end of each epoch, the model is saved when the best performance on the development set is achieved.
 
 ```
-python3 run_cmrc.py --pretrained_model_path models/cluecorpussmall_roberta_base_seq512_model.bin-250000 \
-                    --vocab_path models/google_zh_vocab.txt \
-                    --train_path extractive_qa.json \
-                    --dev_path datasets/cmrc2018/dev.json \
-                    --output_model_path models/extractive_qa_model.bin \
-                    --learning_rate 3e-5 --epochs_num 3 --batch_size 32 --seq_length 512
+python3 finetune/run_cmrc.py --pretrained_model_path models/cluecorpussmall_roberta_base_seq512_model.bin-250000 \
+                             --vocab_path models/google_zh_vocab.txt \
+                             --train_path datasets/extractive_qa.json \
+                             --dev_path datasets/cmrc2018/dev.json \
+                             --output_model_path models/extractive_qa_model.bin \
+                             --learning_rate 3e-5 --epochs_num 3 --batch_size 32 --seq_length 512
 ```
 
 Finally, we convert the fine-tuned model into Hugging Face's format:
 
 ```
-python3 scripts/convert_bert_extractive_qa_from_uer_to_huggingface.py --input_model_path extractive_qa_model.bin \
+python3 scripts/convert_bert_extractive_qa_from_uer_to_huggingface.py --input_model_path models/extractive_qa_model.bin \
 --output_model_path pytorch_model.bin \
 --layers_num 12
 ```
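
A note on the `--train_path` file above: since the unmodified `datasets/cmrc2018/dev.json` is passed as `--dev_path`, `datasets/extractive_qa.json` presumably follows the same CMRC2018 (SQuAD-style) schema. Below is a minimal sketch of one training record under that assumption; the field names mirror the cmrc2018 repository linked above, and the record contents are illustrative, not part of this commit.

```python
import json

# Hypothetical single-record training file in the CMRC2018 (SQuAD-style) schema.
train = {
    "version": "v1.0",
    "data": [
        {
            "title": "普希金",
            "id": "TRAIN_0",
            "paragraphs": [
                {
                    "id": "TRAIN_0_PARA_0",
                    "context": "普希金是俄国著名的诗人，被誉为俄国文学之父。",
                    "qas": [
                        {
                            "id": "TRAIN_0_QUERY_0",
                            "question": "普希金是哪国诗人？",
                            # answer_start is the character offset of the
                            # answer span within context ("俄国" starts at 4).
                            "answers": [{"text": "俄国", "answer_start": 4}],
                        }
                    ],
                }
            ],
        }
    ],
}

with open("datasets/extractive_qa.json", "w", encoding="utf-8") as f:
    json.dump(train, f, ensure_ascii=False, indent=2)
```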
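
After conversion, the weights can be sanity-checked with Hugging Face's `transformers`. This is a minimal sketch, not part of the commit: it assumes the converted `pytorch_model.bin` is placed in a local directory (here `./extractive_qa`, a hypothetical path) together with a matching `config.json` and the `vocab.txt` used above.

```python
from transformers import BertForQuestionAnswering, BertTokenizer, pipeline

# Assumed local layout: ./extractive_qa/{pytorch_model.bin, config.json, vocab.txt}
model = BertForQuestionAnswering.from_pretrained("./extractive_qa")
tokenizer = BertTokenizer.from_pretrained("./extractive_qa")

# The question-answering pipeline extracts an answer span from the context.
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
print(qa(question="普希金是哪国诗人？", context="普希金是俄国著名的诗人，被誉为俄国文学之父。"))
```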