Update README.md
README.md CHANGED
@@ -348,11 +348,11 @@ The command below shows an example at a specific learning rate,
 but you could try with other hyperparameters to obtain the best training and evaluation losses.
 
 ```
-python 5.run_clm-post.py --tokenizer_name /
+python 5.run_clm-post.py --tokenizer_name AI4PD/ZymCTRL
 --do_train --do_eval --output_dir output --evaluation_strategy steps --eval_steps 10
 --logging_steps 5 --save_steps 500 --num_train_epochs 28 --per_device_train_batch_size 1
 --per_device_eval_batch_size 4 --cache_dir '.' --save_total_limit 2 --learning_rate 0.8e-04
---dataloader_drop_last True --model_name_or_path /
+--dataloader_drop_last True --model_name_or_path AI4PD/ZymCTRL
 ```
 In any case, the original HuggingFace script run_clm.py can be found here:
 https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py
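As a sanity check on the flags in the updated command, the same invocation can be assembled programmatically, for example from a driver script that sweeps hyperparameters as the README suggests. This is a minimal sketch, not part of the commit: it only mirrors the flags shown in the new hunk, and the actual launch via `subprocess` is left commented out because it requires the repository's `5.run_clm-post.py` script.

```python
# Assemble the fine-tuning command from the README's updated example.
# Flag values mirror the new hunk above; adjust --learning_rate and
# friends when sweeping hyperparameters.
args = {
    "--tokenizer_name": "AI4PD/ZymCTRL",
    "--output_dir": "output",
    "--evaluation_strategy": "steps",
    "--eval_steps": "10",
    "--logging_steps": "5",
    "--save_steps": "500",
    "--num_train_epochs": "28",
    "--per_device_train_batch_size": "1",
    "--per_device_eval_batch_size": "4",
    "--cache_dir": ".",
    "--save_total_limit": "2",
    "--learning_rate": "0.8e-04",
    "--dataloader_drop_last": "True",
    "--model_name_or_path": "AI4PD/ZymCTRL",
}

cmd = ["python", "5.run_clm-post.py", "--do_train", "--do_eval"]
for flag, value in args.items():
    cmd += [flag, value]

# To actually launch fine-tuning (needs the repo's script on disk):
# import subprocess; subprocess.run(cmd, check=True)
print(" ".join(cmd))
```

Building the command as a list (rather than one shell string) avoids quoting pitfalls and makes individual flags easy to override in a sweep.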