`evaluation_strategy` is deprecated and will be removed in version 4.46 of Transformers #13
by fstocco - opened
README.md CHANGED

````diff
@@ -352,7 +352,7 @@ but you could try with other hyperparameters to obtain the best training and eva
 
 ```
 python 5.run_clm-post.py --tokenizer_name AI4PD/ZymCTRL
---do_train --do_eval --output_dir output --
+--do_train --do_eval --output_dir output --eval_strategy steps --eval_steps 10
 --logging_steps 5 --save_steps 500 --num_train_epochs 28 --per_device_train_batch_size 1
 --per_device_eval_batch_size 4 --cache_dir '.' --save_total_limit 2 --learning_rate 0.8e-04
 --dataloader_drop_last True --model_name_or_path AI4PD/ZymCTRL
````
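The rename behind this PR follows the usual deprecation pattern: the old flag keeps working for a few releases while emitting a warning, then is removed. A minimal, self-contained sketch of that pattern with `argparse` (illustrative only, not Transformers' actual argument parser — the function name and flag handling here are assumptions for the example):

```python
import argparse
import warnings


def parse_eval_args(argv):
    """Sketch: accept both the deprecated --evaluation_strategy and the
    new --eval_strategy, warn on the old spelling, and map it forward."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--eval_strategy", choices=["no", "steps", "epoch"], default="no")
    # Deprecated alias, kept temporarily for backward compatibility.
    parser.add_argument("--evaluation_strategy", choices=["no", "steps", "epoch"], default=None)
    parser.add_argument("--eval_steps", type=int, default=None)
    args = parser.parse_args(argv)
    if args.evaluation_strategy is not None:
        warnings.warn(
            "`evaluation_strategy` is deprecated; use `eval_strategy` instead",
            FutureWarning,
        )
        # Forward the old value so downstream code only reads eval_strategy.
        args.eval_strategy = args.evaluation_strategy
    return args


# Matches the flags added in the diff above.
args = parse_eval_args(["--eval_strategy", "steps", "--eval_steps", "10"])
print(args.eval_strategy, args.eval_steps)  # → steps 10
```

Passing `--evaluation_strategy` still works in this sketch but raises a `FutureWarning`, which is why updating the README command to `--eval_strategy` now avoids breakage when the alias is finally removed.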