cgt committed
Commit dae5af4
1 Parent(s): b1185c1

update model card README.md

Files changed (1):
  1. README.md +5 -6
README.md CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [hfl/chinese-pert-large](https://huggingface.co/hfl/chinese-pert-large) on the cmrc2018 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.7711
+- Loss: 0.6739
 
 ## Model description
 
@@ -35,21 +35,20 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 2e-05
+- learning_rate: 3e-05
 - train_batch_size: 16
 - eval_batch_size: 16
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 3
+- num_epochs: 2
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 1.3393 | 1.0 | 1200 | 1.1339 |
-| 0.5985 | 2.0 | 2400 | 0.6423 |
-| 0.3696 | 3.0 | 3600 | 0.7711 |
+| 0.8894 | 1.0 | 1200 | 0.6499 |
+| 0.5329 | 2.0 | 2400 | 0.6739 |
 
 
 ### Framework versions
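
For reference, the hyperparameters updated in the diff above correspond roughly to the following `transformers` training setup. This is a minimal sketch, not the author's actual training script: the question-answering head, `output_dir`, and per-epoch evaluation strategy are assumptions; only the hyperparameter values come from the updated card.

```python
# A minimal sketch, assuming the standard transformers Trainer API.
# Only the hyperparameter values are taken from the model card; the
# output_dir and evaluation_strategy are illustrative assumptions.
from transformers import AutoModelForQuestionAnswering, TrainingArguments

model = AutoModelForQuestionAnswering.from_pretrained("hfl/chinese-pert-large")

training_args = TrainingArguments(
    output_dir="pert-large-cmrc2018",  # hypothetical output path
    learning_rate=3e-5,                # updated from 2e-5 in this commit
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,                # updated from 3 in this commit
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",       # matches one validation row per epoch
)
```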