akshaya-244 committed
Commit
4e892cb
1 Parent(s): 64100ea

End of training

Files changed (1):
  1. README.md +5 -4
README.md CHANGED
```diff
@@ -37,12 +37,13 @@ The following hyperparameters were used during training:
  - train_batch_size: 1
  - eval_batch_size: 8
  - seed: 42
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 8
- - optimizer: Use adamw_hf with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 4
+ - optimizer: Use adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 2
  - num_epochs: 1
+ - mixed_precision_training: Native AMP

  ### Training results

@@ -51,7 +52,7 @@ The following hyperparameters were used during training:
  ### Framework versions

  - PEFT 0.13.2
- - Transformers 4.46.2
+ - Transformers 4.47.0.dev0
  - Pytorch 2.4.0
  - Datasets 3.1.0
  - Tokenizers 0.20.0
```
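
Since the diff only lists hyperparameter values, here is a minimal sketch of how the post-commit settings could map onto `transformers.TrainingArguments`. The `output_dir` value is a hypothetical placeholder, and `fp16=True` is an assumed reading of "mixed_precision_training: Native AMP" (a bf16 run would produce the same README line); the remaining values mirror the diff directly.

```python
# Sketch only: reconstructs the post-commit training configuration
# under the assumptions noted above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs",            # hypothetical placeholder
    per_device_train_batch_size=1,   # train_batch_size: 1
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,                         # seed: 42
    gradient_accumulation_steps=4,   # was 8 before this commit
    optim="adamw_8bit",              # was adamw_hf; needs bitsandbytes installed
    adam_beta1=0.9,                  # betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_steps=2,                  # lr_scheduler_warmup_steps: 2
    num_train_epochs=1,
    fp16=True,                       # assumed mapping for "Native AMP"
)

# total_train_batch_size is derived, not set directly:
# 1 (per device) * 4 (gradient accumulation) * 1 (process) = 4,
# matching the updated "total_train_batch_size: 4" in the diff.
```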