---
license: gemma
base_model: google/gemma-2-2b
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: collapse_gemma-2-2b_hs2_accumulatesubsample_iter14_sftsd0
  results: []
---

# collapse_gemma-2-2b_hs2_accumulatesubsample_iter14_sftsd0

This model is a fine-tuned version of [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 1.2169
- Num Input Tokens Seen: 4987864
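
For quick inspection, a minimal loading-and-generation sketch follows. The repo id is inferred from the model name above and is an assumption; adjust it if the checkpoint lives elsewhere:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id inferred from the model name above (assumption).
model_id = "RylanSchaeffer/collapse_gemma-2-2b_hs2_accumulatesubsample_iter14_sftsd0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```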

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list for how they might map onto `TrainingArguments`):

- learning_rate: 8e-06
- train_batch_size: 8
- eval_batch_size: 16
- seed: 0
- gradient_accumulation_steps: 16
- total_train_batch_size: 128 (train_batch_size × gradient_accumulation_steps)
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
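
As a non-authoritative sketch of how this configuration might map onto the Hugging Face `transformers` API (the actual training script is not included in this card, so everything below is an assumption based on the standard `TrainingArguments` fields):

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters above; the real
# training script for this checkpoint is not published in this card.
training_args = TrainingArguments(
    output_dir="collapse_gemma-2-2b_hs2_accumulatesubsample_iter14_sftsd0",
    learning_rate=8e-06,
    per_device_train_batch_size=8,   # train_batch_size
    per_device_eval_batch_size=16,   # eval_batch_size
    seed=0,
    gradient_accumulation_steps=16,  # 8 * 16 = 128 total train batch size
    lr_scheduler_type="constant_with_warmup",
    warmup_ratio=0.05,               # lr_scheduler_warmup_ratio
    num_train_epochs=1,
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```

Given the `trl` and `sft` tags above, these arguments would typically be passed to `trl.SFTTrainer` along with the model and dataset.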

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| No log        | 0      | 0    | 1.3909          | 0                 |
| 1.3601        | 0.0537 | 5    | 1.2756          | 273168            |
| 1.0514        | 0.1075 | 10   | 1.2201          | 542832            |
| 0.8862        | 0.1612 | 15   | 1.2153          | 806384            |
| 0.7575        | 0.2149 | 20   | 1.2508          | 1072264           |
| 0.8064        | 0.2686 | 25   | 1.2672          | 1345384           |
| 0.7043        | 0.3224 | 30   | 1.2443          | 1610128           |
| 0.7069        | 0.3761 | 35   | 1.2386          | 1880920           |
| 0.5027        | 0.4298 | 40   | 1.2439          | 2148696           |
| 0.5934        | 0.4835 | 45   | 1.2404          | 2411376           |
| 0.5024        | 0.5373 | 50   | 1.2125          | 2683616           |
| 0.4872        | 0.5910 | 55   | 1.2366          | 2946376           |
| 0.4414        | 0.6447 | 60   | 1.2230          | 3215768           |
| 0.5175        | 0.6985 | 65   | 1.2200          | 3485104           |
| 0.4704        | 0.7522 | 70   | 1.2172          | 3752080           |
| 0.5442        | 0.8059 | 75   | 1.2149          | 4021976           |
| 0.4186        | 0.8596 | 80   | 1.2177          | 4295184           |
| 0.458         | 0.9134 | 85   | 1.2097          | 4559664           |
| 0.4203        | 0.9671 | 90   | 1.2142          | 4823808           |

### Framework versions

- Transformers 4.44.0
- PyTorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
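
Pinning these exact versions, e.g. `pip install transformers==4.44.0 datasets==2.20.0 tokenizers==0.19.1` together with a CUDA 12.1 build of PyTorch 2.4.0, should reproduce the environment used here (exact wheel availability depends on the platform).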