---
license: mit
tags:
  - generated_from_trainer
metrics:
  - bleu
model-index:
  - name: iva_mt_wslot-m2m100_1.2B-0.1.0
    results: []
---

# iva_mt_wslot-m2m100_1.2B-0.1.0

This model is a fine-tuned version of [facebook/m2m100_1.2B](https://huggingface.co/facebook/m2m100_1.2B) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.3082
- Bleu: 62.4604
- Gen Len: 21.2847
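
A minimal inference sketch is given below. The Hub repo id (`cartesinus/iva_mt_wslot-m2m100_1.2B-0.1.0`) and the English-to-Polish direction are assumptions inferred from the model name, not stated in this card; swap in the correct repo id and M2M100 language codes for your pair.

```python
# Hedged usage sketch: the repo id and language pair are assumptions, not confirmed by this card.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "cartesinus/iva_mt_wslot-m2m100_1.2B-0.1.0"  # assumed repo id
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en"  # assumed source language
encoded = tokenizer("set an alarm for seven am", return_tensors="pt")
generated = model.generate(
    **encoded,
    forced_bos_token_id=tokenizer.get_lang_id("pl"),  # assumed target language
    max_length=128,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```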

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
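
The sketch below shows how these hyperparameters would map onto `Seq2SeqTrainingArguments` in transformers 4.26. It is an illustration under that assumption, not the actual training script, and the data/model wiring is omitted.

```python
# Illustrative mapping of the listed hyperparameters onto Seq2SeqTrainingArguments.
# The Adam betas/epsilon above match the library defaults, so they are not set explicitly.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="iva_mt_wslot-m2m100_1.2B-0.1.0",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                     # "Native AMP" mixed precision
    evaluation_strategy="epoch",   # assumption: matches the per-epoch rows in the results table
    predict_with_generate=True,    # needed so Bleu / Gen Len can be computed at eval time
)
```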

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.2744        | 1.0   | 5091  | 0.2555          | 58.5119 | 21.0728 |
| 0.1829        | 2.0   | 10182 | 0.2475          | 59.7364 | 21.0769 |
| 0.1124        | 3.0   | 15273 | 0.2499          | 61.3552 | 21.06   |
| 0.0783        | 4.0   | 20364 | 0.2597          | 61.6618 | 21.2402 |
| 0.0496        | 5.0   | 25455 | 0.2698          | 62.1942 | 21.2901 |
| 0.0318        | 6.0   | 30546 | 0.2798          | 61.9068 | 21.3399 |
| 0.0204        | 7.0   | 35637 | 0.2893          | 61.7753 | 21.3102 |
| 0.0138        | 8.0   | 40728 | 0.2979          | 62.3925 | 21.3238 |
| 0.009         | 9.0   | 45819 | 0.3034          | 62.4942 | 21.2516 |
| 0.0058        | 10.0  | 50910 | 0.3082          | 62.4604 | 21.2847 |
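
The Bleu and Gen Len columns are a corpus-level sacreBLEU score and the mean generated-sequence length. Below is a rough sketch of how such a `compute_metrics` function is commonly written for seq2seq fine-tuning; it is an assumption about the setup, not the card author's actual code.

```python
# Assumed metric computation for the Bleu / Gen Len columns (not the author's actual script).
import numpy as np
import evaluate

sacrebleu = evaluate.load("sacrebleu")

def compute_metrics(eval_preds, tokenizer):
    preds, labels = eval_preds
    # -100 marks ignored label positions; replace it before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

    bleu = sacrebleu.compute(
        predictions=decoded_preds,
        references=[[ref] for ref in decoded_labels],
    )
    gen_len = np.mean(
        [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
    )
    return {"bleu": bleu["score"], "gen_len": gen_len}
```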

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2