---
license: mit
tags:
  - generated_from_trainer
datasets:
  - cartesinus/iva_mt_wslot
metrics:
  - bleu
model-index:
  - name: iva_mt_wslot-m2m100_418M-en-es
    results:
      - task:
          name: Sequence-to-sequence Language Modeling
          type: text2text-generation
        dataset:
          name: iva_mt_wslot
          type: iva_mt_wslot
          config: en-es
          split: validation
          args: en-es
        metrics:
          - name: Bleu
            type: bleu
            value: 69.2836
language:
  - en
  - es
pipeline_tag: translation
---

# iva_mt_wslot-m2m100_418M-en-es

This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the [cartesinus/iva_mt_wslot](https://huggingface.co/datasets/cartesinus/iva_mt_wslot) dataset. It achieves the following results on the evaluation set:

- Loss: 0.0115
- Bleu: 69.2836
- Gen Len: 20.2064
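The model can be used for en→es translation with the standard M2M100 classes from transformers. A minimal sketch, assuming the repository id `cartesinus/iva_mt_wslot-m2m100_418M-en-es` (inferred from the model name) and an illustrative input sentence:

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

# Repo id assumed from the model name; adjust if the model lives elsewhere.
model_name = "cartesinus/iva_mt_wslot-m2m100_418M-en-es"
tokenizer = M2M100Tokenizer.from_pretrained(model_name)
model = M2M100ForConditionalGeneration.from_pretrained(model_name)

# M2M100 needs the source language set on the tokenizer and the target
# language forced as the first generated token.
tokenizer.src_lang = "en"
inputs = tokenizer("set the alarm for nine am", return_tensors="pt")
generated = model.generate(
    **inputs, forced_bos_token_id=tokenizer.get_lang_id("es")
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```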

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
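For reference, a sketch of how these settings map onto `Seq2SeqTrainingArguments` in a standard generated-from-Trainer setup; `output_dir` and the evaluation strategy are illustrative assumptions, not taken from the card:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the hyperparameters listed above. The Adam betas/epsilon
# match the Trainer defaults, and Native AMP corresponds to fp16=True.
training_args = Seq2SeqTrainingArguments(
    output_dir="iva_mt_wslot-m2m100_418M-en-es",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=7,
    fp16=True,                    # Native AMP
    predict_with_generate=True,   # required for BLEU / Gen Len metrics
    evaluation_strategy="epoch",  # assumption: per-epoch eval, as in the table
)
```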

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.0135        | 1.0   | 2104  | 0.0122          | 66.8284 | 20.2851 |
| 0.009         | 2.0   | 4208  | 0.0112          | 68.1164 | 20.1501 |
| 0.0067        | 3.0   | 6312  | 0.0110          | 68.256  | 20.0603 |
| 0.0051        | 4.0   | 8416  | 0.0110          | 68.7002 | 20.1219 |
| 0.0037        | 5.0   | 10520 | 0.0112          | 68.699  | 20.2733 |
| 0.0027        | 6.0   | 12624 | 0.0113          | 68.9916 | 20.209  |
| 0.0023        | 7.0   | 14728 | 0.0115          | 69.2836 | 20.2064 |
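The Bleu column is a corpus-level score. The exact `compute_metrics` function used for this card is not shown here, but a minimal sketch of computing such a score with the `evaluate` library's sacrebleu metric (the sentences below are placeholders):

```python
import evaluate

# sacrebleu takes plain-text predictions and one list of references
# per prediction.
bleu = evaluate.load("sacrebleu")
predictions = ["pon la alarma a las nueve de la mañana"]
references = [["pon la alarma a las nueve de la mañana"]]
result = bleu.compute(predictions=predictions, references=references)
print(result["score"])  # corpus-level BLEU; the card reports 69.2836 on validation
```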

### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3