---
license: apache-2.0
tags:
  - generated_from_trainer
datasets:
  - opus_infopankki
metrics:
  - bleu
model-index:
  - name: opus-mt-ar-en-finetuned-ar-to-en
    results:
      - task:
          name: Sequence-to-sequence Language Modeling
          type: text2text-generation
        dataset:
          name: opus_infopankki
          type: opus_infopankki
          args: ar-en
        metrics:
          - name: Bleu
            type: bleu
            value: 44.5107
---

# opus-mt-ar-en-finetuned-ar-to-en

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on the opus_infopankki dataset. It achieves the following results on the evaluation set:

- Loss: 0.9956
- Bleu: 44.5107
- Gen Len: 14.6465
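
The checkpoint can be tried out with the standard `transformers` translation pipeline. A minimal sketch: the Hub id is assumed from this repository's name, and the Arabic example sentence is illustrative; running it downloads the model weights.

```python
from transformers import pipeline

# Hub id assumed from this repository (PontifexMaximus/ArabicTranslator);
# adjust if the fine-tuned checkpoint is published under a different name.
translator = pipeline("translation", model="PontifexMaximus/ArabicTranslator")

# Example input (illustrative): "Welcome"
result = translator("مرحبا بكم", max_length=64)
print(result[0]["translation_text"])
```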

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
- mixed_precision_training: Native AMP
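
The hyperparameters above map onto a `Seq2SeqTrainingArguments` configuration roughly like the following sketch. `output_dir` is a placeholder, and `predict_with_generate` is an assumption (it is required for the BLEU/Gen Len evaluation columns reported below); Adam betas and epsilon match the library defaults.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch reconstructing the run configuration from the list above.
training_args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-ar-en-finetuned-ar-to-en",  # placeholder
    learning_rate=2e-6,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=11,
    fp16=True,                    # "Native AMP" mixed precision
    predict_with_generate=True,   # assumed: needed to compute BLEU/Gen Len
)
```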

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.696         | 1.0   | 950   | 1.4125          | 35.5899 | 15.0109 |
| 1.4952        | 2.0   | 1900  | 1.2969          | 38.1745 | 14.7736 |
| 1.3821        | 3.0   | 2850  | 1.2175          | 39.8192 | 14.7649 |
| 1.2941        | 4.0   | 3800  | 1.1571          | 40.9966 | 14.7446 |
| 1.2409        | 5.0   | 4750  | 1.1097          | 41.9215 | 14.7406 |
| 1.1938        | 6.0   | 5700  | 1.0721          | 42.868  | 14.6819 |
| 1.1634        | 7.0   | 6650  | 1.0440          | 43.4749 | 14.6536 |
| 1.1355        | 8.0   | 7600  | 1.0223          | 43.9275 | 14.6809 |
| 1.1161        | 9.0   | 8550  | 1.0075          | 44.2378 | 14.6471 |
| 1.1103        | 10.0  | 9500  | 0.9987          | 44.4293 | 14.639  |
| 1.1043        | 11.0  | 10450 | 0.9956          | 44.5107 | 14.6465 |
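
BLEU, the metric tracked in the table above, is a geometric mean of modified n-gram precisions multiplied by a brevity penalty. A stdlib-only sketch of sentence-level BLEU, for illustration only; the reported scores come from corpus-level BLEU as computed during evaluation, which also applies smoothing:

```python
import math
from collections import Counter

def bleu(candidate: str, reference: str, max_n: int = 4) -> float:
    """Toy sentence-level BLEU on a 0-100 scale (no smoothing)."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        # Clipped n-gram counts: candidate n-grams capped by reference counts.
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((cand_ngrams & ref_ngrams).values())
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # any zero precision collapses the geometric mean
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return 100 * bp * geo_mean
```

A perfect match scores 100; shortening the candidate triggers the brevity penalty even when every produced n-gram is correct.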

### Framework versions

- Transformers 4.19.2
- Pytorch 1.7.1+cu110
- Datasets 2.2.2
- Tokenizers 0.12.1