
opus-mt-en-ar-finetuned-en-to-ar

This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-ar on the un_multi dataset. It achieves the following results on the evaluation set:

  • Loss: 0.8133
  • Bleu: 64.6767
  • Gen Len: 17.595

Model description

This is a Marian-architecture machine-translation model: the Helsinki-NLP/opus-mt-en-ar checkpoint fine-tuned for English-to-Arabic translation on the un_multi (United Nations Parallel Corpus) dataset. No further details have been documented.

Intended uses & limitations

The model is intended for English-to-Arabic machine translation. Because it was fine-tuned on United Nations documents (un_multi), it is likely best suited to formal, institutional text; other limitations have not been documented. A minimal inference sketch follows.
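
The sketch below shows one way to run the model for inference with the Transformers pipeline API; the checkpoint name is taken from this repository, and the example sentence is illustrative only.

```python
from transformers import pipeline

# Minimal inference sketch: load this checkpoint from the Hub and translate
# an English sentence into Arabic. The input sentence is illustrative.
translator = pipeline(
    "translation",
    model="meghazisofiane/opus-mt-en-ar-finetuned-en-to-ar",
)

outputs = translator("The General Assembly adopted the resolution without a vote.")
print(outputs[0]["translation_text"])
```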

Training and evaluation data

The model was fine-tuned on the English-Arabic portion of the un_multi dataset. The exact split is not documented, but the step counts in the results table below (50 optimizer steps per epoch at batch size 16) imply a training set of roughly 800 sentence pairs.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 16
  • mixed_precision_training: Native AMP
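
For reference, these values map onto a Seq2SeqTrainingArguments configuration roughly as sketched below; output_dir, the per-epoch evaluation cadence, and predict_with_generate are assumptions inferred from the results table, not values recorded in this card.

```python
from transformers import Seq2SeqTrainingArguments

# Approximate reconstruction of the configuration listed above.
# output_dir, evaluation_strategy, and predict_with_generate are assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-en-ar-finetuned-en-to-ar",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
    lr_scheduler_type="linear",
    num_train_epochs=16,
    fp16=True,                    # Native AMP mixed-precision training
    evaluation_strategy="epoch",  # assumed: the table reports one row per epoch
    predict_with_generate=True,   # assumed: required to compute Bleu / Gen Len
)
```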

Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log        | 1.0   | 50   | 0.7710          | 64.3416 | 17.4    |
| No log        | 2.0   | 100  | 0.7569          | 63.9546 | 17.465  |
| No log        | 3.0   | 150  | 0.7570          | 64.7484 | 17.385  |
| No log        | 4.0   | 200  | 0.7579          | 65.4073 | 17.305  |
| No log        | 5.0   | 250  | 0.7624          | 64.8939 | 17.325  |
| No log        | 6.0   | 300  | 0.7696          | 65.1257 | 17.45   |
| No log        | 7.0   | 350  | 0.7747          | 65.527  | 17.395  |
| No log        | 8.0   | 400  | 0.7791          | 65.1357 | 17.52   |
| No log        | 9.0   | 450  | 0.7900          | 65.3812 | 17.415  |
| 0.3982        | 10.0  | 500  | 0.7925          | 65.7346 | 17.39   |
| 0.3982        | 11.0  | 550  | 0.7951          | 65.1267 | 17.62   |
| 0.3982        | 12.0  | 600  | 0.8040          | 64.6874 | 17.495  |
| 0.3982        | 13.0  | 650  | 0.8069          | 64.7788 | 17.52   |
| 0.3982        | 14.0  | 700  | 0.8105          | 64.6701 | 17.585  |
| 0.3982        | 15.0  | 750  | 0.8120          | 64.7111 | 17.58   |
| 0.3982        | 16.0  | 800  | 0.8133          | 64.6767 | 17.595  |

Framework versions

  • Transformers 4.19.2
  • Pytorch 1.11.0+cu113
  • Datasets 2.2.2
  • Tokenizers 0.12.1