franco-arabic

This model is a fine-tuned version of t5-small on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3227

Model description

More information needed

Intended uses & limitations

More information needed
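
The card does not document intended uses, but the checkpoint is a standard T5 sequence-to-sequence model, so it loads with the usual transformers API. A minimal sketch; the assumption that it transliterates Franco-Arabic (Arabizi) input is inferred from the model name only, and the sample input is hypothetical:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mohamedtolba/franco-arabic"  # repo ID from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical Arabizi input; the card does not specify the
# expected input or output format.
inputs = tokenizer("ezayak 3amel eh", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```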

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
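
These settings map directly onto transformers Seq2SeqTrainingArguments. A minimal sketch, assuming defaults for everything not listed; output_dir and the per-epoch evaluation strategy are assumptions, not values stated on the card:

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above. Anything marked
# "assumed" is not stated on the card.
training_args = Seq2SeqTrainingArguments(
    output_dir="franco-arabic",      # assumed
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    evaluation_strategy="epoch",     # assumed from the per-epoch eval losses
)
```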

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 3    | 1.2456          |
| No log        | 2.0   | 6    | 1.0399          |
| No log        | 3.0   | 9    | 0.8359          |
| No log        | 4.0   | 12   | 0.7516          |
| No log        | 5.0   | 15   | 0.6830          |
| No log        | 6.0   | 18   | 0.6363          |
| No log        | 7.0   | 21   | 0.5988          |
| No log        | 8.0   | 24   | 0.5653          |
| No log        | 9.0   | 27   | 0.5347          |
| No log        | 10.0  | 30   | 0.5024          |
| No log        | 11.0  | 33   | 0.4733          |
| No log        | 12.0  | 36   | 0.4499          |
| No log        | 13.0  | 39   | 0.4313          |
| No log        | 14.0  | 42   | 0.4149          |
| No log        | 15.0  | 45   | 0.4000          |
| No log        | 16.0  | 48   | 0.3872          |
| No log        | 17.0  | 51   | 0.3766          |
| No log        | 18.0  | 54   | 0.3668          |
| No log        | 19.0  | 57   | 0.3589          |
| No log        | 20.0  | 60   | 0.3522          |
| No log        | 21.0  | 63   | 0.3464          |
| No log        | 22.0  | 66   | 0.3419          |
| No log        | 23.0  | 69   | 0.3379          |
| No log        | 24.0  | 72   | 0.3344          |
| No log        | 25.0  | 75   | 0.3311          |
| No log        | 26.0  | 78   | 0.3285          |
| No log        | 27.0  | 81   | 0.3262          |
| No log        | 28.0  | 84   | 0.3245          |
| No log        | 29.0  | 87   | 0.3234          |
| No log        | 30.0  | 90   | 0.3227          |

The "No log" entries mean training loss was never recorded, presumably because the run's 90 total steps fall below the Trainer's default logging interval of 500 steps.

Framework versions

  • Transformers 4.31.0
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.4
  • Tokenizers 0.13.3
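
As a quick reproducibility check, the installed versions can be compared against those listed above (the +cu118 suffix on the PyTorch version is a CUDA build tag and depends on the local install):

```python
import datasets
import tokenizers
import torch
import transformers

# Versions listed on this card.
expected = {
    "transformers": "4.31.0",
    "torch": "2.0.1",      # card lists 2.0.1+cu118; suffix is build-specific
    "datasets": "2.14.4",
    "tokenizers": "0.13.3",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name in expected:
    print(f"{name}: installed {installed[name]}, card lists {expected[name]}")
```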