
mbart-en-id-smaller-indo-amr-parsing-translated-nafkhan-trial

This model was trained from scratch on the `data` dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

  • Loss: 1.7345
  • Smatch: 0.4022
  • Gen Len: 33.6
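
The card provides no usage snippet, so the following is only a minimal inference sketch: the Hub namespace in `model_id` is a placeholder, the generation settings are illustrative, and the exact AMR linearization produced depends on how the training data was serialized.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Placeholder repo id: replace <namespace> with the actual Hub account.
model_id = "<namespace>/mbart-en-id-smaller-indo-amr-parsing-translated-nafkhan-trial"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Indonesian input sentence (the model name suggests Indonesian AMR parsing).
sentence = "Anak itu ingin pergi ke sekolah."
inputs = tokenizer(sentence, return_tensors="pt")

# Beam search settings here are illustrative, not taken from the card.
output_ids = model.generate(**inputs, num_beams=5, max_length=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# Expected output: a linearized AMR graph, e.g.
# "(w / want-01 :ARG0 (a / anak) :ARG1 (p / pergi ...))" (format assumed).
```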

Model description

More information needed. The hosted checkpoint is an mBART-style sequence-to-sequence model with roughly 394M parameters, stored as F32 safetensors.

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):

  • learning_rate: 1e-06
  • train_batch_size: 5
  • eval_batch_size: 5
  • seed: 42
  • gradient_accumulation_steps: 5
  • total_train_batch_size: 25
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: polynomial
  • lr_scheduler_warmup_steps: 200
  • num_epochs: 3.0
  • label_smoothing_factor: 0.1
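
As a hedged reconstruction only: if the model was trained with the Hugging Face `Seq2SeqTrainer`, the hyperparameters above would map to a configuration roughly like the one below. `output_dir` and `predict_with_generate` are assumptions not stated in the card; the Adam betas and epsilon listed above are the optimizer defaults, written out explicitly.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mbart-en-id-smaller-indo-amr-parsing-translated-nafkhan-trial",  # assumed
    learning_rate=1e-6,
    per_device_train_batch_size=5,
    per_device_eval_batch_size=5,
    seed=42,
    gradient_accumulation_steps=5,   # 5 * 5 = effective train batch of 25
    lr_scheduler_type="polynomial",
    warmup_steps=200,
    num_train_epochs=3.0,
    label_smoothing_factor=0.1,
    adam_beta1=0.9,                  # Trainer defaults, stated explicitly
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    predict_with_generate=True,      # assumed; needed to report Smatch / Gen Len
)
```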

Training results

| Training Loss | Epoch  | Step | Validation Loss | Smatch | Gen Len  |
|:-------------:|:------:|:----:|:---------------:|:------:|:--------:|
| 3.2887        | 0.9934 | 90   | 3.4007          | 0.3046 | 210.7333 |
| 1.5144        | 1.9978 | 181  | 1.8273          | 0.4078 | 29.0     |
| 1.3575        | 2.9801 | 270  | 1.7345          | 0.4022 | 33.6     |
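
Smatch is the standard F1 over matched AMR triples. As a sketch of how such a score can be computed with the `smatch` package from PyPI (the two graphs below are toy examples, not drawn from the evaluation set):

```python
import smatch  # pip install smatch

# Toy gold/predicted AMR pair; the real evaluation compares model output
# against gold graphs from the held-out set.
gold = "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))"
pred = "(w / want-01 :ARG0 (b / boy))"

# get_amr_match returns (matched triples, triples in test, triples in gold).
best, test_total, gold_total = smatch.get_amr_match(pred, gold)
precision = best / test_total
recall = best / gold_total
f1 = 2 * precision * recall / (precision + recall)
print(f"Smatch F1: {f1:.4f}")
```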

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1
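
To reproduce this environment, a pin along these lines should work; note that the `+cu121` build of PyTorch comes from the PyTorch CUDA 12.1 wheel index rather than plain PyPI.

```text
# requirements.txt (reconstruction of the versions listed above)
transformers==4.44.0
torch==2.3.1        # +cu121 build: install via the PyTorch cu121 index
datasets==2.20.0
tokenizers==0.19.1
```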