---
language:
  - id
license: apache-2.0
base_model: LazarusNLP/IndoNanoT5-base
tags:
  - generated_from_trainer
datasets:
  - id_liputan6
metrics:
  - rouge
model-index:
  - name: liputan6-seq_bn-rf16
    results:
      - task:
          name: Summarization
          type: summarization
        dataset:
          name: id_liputan6 canonical
          type: id_liputan6
          config: canonical
          split: validation
          args: canonical
        metrics:
          - name: Rouge1
            type: rouge
            value: 27.6391
---

# liputan6-seq_bn-rf16

This model is a fine-tuned version of [LazarusNLP/IndoNanoT5-base](https://huggingface.co/LazarusNLP/IndoNanoT5-base) on the id_liputan6 canonical dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

- Loss: 2.7479
- Rouge1: 27.6391
- Rouge2: 12.5407
- RougeL: 23.5774
- RougeLsum: 25.3376
- Gen Len: 39.933
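
Since the base checkpoint is a T5-style encoder-decoder, the fine-tuned model can be queried through the standard Transformers generation API. Below is a minimal inference sketch: the repository id `apwic/liputan6-seq_bn-rf16` is inferred from this card and is an assumption, and the `seq_bn` suffix suggests the checkpoint may be a bottleneck adapter rather than full model weights, in which case loading would go through the adapters library instead.

```python
# Minimal inference sketch. The repo id below is inferred from the card name
# and is an assumption; adjust it to the actual checkpoint location.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "apwic/liputan6-seq_bn-rf16"  # hypothetical repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "Liputan6.com, Jakarta: ..."  # an Indonesian news article to summarize
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```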

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto the Trainer API follows the list):

- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
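
For reference, here is a sketch of how these values might be passed to `Seq2SeqTrainingArguments`. Only the values in the list above come from this card; the output directory, evaluation strategy, and generation flag are assumptions.

```python
# Sketch mapping the listed hyperparameters onto Seq2SeqTrainingArguments.
# Everything not in the hyperparameter list above is an assumption.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="liputan6-seq_bn-rf16",  # hypothetical output path
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
    evaluation_strategy="epoch",  # assumption: the results table reports per-epoch eval
    predict_with_generate=True,   # assumption: needed for ROUGE and Gen Len metrics
)
```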

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.6241        | 1.0   | 63   | 2.7534          | 23.3287 | 9.5988  | 20.1923 | 21.2916   | 33.387  |
| 2.228         | 2.0   | 126  | 2.7025          | 25.8033 | 10.8168 | 21.9451 | 23.4491   | 32.153  |
| 2.0615        | 3.0   | 189  | 2.6749          | 25.8887 | 10.7586 | 22.113  | 23.8997   | 30.873  |
| 1.9099        | 4.0   | 252  | 2.7197          | 26.5565 | 11.2255 | 22.6026 | 24.5495   | 31.524  |
| 1.8007        | 5.0   | 315  | 2.7479          | 26.9743 | 11.4843 | 22.9863 | 24.9284   | 33.854  |
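
The ROUGE columns above are the kind of scores produced by the `evaluate` library's `rouge` metric, scaled by 100 in this card. A minimal sketch, with placeholder strings rather than data from this model:

```python
# Sketch of how ROUGE scores like those in the table are commonly computed.
# The example strings are placeholders, not outputs of this model.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["presiden meresmikan jalan tol baru di jakarta"]
references = ["presiden joko widodo meresmikan ruas jalan tol baru di jakarta"]
scores = rouge.compute(predictions=predictions, references=references)
print({k: round(v * 100, 4) for k, v in scores.items()})  # card reports ROUGE x 100
```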

### Framework versions

- Transformers 4.40.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1