# beto2beto-mlsum
This model was trained on the Spanish section of MLSum: https://paperswithcode.com/sota/abstractive-text-summarization-on-mlsum.
## Hyperparameters
```json
{
    "dataset_config": "es",
    "dataset_name": "mlsum",
    "do_eval": true,
    "do_predict": true,
    "do_train": true,
    "fp16": true,
    "max_target_length": 64,
    "num_train_epochs": 10,
    "per_device_eval_batch_size": 4,
    "per_device_train_batch_size": 4,
    "predict_with_generate": true,
    "sagemaker_container_log_level": 20,
    "sagemaker_program": "run_summarization.py",
    "seed": 7,
    "summary_column": "summary",
    "text_column": "text"
}
```
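The `sagemaker_program` and `sagemaker_container_log_level` entries suggest the model was trained by running the Hugging Face `run_summarization.py` example inside a SageMaker training job. The sketch below shows one way these hyperparameters could be passed to such a job; the instance type, execution role, source directory, and framework versions are placeholders, not values from the original run.

```python
# Hypothetical sketch of launching run_summarization.py on SageMaker with the
# hyperparameters logged above. Instance type, role, source_dir, and framework
# versions are assumptions, not taken from the original training job.
from sagemaker.huggingface import HuggingFace

hyperparameters = {
    "dataset_name": "mlsum",
    "dataset_config": "es",
    "text_column": "text",
    "summary_column": "summary",
    "do_train": True,
    "do_eval": True,
    "do_predict": True,
    "predict_with_generate": True,
    "fp16": True,
    "max_target_length": 64,
    "num_train_epochs": 10,
    "per_device_train_batch_size": 4,
    "per_device_eval_batch_size": 4,
    "seed": 7,
}

estimator = HuggingFace(
    entry_point="run_summarization.py",             # script named in sagemaker_program
    source_dir="./examples/pytorch/summarization",  # assumed location of the example script
    instance_type="ml.p3.2xlarge",                  # assumed GPU instance
    instance_count=1,
    role="<your-sagemaker-execution-role>",         # placeholder
    transformers_version="4.6",                     # assumed framework versions
    pytorch_version="1.7",
    py_version="py36",
    hyperparameters=hyperparameters,
)

# No input channels are needed: the script downloads MLSum from the Hub itself.
estimator.fit()
```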
## Usage
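A minimal usage sketch with the `transformers` summarization pipeline. The model id `LeoCordoba/beto2beto-mlsum` is assumed to be this repository's Hub id, and the input article is a made-up example; replace both as needed.

```python
# Minimal sketch: summarize a Spanish news article with the summarization pipeline.
# "LeoCordoba/beto2beto-mlsum" is an assumed Hub id for this repository.
from transformers import pipeline

summarizer = pipeline("summarization", model="LeoCordoba/beto2beto-mlsum")

article = (
    "La Reserva Federal de Estados Unidos mantuvo sin cambios las tasas de interés "
    "y señaló que seguirá de cerca la evolución de la inflación antes de tomar "
    "nuevas decisiones de política monetaria."
)

# max_length matches the max_target_length of 64 used during training.
print(summarizer(article, max_length=64, min_length=10, do_sample=False))
```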
## Results
| metric | score |
|---|---|
| validation_loss | 2.5021677017211914 |
| validation_rouge1 | 26.1256 |
| validation_rouge2 | 9.2552 |
| validation_rougeL | 21.4899 |
| validation_rougeLsum | 21.8194 |
| test_loss | 2.57672381401062 |
| test_rouge1 | 25.8639 |
| test_rouge2 | 8.911 |
| test_rougeL | 21.2426 |
| test_rougeLsum | 21.5859 |
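The ROUGE figures above come from generated summaries (`predict_with_generate`). As an illustration only, the snippet below shows how comparable ROUGE scores can be computed with the `evaluate` library; the prediction and reference lists are placeholders, and the example script typically reports these values scaled to percentages.

```python
# Hypothetical sketch of scoring generated summaries with ROUGE.
# Predictions and references below are placeholders, not data from this model.
import evaluate

rouge = evaluate.load("rouge")

predictions = ["resumen generado por el modelo"]    # model outputs (placeholder)
references = ["resumen de referencia del dataset"]  # gold summaries (placeholder)

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # rouge1, rouge2, rougeL, rougeLsum
```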