---
language:
- es
license: gpl-3.0
tags:
- generated_from_trainer
model-index:
- name: flisol-cba-martin-fierro
  results: []
widget:
- text: "Aqui me pongo a cantar"
  example_title: "Martin Fierro"
---

Hugging Face: IA Colaborativa
=============================

This repository holds the code and the model I trained for the talk
["Hugging Face: IA Colaborativa"](https://eventol.flisol.org.ar/events/cordoba2023/activity/378/)
at the 2023 [FLISoL de Córdoba](https://cordoba.flisol.org.ar), Argentina.

To initialize the setup you need [`git-lfs`](https://git-lfs.com/) installed
and activated.

You can clone the repository with:

    $ git clone https://huggingface.co/crscardellino/flisol-cba-martin-fierro

Then create the environment and install the requirements:

    $ python -m venv flisol-venv
    $ source ./flisol-venv/bin/activate
    (flisol-venv) $ pip install -r requirements.txt

The code is tested with Python 3.10, but it should work with Python >= 3.8.
The requirements file is set up to install [PyTorch](https://pytorch.org/)
v2.0.0 for CPU, but you can adjust it to use a GPU, provided your setup meets
the CUDA requirements.
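
Once the environment is ready, you can load the model directly with
`transformers`. The following is a minimal sketch (the generation settings are
illustrative defaults, not the exact code used in the talk):

    # Minimal sketch: load the fine-tuned model and sample a few verses.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "crscardellino/flisol-cba-martin-fierro"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    prompt = "Aqui me pongo a cantar"  # the widget example above
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=50,
        do_sample=True,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))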

## Model Specifications (Auto Generated)

This model is a fine-tuned version of
[DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on the
`./data/martin-fierro_train.txt` dataset. It achieves the following results on
the evaluation set:

- Loss: 3.9067
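
For reference, a cross-entropy loss of 3.9067 corresponds to a perplexity of
exp(3.9067) ≈ 49.7 on the evaluation set.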

## Model description

A GPT-2 model fine-tuned on the poem
["El Gaucho Martín Fierro"](https://es.wikipedia.org/wiki/El_Gaucho_Mart%C3%ADn_Fierro).

## Intended uses & limitations

This model was trained for the talk ["Hugging Face: IA
Colaborativa"](https://eventol.flisol.org.ar/events/cordoba2023/activity/378/) at the
[FLISoL de Córdoba](https://cordoba.flisol.org.ar), Argentina, 2023.

## Training and evaluation data

The model was fine-tuned on the text of the poem in
`./data/martin-fierro_train.txt` and evaluated on a held-out split of the same
poem.

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after
the list for roughly equivalent `TrainingArguments`):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
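
A minimal sketch of the matching `transformers.TrainingArguments`, assuming the
standard `Trainer` API with per-epoch evaluation (the `output_dir` name is
hypothetical; the actual training script in this repository is authoritative):

    # Sketch: TrainingArguments mirroring the hyperparameters listed above.
    from transformers import TrainingArguments

    training_args = TrainingArguments(
        output_dir="flisol-cba-martin-fierro",  # hypothetical output path
        learning_rate=2e-05,
        per_device_train_batch_size=8,
        per_device_eval_batch_size=8,
        seed=42,
        lr_scheduler_type="linear",
        num_train_epochs=10,
        # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
        evaluation_strategy="epoch",  # the table below shows one eval per epoch
    )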

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.3864        | 1.0   | 18   | 4.2025          |
| 3.948         | 2.0   | 36   | 4.0440          |
| 3.7962        | 3.0   | 54   | 3.9804          |
| 3.6105        | 4.0   | 72   | 3.9458          |
| 3.4444        | 5.0   | 90   | 3.9280          |
| 3.3855        | 6.0   | 108  | 3.9192          |
| 3.3142        | 7.0   | 126  | 3.9091          |
| 3.2192        | 8.0   | 144  | 3.9074          |
| 3.1615        | 9.0   | 162  | 3.9070          |
| 3.1637        | 10.0  | 180  | 3.9067          |

### Framework versions

- Transformers 4.28.1
- PyTorch 2.0.0+cpu
- Datasets 2.11.0
- Tokenizers 0.13.3