squareoctopus committed on
Commit
f9e8790
1 Parent(s): 238fbc7

Update README.md

Files changed (1): README.md (+19 −10)
README.md CHANGED

base_model: DeepESP/gpt2-spanish
tags:
- generated_from_trainer
model-index:
- name: Ocampo
  results: []
---
 
tags:
- generated_from_trainer
model-index:
- name: Ocampo
  results: []
---

# Silvina Ocampo - Exploration Model

## Model description

This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish), trained on a dataset of Latin American authors curated by Karen Palacio (https://github.com/karen-pal/borges), from which Silvina Ocampo's work was selected.
It was created during a workshop focused on LLM exploration by members of LAIA (laia.ar), following as a group the Language Model Adaptation Workshop by Fundación Via Libre (https://github.com/nanom/llm_adaptation_workshop).

It achieves the following results on the evaluation set:
- Loss: 2.2787
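For scale, if the reported evaluation loss is the usual per-token cross-entropy in nats (the standard interpretation for a causal language model trained with the Hugging Face Trainer, assumed here), it corresponds to a perplexity of roughly exp(2.2787):

```python
import math

# Evaluation loss from the card, assumed to be mean per-token
# cross-entropy in nats (the HF Trainer's usual convention).
eval_loss = 2.2787

# Perplexity is the exponential of the mean cross-entropy loss.
perplexity = math.exp(eval_loss)

print(f"perplexity ~ {perplexity:.2f}")  # ~ 9.76
```

That is, the model is about as uncertain per token as a uniform choice among ~10 equally likely tokens.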

## Intended uses & limitations

This model was fine-tuned as an educational exercise.
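Trying the model out can be sketched with the `transformers` pipeline API. The card does not state this repository's Hub id, so the hypothetical helper below takes the model id as a parameter; the sampling settings are illustrative, not the workshop's:

```python
def sample_ocampo(model_id: str, prompt: str, max_new_tokens: int = 60) -> str:
    """Return the prompt plus a sampled continuation from the model.

    model_id is this repository's Hub id (not stated in the card,
    so it is passed in explicitly rather than hard-coded).
    """
    # Imported lazily so the helper can be defined without transformers installed.
    from transformers import pipeline

    generator = pipeline("text-generation", model=model_id)
    result = generator(
        prompt,
        max_new_tokens=max_new_tokens,
        do_sample=True,   # sample rather than greedy-decode
        top_p=0.95,       # illustrative nucleus-sampling setting
    )
    return result[0]["generated_text"]
```

For example, `sample_ocampo("<this-repo-id>", "El jardín")` would return the prompt followed by a sampled continuation in Spanish.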

## Training and evaluation data

- See https://github.com/karen-pal/borges for the datasets. For this exercise, we also added a story that is not available in the original data.

## Training procedure

- See https://github.com/nanom/llm_adaptation_workshop for the training procedure.

### Training hyperparameters

The following hyperparameters were used during training: