abrahammg committed
Commit 9dadabc
1 Parent(s): 57d3a84

Update README.md

Files changed (1)
  1. README.md +12 -0
README.md CHANGED
@@ -23,6 +23,18 @@ This LLM model has been specifically fine-tuned to understand and generate text
  - **Dataset**: [irlab-udc/alpaca_data_galician](https://huggingface.co/datasets/irlab-udc/alpaca_data_galician) (with modifications)
  - **Fine-Tuning Objective**: To improve text comprehension and generation in Galician.

+ ### Training parameters
+
+ The project is still in the testing phase, and the training parameters will keep being adjusted to find the values that yield a more accurate model. Currently, the model is trained on a set of **5000 random entries** from the dataset with the following values (see the sketch after this list):
+
+ - `num_train_epochs=3.0`
+ - `finetuning_type="lora"`
+ - `per_device_train_batch_size=2`
+ - `gradient_accumulation_steps=4`
+ - `lr_scheduler_type="cosine"`
+ - `learning_rate=5e-5`
+ - `max_grad_norm=1.0`
+
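+ Below is a minimal sketch of how these values might map onto a standard Hugging Face `transformers` / `peft` / `datasets` setup. It is not the project's actual training script: the LoRA rank, alpha and dropout, the shuffle seed, and the output directory are illustrative assumptions rather than published values.
+
+ ```python
+ from datasets import load_dataset
+ from peft import LoraConfig
+ from transformers import TrainingArguments
+
+ # 5000 random entries from the Galician Alpaca dataset
+ # (assumes the default "train" split; the seed is an assumption).
+ train_data = (
+     load_dataset("irlab-udc/alpaca_data_galician", split="train")
+     .shuffle(seed=42)
+     .select(range(5000))
+ )
+
+ # finetuning_type="lora" corresponds to training a PEFT LoRA adapter;
+ # r, lora_alpha and lora_dropout are illustrative defaults, not published values.
+ lora_config = LoraConfig(
+     r=8,
+     lora_alpha=16,
+     lora_dropout=0.05,
+     task_type="CAUSAL_LM",
+ )
+
+ # The remaining listed values map one-to-one onto TrainingArguments.
+ training_args = TrainingArguments(
+     output_dir="galician-lora-output",  # hypothetical output directory
+     num_train_epochs=3.0,
+     per_device_train_batch_size=2,
+     gradient_accumulation_steps=4,
+     lr_scheduler_type="cosine",
+     learning_rate=5e-5,
+     max_grad_norm=1.0,
+ )
+ ```
+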
  ## How to Use the Model

  To use this model, follow the example code provided below. Ensure you have the necessary libraries installed (e.g., Hugging Face's `transformers`).