---
license: apache-2.0
datasets:
- irlab-udc/alpaca_data_galician
language:
- gl
- en
---

# Llama3-8B Adapter Fine-Tuned for the Galician Language

This repository contains a LoRA adapter that fine-tunes Meta's LLaMA 3 8B Instruct LLM for the Galician language.

## Model Description

This LoRA adapter has been fine-tuned specifically to understand and generate text in Galician. It was trained on a modified version of the [irlab-udc/alpaca_data_galician](https://huggingface.co/datasets/irlab-udc/alpaca_data_galician) dataset, enriched with synthetic data to enhance its text generation and comprehension capabilities in specific contexts.

### Technical Details

- **Base Model**: [unsloth/llama-3-8b-Instruct-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit) (Unsloth's bnb-4-bit build of Meta's LLaMA 3 8B Instruct)
- **Fine-Tuning Platform**: LLaMA Factory
- **Infrastructure**: Finisterrae III, CESGA
- **Dataset**: [irlab-udc/alpaca_data_galician](https://huggingface.co/datasets/irlab-udc/alpaca_data_galician) (with modifications)
- **Fine-Tuning Objective**: To improve text comprehension and generation in Galician.

### Training Parameters

The project is still in a testing phase, and the training parameters will continue to be tuned in search of a more accurate model. Currently, the adapter is trained on **5,000 random entries** from the dataset with the following values (a sketch of the corresponding run follows the list):

- `num_train_epochs=3.0`
- `finetuning_type="lora"`
- `per_device_train_batch_size=2`
- `gradient_accumulation_steps=4`
- `lr_scheduler_type="cosine"`
- `learning_rate=5e-5`
- `max_grad_norm=1.0`
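
For reference, here is a minimal sketch of what such a run could look like with LLaMA Factory's `llmtuner` Python API (`run_exp`). The dataset registration name, the output path, and the use of `max_samples` to select the 5,000 entries are assumptions; the hyperparameters mirror the list above.

```python
from llmtuner import run_exp  # LLaMA Factory's training entry point

# Hypothetical SFT run mirroring the parameters listed above.
# "alpaca_data_galician" is assumed to be registered in dataset_info.json.
run_exp(dict(
  stage="sft",
  do_train=True,
  model_name_or_path="unsloth/llama-3-8b-Instruct-bnb-4bit",
  dataset="alpaca_data_galician",  # assumed dataset registration name
  template="llama3",
  finetuning_type="lora",
  quantization_bit=4,
  use_unsloth=True,
  max_samples=5000,                # the 5,000 random entries mentioned above
  num_train_epochs=3.0,
  per_device_train_batch_size=2,
  gradient_accumulation_steps=4,
  lr_scheduler_type="cosine",
  learning_rate=5e-5,
  max_grad_norm=1.0,
  output_dir="./lora_galician",    # assumed output path
))
```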

## Testing The Model

Below are several question-answer pairs comparing the base Llama3 model with the Llama3 model fine-tuned for the Galician language.

*Llama3-8B* 👎
```bash
User: Cal é a capital de Canadá?
Assistant: Não! Cal não é a capital de Canadá. A capital de Canadá é Ottawa.
```
*Llama3-8B-Galician-Chat* ✅ 👍
```bash
User: Cal é a capital de Canadá?
Assistant: A capital de Canadá é Ottawa.
```
*Llama3-8B* 👎
```bash
User: Cantos habitantes ten Galicia?
Assistant: Según el Instituto Nacional de Estadística (INE) de España, la población residente en Galicia al 1 de enero de 2021 era de aproximadamente 2.79 millones de personas.
```
*Llama3-8B-Galician-Chat* ✅ 👍
```bash
User: Cantos habitantes ten Galicia?
Assistant: Segundo as últimas estimacións, Galicia ten uns 2,8 millóns de habitantes.
```

## How to Use the Model

To use this model, follow the example code provided below. Ensure you have the necessary libraries installed (e.g., Hugging Face's `transformers`).

### Installation
```bash
git clone https://huggingface.co/abrahammg/Llama3-8B-Galician-Chat
```

```bash
pip install transformers bitsandbytes "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git" llmtuner xformers
```

### Run the model

```python
from llmtuner import ChatModel
from llmtuner.extras.misc import torch_gc

chat_model = ChatModel(dict(
  model_name_or_path="unsloth/llama-3-8b-Instruct-bnb-4bit",  # bnb-4-bit-quantized Llama-3-8B-Instruct base model
  adapter_name_or_path="./",  # load the saved LoRA adapter
  finetuning_type="lora",     # same as in training
  template="llama3",          # same as in training
  quantization_bit=4,         # load the 4-bit quantized model
  use_unsloth=True,           # use UnslothAI's LoRA optimization for 2x faster generation
))

messages = []
while True:
  query = input("\nUser: ")
  if query.strip() == "exit":
    break

  if query.strip() == "clear":
    messages = []
    torch_gc()
    print("History has been removed.")
    continue

  messages.append({"role": "user", "content": query})     # add query to messages
  print("Assistant: ", end="", flush=True)
  response = ""
  for new_text in chat_model.stream_chat(messages):      # stream generation
    print(new_text, end="", flush=True)
    response += new_text
  print()
  messages.append({"role": "assistant", "content": response}) # add response to messages

torch_gc()
```
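
If you prefer plain `transformers`, the adapter can also be attached with `peft` (install it with `pip install peft` if needed). A minimal sketch, assuming the repository was cloned into `./Llama3-8B-Galician-Chat`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "unsloth/llama-3-8b-Instruct-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")
# Attach the LoRA adapter; the path is an assumption (the cloned repo directory).
model = PeftModel.from_pretrained(model, "./Llama3-8B-Galician-Chat")

# Build a single-turn prompt with the llama3 chat template and generate.
inputs = tokenizer.apply_chat_template(
  [{"role": "user", "content": "Cal é a capital de Canadá?"}],
  add_generation_prompt=True,
  return_tensors="pt",
).to(model.device)
with torch.no_grad():
  outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```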
## Citation

```bibtex
@misc{Llama3-8B-Galician-Chat,
  author       = {Abraham Martínez Gracia},
  organization = {Galicia Supercomputing Center},
  title        = {Llama3-8B-Galician-Chat: A finetuned chat model for Galician language},
  year         = {2024},
  url          = {https://huggingface.co/abrahammg/Llama3-8B-Galician-Chat}
}
```

## Acknowledgements

- [meta-llama/llama3](https://github.com/meta-llama/llama3)
- [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)