---
library_name: peft
base_model: codellama/CodeLlama-7b-Instruct-hf
---

**Lloro 7B**

Lloro, developed by Semantix Research Labs, is a language model trained to perform data analysis in Portuguese. It is a fine-tuned version of codellama/CodeLlama-7b-Instruct-hf trained on synthetic datasets. Fine-tuning was performed using the QLoRA methodology on a V100 GPU with 16 GB of VRAM.

**Model description**

Model type: A 7B-parameter model fine-tuned on synthetic datasets.

Language(s) (NLP): Primarily Portuguese, but the model is also capable of understanding English.

Finetuned from model: codellama/CodeLlama-7b-Instruct-hf

**What are Lloro's intended uses?**

Lloro is built for data analysis in Portuguese contexts.

Input: Text

Output: Text (Code)
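
Since the card describes a PEFT (LoRA) adapter on top of CodeLlama, inference typically follows the standard `transformers` + `peft` loading pattern. The sketch below is illustrative rather than an official snippet: the Hub id `semantixai/Lloro`, the prompt, and the generation settings are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "codellama/CodeLlama-7b-Instruct-hf"
adapter_id = "semantixai/Lloro"  # hypothetical Hub id; replace with the real one

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
# Attach the fine-tuned LoRA adapter to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Calcule a média da coluna 'preço' de um DataFrame pandas chamado df."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```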

**Params**
Training Parameters

| Params | Training Data                        | Examples | Tokens    | LR   |
|--------|--------------------------------------|----------|-----------|------|
| 7B     | Pairs of synthetic instructions/code | 28,907   | 3,031,188 | 1e-5 |
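
That works out to an average of roughly 105 tokens per instruction/code pair (3,031,188 / 28,907 ≈ 104.9).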

**Model Sources**

Repository: https://gitlab.com/semantix-labs/generative-ai/lloro

Dataset Repository: https://gitlab.com/semantix-labs/generative-ai/lloro-datasets

Model Dates: Lloro was trained between November 2023 and January 2024.

**Performance**

| Model         | LLM as Judge | CodeBLEU Score | ROUGE-L | CodeBERT Precision | CodeBERT Recall | CodeBERT F1 | CodeBERT F3 |
|---------------|--------------|----------------|---------|--------------------|-----------------|-------------|-------------|
| GPT 3.5       | 99.65%       | 0.2936         | 0.1371  | 0.7326             | 0.6679          | 0.6980      | 0.6736      |
| Instruct-Base | 91.16%       | 0.2487         | 0.1146  | 0.6997             | 0.6473          | 0.6713      | 0.6518      |
| Instruct-FT   | 97.74%       | 0.3264         | 0.3602  | 0.7942             | 0.8178          | 0.8042      | 0.8147      |
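
For context, reference-based scores such as ROUGE-L can be computed with the `evaluate` library. The snippet below is a generic illustration, not the evaluation pipeline behind the table; the prediction and reference strings are made up.

```python
import evaluate

# Generic reference-based scoring example (not the card's actual pipeline).
rouge = evaluate.load("rouge")
predictions = ["df['preco'].mean()"]  # made-up model output
references = ["df['preço'].mean()"]   # made-up reference solution
print(rouge.compute(predictions=predictions, references=references)["rougeL"])
```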

**Training Info:**
The following hyperparameters were used during training:

| Parameter                 | Value                    |
|---------------------------|--------------------------|
| learning_rate             | 1e-5                     |
| weight_decay              | 0.0001                   |
| train_batch_size          | 1                        |
| eval_batch_size           | 1                        |
| seed                      | 42                       |
| optimizer                 | Adam - paged_adamw_32bit |
| lr_scheduler_type         | cosine                   |
| lr_scheduler_warmup_ratio | 0.03                     |
| num_epochs                | 5.0                      |
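
These values map onto a typical `transformers.TrainingArguments` setup roughly as follows. This is a sketch under that assumption, not the authors' training script; `output_dir` and any argument not in the table are placeholders.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lloro-checkpoints",  # placeholder, not from the card
    learning_rate=1e-5,
    weight_decay=0.0001,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    optim="paged_adamw_32bit",   # the paged 32-bit AdamW listed above
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=5.0,
    fp16=True,                   # matches the float16 compute dtype below
)
```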

**QLoRA hyperparameters**
The following parameters related to Quantized Low-Rank Adaptation (QLoRA) and quantization were used during training:

| Parameter     | Value     |
|---------------|-----------|
| lora_r        | 16        |
| lora_alpha    | 64        |
| lora_dropout  | 0.1       |
| storage_dtype | "nf4"     |
| compute_dtype | "float16" |
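
Expressed as `peft` and `bitsandbytes` configuration objects, the table corresponds roughly to the sketch below. `target_modules` is an assumption (a common choice for Llama-family models); the card does not say which modules were adapted.

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # storage_dtype
    bnb_4bit_compute_dtype=torch.float16,  # compute_dtype
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=64,
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
)
```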

**Experiments**

| Model               | Epochs | Overfitting | Final Epochs | Training Hours | CO2 Emission (kg) |
|---------------------|--------|-------------|--------------|----------------|-------------------|
| Code Llama Instruct | 1      | No          | 1            | 8.1            | 1.337             |
| Code Llama Instruct | 5      | Yes         | 3            | 45.6           | 9.12              |

**Framework versions**

| Library      | Version |
|--------------|---------|
| bitsandbytes | 0.40.2  |
| Datasets     | 2.14.3  |
| PyTorch      | 2.0.1   |
| Tokenizers   | 0.14.1  |
| Transformers | 4.34.0  |
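
To reproduce a compatible environment, these versions can be pinned directly, e.g. `pip install bitsandbytes==0.40.2 datasets==2.14.3 torch==2.0.1 tokenizers==0.14.1 transformers==4.34.0` (PyPI names are lower-case, and `torch` is the PyPI name for PyTorch). `peft` is also required, but its version is not listed on the card.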