---
license: llama2
language:
- it
tags:
- text-generation-inference
---
<img src="https://i.ibb.co/6mHSRm3/llamantino53.jpg" alt="llamantino53" border="0" width="200px">

# Model Card for LLaMAntino-2-70b-hf-UltraChat-ITA
*Last Update: 02/02/2024*<br>
*Example of Use*: [Colab Notebook](https://colab.research.google.com/drive/1xUite70ANLQp8NwQE93jlI3epj_cpua7?usp=sharing)
<hr>

## Model description

<!-- Provide a quick summary of what the model is/does. -->

**LLaMAntino-2-70b-hf-UltraChat-ITA** is a *Large Language Model (LLM)* that is an instruction-tuned version of **LLaMAntino-2-70b** (an Italian-adapted **LLaMA 2 chat**).
This model aims to provide Italian NLP researchers with an improved model for Italian dialogue use cases.

The model was trained using *QLoRA* on [UltraChat](https://github.com/thunlp/ultrachat) translated into Italian with [Argos Translate](https://pypi.org/project/argostranslate/1.4.0/).
If you are interested in more details regarding the training procedure, you can find the code we used at the following link:
- **Repository:** https://github.com/swapUniba/LLaMAntino

**NOTICE**: the code has not been released yet; we apologize for the delay, it will be available as soon as possible!
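
Until the training code is published, the following is only an illustrative sketch of a typical QLoRA setup with *peft* and *bitsandbytes*; the hyperparameters and target modules are assumptions for illustration, not the values used to train this model:

```python
# Hypothetical QLoRA setup -- illustrative only, NOT the authors' released configuration
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit NF4, the quantization scheme QLoRA relies on
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf", quantization_config=bnb_config, device_map="auto"
)
base = prepare_model_for_kbit_training(base)

# Attach low-rank adapters; r, alpha, and target_modules are assumed values
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trained
```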

- **Developed by:** Pierpaolo Basile, Elio Musacchio, Marco Polignano, Lucia Siciliani, Giuseppe Fiameni, Giovanni Semeraro
- **Funded by:** PNRR project FAIR - Future AI Research
- **Compute infrastructure:** [Leonardo](https://www.hpc.cineca.it/systems/hardware/leonardo/) supercomputer
- **Model type:** LLaMA-2
- **Language(s) (NLP):** Italian
- **License:** Llama 2 Community License
- **Finetuned from model:** [meta-llama/Llama-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf)

## Prompt Format

The following prompt format, based on the [LLaMA 2 prompt template](https://gpus.llm-utils.org/llama-2-prompt-template/) and adapted to Italian, was used:

```python
" [INST]<<SYS>>\n" \
"Sei un assistente disponibile, rispettoso e onesto di nome Llamantino. " \
"Rispondi sempre nel modo più utile possibile, pur essendo sicuro. " \
"Le risposte non devono includere contenuti dannosi, non etici, razzisti, sessisti, tossici, pericolosi o illegali. " \
"Assicurati che le tue risposte siano socialmente imparziali e positive. " \
"Se una domanda non ha senso o non è coerente con i fatti, spiegane il motivo invece di rispondere in modo non corretto. " \
"Se non conosci la risposta a una domanda, non condividere informazioni false.\n" \
"<</SYS>>\n\n" \
f"{user_msg_1} [/INST] {model_answer_1} </s> <s> [INST] {user_msg_2} [/INST] {model_answer_2} </s> ... <s> [INST] {user_msg_N} [/INST] {model_answer_N} </s>"
```

We recommend using the same prompt at inference time to obtain the best results!
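
For a single-turn exchange, the format reduces to the system block followed by one user message. A minimal sketch, where `system_prompt` stands for the full system text shown above (abbreviated here) and the BOS token is omitted because the tokenizer adds it by default (see the notes below):

```python
# `system_prompt` is abbreviated; in practice use the full system text above
system_prompt = "Sei un assistente disponibile, rispettoso e onesto di nome Llamantino. ...\n"
user_msg = "Cosa sono i word embeddings?"
prompt = f" [INST]<<SYS>>\n{system_prompt}<</SYS>>\n\n{user_msg} [/INST]"
print(prompt)
```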

## How to Get Started with the Model

Below you can find an example of model usage:

```python
import os
import torch
import transformers
from transformers import AutoTokenizer

os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"

model = "swap-uniba/LLaMAntino-2-70b-hf-UltraChat-ITA"

tokenizer = AutoTokenizer.from_pretrained(model)
tokenizer.add_special_tokens({"pad_token": "<unk>"})

# Chat template reproducing the prompt format above: the system block is
# prepended to the first user turn, later user turns open a new [INST] block,
# and assistant turns are closed with the EOS token
tokenizer.chat_template = "{% set ns = namespace(i=0) %}" \
    "{% for message in messages %}" \
    "{% if message['role'] == 'user' and ns.i == 0 %}" \
    "{{ bos_token + ' [INST] <<SYS>>\n' }}" \
    "{{ 'Sei un assistente disponibile, rispettoso e onesto di nome Llamantino. ' }}" \
    "{{ 'Rispondi sempre nel modo più utile possibile, pur essendo sicuro. ' }}" \
    "{{ 'Le risposte non devono includere contenuti dannosi, non etici, razzisti, sessisti, tossici, pericolosi o illegali. ' }}" \
    "{{ 'Assicurati che le tue risposte siano socialmente imparziali e positive. ' }}" \
    "{{ 'Se una domanda non ha senso o non è coerente con i fatti, spiegane il motivo invece di rispondere in modo non corretto. ' }}" \
    "{{ 'Se non conosci la risposta a una domanda, non condividere informazioni false.\n' }}" \
    "{{ '<</SYS>>\n\n' }}" \
    "{{ message['content'] + ' [/INST]' }}" \
    "{% elif message['role'] == 'user' and ns.i != 0 %} " \
    "{{ bos_token + ' [INST] ' + message['content'] + ' [/INST]' }}" \
    "{% elif message['role'] == 'assistant' %}" \
    "{{ ' ' + message['content'] + ' ' + eos_token + ' ' }}" \
    "{% endif %}" \
    "{% set ns.i = ns.i + 1 %}" \
    "{% endfor %}"

pipe = transformers.pipeline(model=model,
                             device_map="balanced",
                             tokenizer=tokenizer,
                             return_full_text=False,  # return only the generated text, without the prompt
                             task='text-generation',
                             max_new_tokens=512,      # max number of tokens to generate in the output
                             temperature=0.8          # sampling temperature
                             )

messages = [{"role": "user", "content": "Cosa sono i word embeddings?"}]
text = tokenizer.apply_chat_template(messages, tokenize=False)

sequences = pipe(text)
for seq in sequences:
    print(f"{seq['generated_text']}")
```
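
The chat template also handles multi-turn conversations. Continuing the example above (the history below is an illustrative continuation, not from the original card):

```python
# Multi-turn history: per the template above, the assistant turn is closed
# with EOS and the follow-up user turn opens a new <s> [INST] ... [/INST] block
messages = [
    {"role": "user", "content": "Cosa sono i word embeddings?"},
    {"role": "assistant", "content": "Sono rappresentazioni vettoriali dense delle parole."},
    {"role": "user", "content": "Puoi farmi un esempio pratico?"},
]
print(tokenizer.apply_chat_template(messages, tokenize=False))
```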

If you are facing issues when loading the model, you can try loading it **quantized**:

```python
from transformers import AutoModelForCausalLM

# 8-bit loading (requires the bitsandbytes and accelerate libraries)
model_8bit = AutoModelForCausalLM.from_pretrained(model, load_in_8bit=True)
```
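
For a 70B model, 4-bit loading cuts memory roughly in half again compared to 8-bit. A sketch using `BitsAndBytesConfig` (not part of the original card; assumes a recent *transformers* with *bitsandbytes* support):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization spread across the available GPUs
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model_4bit = AutoModelForCausalLM.from_pretrained(model, quantization_config=bnb_config, device_map="auto")
```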

*Note*:
1) The model loading strategy above requires the [*bitsandbytes*](https://pypi.org/project/bitsandbytes/) and [*accelerate*](https://pypi.org/project/accelerate/) libraries.
2) By default, the tokenizer adds the *\<BOS\>* token at the beginning of the prompt. If that is not the case, prepend the *\<s\>* string to your prompt.
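
To check whether your tokenizer prepends *\<BOS\>* automatically, a quick sanity check (not from the original card):

```python
# True if the first encoded id is the BOS token
ids = tokenizer("ciao").input_ids
print(ids[0] == tokenizer.bos_token_id)
```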

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

*Coming soon*!

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

If you use this model in your research, please cite the following:

```bibtex
@misc{basile2023llamantino,
      title={LLaMAntino: LLaMA 2 Models for Effective Text Generation in Italian Language},
      author={Pierpaolo Basile and Elio Musacchio and Marco Polignano and Lucia Siciliani and Giuseppe Fiameni and Giovanni Semeraro},
      year={2023},
      eprint={2312.09993},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

*Notice:* Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. [*License*](https://ai.meta.com/llama/license/)