m-polignano-uniba committed
Commit b091d6b
1 Parent(s): 4da4c3b

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -15,7 +15,7 @@ tags:
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-**LLaMAntino-2-70b-hf-UltraChat-ITA** is a *Large Language Model (LLM)* that is an instruction-tuned version of **LLaMAntino-2-70b** (an italian-adapted **LLaMA 2 chat**).
+**LLaMAntino-2-70b-hf-UltraChat-ITA** is a *Large Language Model (LLM)* that is an instruction-tuned version of **LLaMAntino-2-70b** (an italian-adapted **LLaMA 2 - 70B**).
 This model aims to provide Italian NLP researchers with an improved model for italian dialogue use cases.
 
 The model was trained using *QLora* and using as training data [UltraChat](https://github.com/thunlp/ultrachat) translated to the italian language using [Argos Translate](https://pypi.org/project/argostranslate/1.4.0/).
@@ -37,7 +37,7 @@ If you are interested in more details regarding the training procedure, you can
 This prompt format based on the [LLaMA 2 prompt template](https://gpus.llm-utils.org/llama-2-prompt-template/) adapted to the italian language was used:
 
 ```python
-" [INST]<<SYS>>\n" \
+" [INST] <<SYS>>\n" \
 "Sei un assistente disponibile, rispettoso e onesto di nome Llamantino. " \
 "Rispondi sempre nel modo più utile possibile, pur essendo sicuro. " \
 "Le risposte non devono includere contenuti dannosi, non etici, razzisti, sessisti, tossici, pericolosi o illegali. " \
@@ -45,7 +45,7 @@ This prompt format based on the [LLaMA 2 prompt template](https://gpus.llm-utils
 "Se una domanda non ha senso o non è coerente con i fatti, spiegane il motivo invece di rispondere in modo non corretto. " \
 "Se non conosci la risposta a una domanda, non condividere informazioni false.\n" \
 "<</SYS>>\n\n" \
-f"{user_msg_1} [/INST] {model_answer_1} </s> <s> [INST] {user_msg_2}[/INST] {model_answer_2} </s> ... <s> [INST] {user_msg_N} [/INST] {model_answer_N} </s>"
+f"{user_msg_1} [/INST] {model_answer_1} </s> <s> [INST] {user_msg_2} [/INST] {model_answer_2} </s> ... <s> [INST] {user_msg_N} [/INST] {model_answer_N} </s>"
 ```
 
 We recommend using the same prompt in inference to obtain the best results!
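
For readers who want to follow that recommendation, below is a minimal sketch (not part of the commit) of how the template could be assembled and used for inference with the Hugging Face `transformers` library. The repository id, the abridged system prompt, and the generation settings are assumptions made for illustration.

```python
# Minimal inference sketch using the LLaMA-2-style Italian prompt template.
# The repo id, system prompt (shortened here), and generation settings are
# illustrative assumptions, not values taken from the commit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "swap-uniba/LLaMAntino-2-70b-hf-UltraChat-ITA"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

system_prompt = (
    "Sei un assistente disponibile, rispettoso e onesto di nome Llamantino. "
    "Rispondi sempre nel modo più utile possibile, pur essendo sicuro. "
    "Se non conosci la risposta a una domanda, non condividere informazioni false.\n"
)
user_msg = "Spiegami brevemente cos'è l'apprendimento automatico."

# Single-turn prompt in the format shown above.
prompt = f" [INST] <<SYS>>\n{system_prompt}<</SYS>>\n\n{user_msg} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Since this is a 70B-parameter model, loading it in half precision requires multiple large GPUs; quantized loading (e.g. 4-bit) is a common alternative on smaller hardware.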
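
The README also states that the model was instruction-tuned with QLoRA on an Italian translation of UltraChat. The snippet below is a rough sketch of what such a setup typically looks like with `peft` and `bitsandbytes`; the base checkpoint id and all hyperparameters are placeholders, not values taken from the commit.

```python
# Illustrative QLoRA setup (assumed, not from the commit):
# load the base model in 4-bit and attach LoRA adapters for supervised fine-tuning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "swap-uniba/LLaMAntino-2-70b-hf"  # assumed base checkpoint id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA hyperparameters below are placeholders, not the authors' values.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```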