Model Card for LLaMAntino-2-7b-evalita
Last Update: 22/01/2024
Model description
LLaMAntino-2-7b-evalita is a Large Language Model (LLM) obtained by instruction-tuning LLaMAntino-2-7b (an Italian-adapted LLaMA 2). This model aims to provide Italian NLP researchers with a tool to tackle tasks such as sentiment analysis and text categorization.
The model was trained following the Alpaca methodology, using EVALITA 2023 task data formatted in an instruction-following style as training data. If you are interested in more details about the training procedure, you can find the code we used at the following link:
- Repository: https://github.com/swapUniba/LLaMAntino
NOTICE: the code has not been released yet; we apologize for the delay. It will be available as soon as possible!
- Developed by: Pierpaolo Basile, Elio Musacchio, Marco Polignano, Lucia Siciliani, Giuseppe Fiameni, Giovanni Semeraro
- Funded by: PNRR project FAIR - Future AI Research
- Compute infrastructure: Leonardo supercomputer
- Model type: LLaMA 2
- Language(s) (NLP): Italian
- License: Llama 2 Community License
- Finetuned from model: swap-uniba/LLaMAntino-2-7b-hf-ITA
Prompt Format
The following Alpaca-style prompt format was used for fine-tuning:
"Di seguito è riportata un'istruzione che descrive un'attività, abbinata ad un input che fornisce ulteriore informazione. " \
"Scrivi una risposta che soddisfi adeguatamente la richiesta.\n\n" \
f"### Istruzione:\n{instruction}\n\n### Input:\n{input}\n\n### Risposta:\n{response}"
We recommend using this same prompt format at inference time to obtain the best results!
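For convenience, the format can be wrapped in a small helper (a minimal sketch; the format_prompt name and the empty-response convention at inference time are our own, not part of the released code):

def format_prompt(instruction: str, input: str, response: str = "") -> str:
    # Reproduces the Alpaca-style template used for fine-tuning;
    # at inference time, leave `response` empty so the model completes it
    return (
        "Di seguito è riportata un'istruzione che descrive un'attività, abbinata ad un input che fornisce ulteriore informazione. "
        "Scrivi una risposta che soddisfi adeguatamente la richiesta.\n\n"
        f"### Istruzione:\n{instruction}\n\n### Input:\n{input}\n\n### Risposta:\n{response}"
    )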
How to Get Started with the Model
Below you can find an example of model usage:
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "swap-uniba/LLaMAntino-2-7b-hf-evalita-ITA"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Task instruction and input for an EVALITA-style emotion classification example
instruction_text = "Categorizza le emozioni espresse nel testo fornito in input o determina l'assenza di emozioni. " \
                   "Puoi classificare il testo come neutrale o identificare una o più delle seguenti emozioni: " \
                   "rabbia, anticipazione, disgusto, paura, gioia, tristezza, sorpresa, fiducia, amore."
input_text = "Non me lo aspettavo proprio, ma oggi è stata una bellissima giornata, sono contentissimo!"

# Build the same Alpaca-style prompt used during fine-tuning,
# leaving the response section empty for the model to complete
prompt = "Di seguito è riportata un'istruzione che descrive un'attività, abbinata ad un input che fornisce ulteriore informazione. " \
         "Scrivi una risposta che soddisfi adeguatamente la richiesta.\n\n" \
         f"### Istruzione:\n{instruction_text}\n\n" \
         f"### Input:\n{input_text}\n\n" \
         f"### Risposta:\n"

input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids=input_ids, max_new_tokens=128)  # max_new_tokens set here for illustration

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.batch_decode(outputs.detach().cpu().numpy()[:, input_ids.shape[1]:], skip_special_tokens=True)[0])
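If a GPU is available, generation will be much faster with the model and inputs moved to it (a minimal sketch, assuming a CUDA device and that torch is installed alongside transformers):

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"  # fall back to CPU when no GPU is present
model = model.to(device)
input_ids = input_ids.to(device)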
If you are facing issues when loading the model (e.g., running out of memory), you can try loading it with 8-bit quantization:
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True)
Note: the model loading strategy above requires the bitsandbytes and accelerate libraries.
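With recent transformers versions, the same 8-bit loading can also be expressed through an explicit quantization config (a sketch, assuming transformers >= 4.30 with bitsandbytes and accelerate installed):

from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)  # 8-bit weights via bitsandbytes
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config)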
Evaluation
Coming soon!
Citation
If you use this model in your research, please cite the following:
@misc{basile2023llamantino,
      title={LLaMAntino: LLaMA 2 Models for Effective Text Generation in Italian Language},
      author={Pierpaolo Basile and Elio Musacchio and Marco Polignano and Lucia Siciliani and Giuseppe Fiameni and Giovanni Semeraro},
      year={2023},
      eprint={2312.09993},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
Notice: Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.