---
license: llama2
language:
- it
tags:
- text-generation-inference
---

# Model Card for LLaMAntino-2-chat-7b-ITA

## Model description

<!-- Provide a quick summary of what the model is/does. -->

**LLaMAntino-2-chat-7b** is a *Large Language Model (LLM)*, an Italian-adapted version of **LLaMA 2 chat**.

This model aims to provide Italian NLP researchers with a base model for Italian dialogue use cases.

The model was trained using *QLoRA*, with [clean_mc4_it medium](https://huggingface.co/datasets/gsarti/clean_mc4_it/viewer/medium) as training data.

If you are interested in more details regarding the training procedure, you can find the code we used at the following link:

- **Repository:** https://github.com/swapUniba/LLaMAntino

**NOTICE**: the code has not been released yet. We apologize for the delay; it will be available as soon as possible!
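
Until that code is published, the sketch below illustrates what a *QLoRA* fine-tuning setup with the Hugging Face *peft* and *bitsandbytes* libraries typically looks like. All hyperparameter values (rank, target modules, etc.) are illustrative assumptions, not the configuration actually used for LLaMAntino:

```python
# Illustrative QLoRA setup -- hypothetical values, not the authors' configuration
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# QLoRA loads the frozen base model quantized to 4 bits (NF4)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
)
base_model = prepare_model_for_kbit_training(base_model)

# Trainable low-rank adapters on the attention projections (assumed targets)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are updated
```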

- **Developed by:** Pierpaolo Basile, Elio Musacchio, Marco Polignano, Lucia Siciliani, Giuseppe Fiameni, Giovanni Semeraro
- **Funded by:** PNRR project FAIR - Future AI Research
- **Compute infrastructure:** [Leonardo](https://www.hpc.cineca.it/systems/hardware/leonardo/) supercomputer
- **Model type:** LLaMA 2 chat
- **Language(s) (NLP):** Italian
- **License:** Llama 2 Community License
- **Finetuned from model:** [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf)

## How to Get Started with the Model

Below you can find an example of model usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "swap-uniba/LLaMAntino-2-chat-7b-hf-ITA"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Scrivi qui un possibile prompt"  # "Write a possible prompt here"

input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids=input_ids)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.batch_decode(outputs[:, input_ids.shape[1]:], skip_special_tokens=True)[0])
```
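
Since this is a chat-tuned model, wrapping the input in the standard LLaMA 2 chat template usually gives better responses. A sketch follows; the system message and the generation parameters are illustrative choices, not official recommendations:

```python
# Standard LLaMA 2 chat template; the system message is an illustrative choice
system = "Rispondi in modo chiaro e conciso."  # "Answer clearly and concisely."
user_message = "Scrivi qui un possibile prompt"
prompt = f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user_message} [/INST]"

# <s> is already in the string, so skip the tokenizer's special tokens
input_ids = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids
outputs = model.generate(
    input_ids=input_ids,
    max_new_tokens=256,  # cap the response length
    do_sample=True,      # sample rather than greedy decoding
    temperature=0.7,     # illustrative values, tune to taste
    top_p=0.9,
)
print(tokenizer.batch_decode(outputs[:, input_ids.shape[1]:], skip_special_tokens=True)[0])
```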

If you are facing issues when loading the model, you can try loading it quantized:

```python
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True)
```

*Note*: the model loading strategy above requires the [*bitsandbytes*](https://pypi.org/project/bitsandbytes/) and [*accelerate*](https://pypi.org/project/accelerate/) libraries.
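
If loading still fails on limited hardware, you can additionally pass `device_map="auto"` so that *accelerate* places the weights across the available devices automatically (a standard *transformers*/*accelerate* option, shown here as a suggestion):

```python
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_8bit=True,
    device_map="auto",  # let accelerate spread weights across GPU(s)/CPU
)
```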

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

If you use this model in your research, please cite the following:

```bibtex
@misc{basile2023llamantino,
      title={LLaMAntino: LLaMA 2 Models for Effective Text Generation in Italian Language},
      author={Pierpaolo Basile and Elio Musacchio and Marco Polignano and Lucia Siciliani and Giuseppe Fiameni and Giovanni Semeraro},
      year={2023},
      eprint={2312.09993},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```