Model Card for LLaMAntino-2-13b-ITA

Last Update: 22/01/2024

Model description

LLaMAntino-2-13b is a Large Language Model (LLM): an Italian-adapted version of LLaMA 2. This model aims to provide Italian NLP researchers with a base model for natural language generation tasks.

The model was trained using QLoRA on the medium split of the clean_mc4_it dataset. If you are interested in more details regarding the training procedure, you can find the code we used at the following link:

NOTICE: the code has not been released yet; we apologize for the delay, it will be available as soon as possible!
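In the meantime, below is a minimal sketch of a typical QLoRA setup using the transformers, peft, and bitsandbytes libraries. This is NOT the authors' released training code; the hyperparameters and target modules are illustrative assumptions only.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "meta-llama/Llama-2-13b-hf"

# 4-bit NF4 quantization of the frozen base weights, as is typical for QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Trainable low-rank adapters on the attention projections
# (rank, alpha, and dropout are assumptions, not the authors' values)
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()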

  • Developed by: Pierpaolo Basile, Elio Musacchio, Marco Polignano, Lucia Siciliani, Giuseppe Fiameni, Giovanni Semeraro
  • Funded by: PNRR project FAIR - Future AI Research
  • Compute infrastructure: Leonardo supercomputer
  • Model type: LLaMA 2
  • Language(s) (NLP): Italian
  • License: Llama 2 Community License
  • Finetuned from model: meta-llama/Llama-2-13b-hf

How to Get Started with the Model

Below you can find an example of model usage:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "swap-uniba/LLaMAntino-2-13b-hf-ITA"

# Load the tokenizer and model weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Scrivi qui un possibile prompt"  # "Write a possible prompt here"

# Tokenize the prompt and generate a continuation
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids=input_ids)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.batch_decode(outputs[:, input_ids.shape[1]:], skip_special_tokens=True)[0])
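In practice you will likely want to run the model on a GPU and control the generation length and sampling strategy. A minimal variant is sketched below; it assumes a CUDA-capable device is available, and the generation parameters are illustrative values to tune for your use case:

import torch

device = "cuda"  # assumes a CUDA-capable GPU is available
model = model.to(device)

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
outputs = model.generate(
    input_ids=input_ids,
    max_new_tokens=128,  # cap on the length of the generated continuation
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # illustrative sampling parameters
    top_p=0.9,
)
print(tokenizer.batch_decode(outputs[:, input_ids.shape[1]:], skip_special_tokens=True)[0])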

If you are facing memory issues when loading the model, you can try loading it quantized:

model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True)

Note: the model loading strategy above requires the bitsandbytes and accelerate libraries to be installed.
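Recent versions of transformers prefer an explicit quantization config over the load_in_8bit argument; an equivalent sketch using BitsAndBytesConfig:

from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Equivalent 8-bit loading via an explicit quantization config
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=quantization_config, device_map="auto"
)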

Citation

If you use this model in your research, please cite the following:

@misc{basile2023llamantino,
      title={LLaMAntino: LLaMA 2 Models for Effective Text Generation in Italian Language}, 
      author={Pierpaolo Basile and Elio Musacchio and Marco Polignano and Lucia Siciliani and Giuseppe Fiameni and Giovanni Semeraro},
      year={2023},
      eprint={2312.09993},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

Notice: Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
