
Llama3-portuguese-luana-8b-instruct

This model was trained on a superset of 290,000 Portuguese chat examples. It helps fill the gap in Portuguese-language models. Fine-tuned from Llama3 8B, it is adjusted mainly for chat.

How to use

FULL MODEL: A100

HALF MODEL: L4 (see the half-precision sketch below)

8-BIT OR 4-BIT: T4 or V100

You can use the model in its full form or with quantization down to 4-bit. Both approaches are shown below. Remember that verbs matter in your prompt: tell the model how to act or behave so you can guide it toward the response you want. Details like these help models (even smaller ones like 8B) perform much better.
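
As a quick illustration (a hypothetical example, not a prescribed prompt), compare a vague prompt with a directive one:

# Vague: the model must guess the role and format it should adopt
vague = "Roma antiga"

# Directive: verbs tell the model how to act and what shape the answer takes
# ("Explain, in three paragraphs, as a history teacher, who the Romans were.")
directive = "Explique, em três parágrafos, como um professor de história, quem eram os Romanos."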

!pip install -q -U transformers
!pip install -q -U accelerate
!pip install -q -U bitsandbytes

from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the full-precision model onto GPU 0 (needs an A100; see the notes above)
model = AutoModelForCausalLM.from_pretrained(
    "rhaymison/Llama3-portuguese-luana-8b-instruct", device_map={"": 0}
)
tokenizer = AutoTokenizer.from_pretrained("rhaymison/Llama3-portuguese-luana-8b-instruct")
model.eval()
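
For the half-precision option mentioned above (L4 GPU), a minimal sketch assuming your GPU supports bfloat16:

import torch
from transformers import AutoModelForCausalLM

# bfloat16 weights take roughly half the memory of the default float32
model = AutoModelForCausalLM.from_pretrained(
    "rhaymison/Llama3-portuguese-luana-8b-instruct",
    torch_dtype=torch.bfloat16,
    device_map={"": 0},
)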

You can use the model with a pipeline:


from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    do_sample=True,        # sample instead of greedy decoding
    max_new_tokens=256,
    num_beams=2,
    temperature=0.3,       # low temperature keeps answers focused
    top_k=50,
    top_p=0.95,
    early_stopping=True,
    pad_token_id=tokenizer.eos_token_id,
)


def format_prompt(question: str) -> str:
    # System prompt (in Portuguese): "Below is an instruction that describes a
    # task, along with an input that provides more context. Write a response
    # that appropriately completes the request."
    system_prompt = "Abaixo está uma instrução que descreve uma tarefa, juntamente com uma entrada que fornece mais contexto. Escreva uma resposta que complete adequadamente o pedido."
    # Build the Llama 3 chat format; note the double newline after each header
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )
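
Alternatively, assuming the repository's tokenizer ships the Llama 3 chat template, tokenizer.apply_chat_template builds the same structure without hand-writing the special tokens:

# Hypothetical messages list; apply_chat_template inserts the special tokens
messages = [
    {"role": "user", "content": "Me explique quem eram os Romanos"},
]
chat_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)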

# "Explain to me who the Romans were"
prompt = format_prompt("Me explique quem eram os Romanos")
result = pipe(prompt)
# Keep only the text generated after the assistant header
print(result[0]["generated_text"].split("assistant<|end_header_id|>")[1])



Sample output (the model answers in Portuguese):

#Os romanos eram um povo antigo que habitava a península italiana, particularmente na região que hoje é conhecida como Itália. Eles estabeleceram o Império Romano,
#que se tornou uma das maiores e mais poderosas civilizações da história. Os romanos eram conhecidos por suas conquistas militares, sua arquitetura e engenharia
#impressionantes e sua influência duradoura na cultura ocidental.
#Os romanos eram uma sociedade complexa que consistia em várias classes sociais, incluindo senadores, cavaleiros, plebeus e escravos.
#Eles tinham um sistema de governo baseado em uma república, onde o poder era dividido entre o Senado e a Assembléia do Povo.
#Os romanos eram conhecidos por suas conquistas militares, que os levaram a expandir seu império por toda a Europa, Ásia e África.
#Eles estabeleceram uma rede de estradas, pontes e outras estruturas que facilitaram a comunicação e o comércio.
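
The TextStreamer imported earlier can print tokens as they are generated instead of waiting for the full completion; a minimal sketch reusing the model, tokenizer, and prompt from above:

from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Tokens are printed to stdout as soon as they are decoded
_ = model.generate(
    **inputs,
    streamer=streamer,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.3,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)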

If you run into memory problems such as "CUDA out of memory", use 4-bit or 8-bit quantization. For the full model in Colab you will need an A100; with 4-bit or 8-bit quantization, a T4 or L4 is enough.

4-bit example

from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# NF4 4-bit quantization with double quantization and bfloat16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

base_model = "rhaymison/Llama3-portuguese-luana-8b-instruct"
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map={"": 0},
)
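
The text above also mentions 8-bit; a minimal sketch using the same BitsAndBytesConfig API:

from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_8bit_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "rhaymison/Llama3-portuguese-luana-8b-instruct",
    quantization_config=bnb_8bit_config,
    device_map={"": 0},
)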

Open Portuguese LLM Leaderboard Evaluation Results

Detailed results can be found on the 🚀 Open Portuguese LLM Leaderboard.

Metric                       Value
----------------------------------
Average                      68.15
ENEM Challenge (No Images)   69
BLUEX (No Images)            51.74
OAB Exams                    47.56
Assin2 RTE                   89.24
Assin2 STS                   72.87
FaQuAD NLI                   68.94
HateBR Binary                85.93
PT Hate Speech Binary        64.16
tweetSentBR                  63.91

Comments

Any ideas, help, or reports are always welcome.

email: [email protected]
