---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- portugues
- portuguese
- QA
- instruct
- phi
base_model: microsoft/Phi-3-mini-4k-instruct
datasets:
- rhaymison/superset
pipeline_tag: text-generation
---

# Phi-3-portuguese-tom-cat-4k-instruct
This model was trained with a superset of 300,000 instructions in Portuguese. It aims to help fill the gap in Portuguese-language models. Tuned from microsoft/Phi-3-mini-4k-instruct.

# How to use

### FULL MODEL : A100
### HALF MODEL : L4
### 8bit or 4bit : T4 or V100

You can use the model from its normal form down to 4-bit quantization. Below we will use both approaches. Remember that verbs are important in your prompt. Tell your model how to act or behave so that you can guide it along the path of its response. Important points like these help models (even smaller models like 4b) perform much better.

```python
!pip install -q -U transformers
!pip install -q -U accelerate
!pip install -q -U bitsandbytes

from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the model in full precision on GPU 0
model = AutoModelForCausalLM.from_pretrained("rhaymison/phi-3-portuguese-tom-cat-4k-instruct",
                                             device_map={"": 0})
tokenizer = AutoTokenizer.from_pretrained("rhaymison/phi-3-portuguese-tom-cat-4k-instruct")
model.eval()
```

You can use it with a Pipeline:

```python
from transformers import pipeline

pipe = pipeline("text-generation",
                model=model,
                tokenizer=tokenizer,
                do_sample=True,
                max_new_tokens=512,
                num_beams=2,
                temperature=0.3,
                top_k=50,
                top_p=0.95,
                early_stopping=True,
                pad_token_id=tokenizer.eos_token_id,
                )

def format_template(question: str):
    # Instruction preamble in Portuguese: "Below is an instruction that describes a task,
    # along with an input that provides more context. Write a response that appropriately
    # completes the request."
    system_prompt = "Abaixo está uma instrução que descreve uma tarefa, juntamente com uma entrada que fornece mais contexto. Escreva uma resposta que complete adequadamente o pedido."
    # Phi-3 chat format: system and user turns are delimited by <|end|>, ending with
    # the assistant tag where generation begins.
    return f"""<s><|system|>
{system_prompt}<|end|>
<|user|>
{question}<|end|>
<|assistant|>
"""
```
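With the helper above, generation is a single pipeline call. A minimal usage sketch; the question below is only an illustrative example:

```python
# Hypothetical example question: "What should I keep in mind when writing a resume?"
question = "O que devo ter em mente na hora de escrever um currículo?"

output = pipe(format_template(question))
print(output[0]["generated_text"])
```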
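For the T4/V100 path listed above, the same checkpoint can instead be loaded quantized through bitsandbytes. A minimal sketch, assuming common 4-bit NF4 settings (these defaults are not prescribed by this card; adjust to your hardware):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Assumed 4-bit NF4 quantization settings
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # float16 keeps T4/V100 compatibility
)

model = AutoModelForCausalLM.from_pretrained(
    "rhaymison/phi-3-portuguese-tom-cat-4k-instruct",
    quantization_config=bnb_config,
    device_map={"": 0},
)
tokenizer = AutoTokenizer.from_pretrained("rhaymison/phi-3-portuguese-tom-cat-4k-instruct")
model.eval()
```

For 8-bit loading instead, replace `load_in_4bit=True` with `load_in_8bit=True` and drop the 4-bit-specific options.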