---
library_name: transformers
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-7b
license: apache-2.0
language:
- es
---

# Model Card for Model ID

## Model Details

### Model Description

This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the generator dataset. This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [Hacendado](https://huggingface.co/hacendado) and the QA-legal-refugees team
- **Language(s) (NLP):** Spanish
- **Finetuned from model:** [google/gemma-7b](https://huggingface.co/google/gemma-7b)

## Uses

### Direct Use

The primary objective of this model is to facilitate question answering (QA) over Spanish refugee legislation, drawing on its fine-tuned understanding of the nuances and intricacies of this legal domain.

### Out-of-Scope Use

Misuse includes any application that promotes unethical practices, misinterprets refugee law, or uses the model for malicious purposes. The model is not designed to replace professional legal advice.

## Bias, Risks, and Limitations

The model, while powerful, has limitations inherent to AI, including biases present in the training data. It may not cover every nuance of refugee regulations or adapt to changes in the law without updates.

## Training Details

### Training Data

The dataset used was [instruct-legal-refugiados-es](https://huggingface.co/datasets/somosnlp/instruct-legal-refugiados-es).

We wanted to build a conversational model, so we investigated the base model's prompt format and adopted the [ChatML format](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/ai-services/openai/includes/chat-markup-language.md#working-with-chat-markup-language-chatml). We identified the special tokens so the model could understand the different roles in a conversation.

Example:

```
<|im_start|>system
You are Gemma.<|im_end|>
<|im_start|>user
Hello, how are you?<|im_end|>
<|im_start|>assistant
I'm doing great. How can I help you today?<|im_end|>\n
```

We then used [Phil Schmid's Gemma ChatML tokenizer](https://huggingface.co/philschmid/gemma-tokenizer-chatml) to adapt our dataset for training.

### Training Procedure

Training was done on an RTX 4090 from Vast.ai using PEFT with LoRA.

#### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 66
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
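
For reference, the sketch below shows how the ChatML formatting and the hyperparameters above could be wired together with `trl`'s `SFTTrainer` and a PEFT LoRA config. It is a minimal illustration rather than the exact training script: the dataset column names, the LoRA settings, and the output path are assumptions, and argument names differ slightly between `trl` versions.

```python
# Illustrative sketch of the fine-tuning setup (TRL SFTTrainer + PEFT LoRA on
# ChatML-formatted data). Column names, LoRA values, and the output path are
# assumptions; exact SFTTrainer argument names vary between trl versions.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

# Tokenizer carrying the ChatML chat template for Gemma
tokenizer = AutoTokenizer.from_pretrained("philschmid/gemma-tokenizer-chatml")
# On a single 24 GB GPU, 4-bit quantization (QLoRA) would likely be needed in practice.
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")

dataset = load_dataset("somosnlp/instruct-legal-refugiados-es", split="train")

def format_chatml(example):
    # Hypothetical column names; adapt to the actual dataset schema.
    messages = [
        {"role": "user", "content": example["question"]},
        {"role": "assistant", "content": example["answer"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = dataset.map(format_chatml)

peft_config = LoraConfig(  # illustrative LoRA settings, not the exact ones used
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"
)

args = TrainingArguments(
    output_dir="gemma-7b-refugees-qa",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=3,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    seed=66,
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    dataset_text_field="text",
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```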
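
At inference time the same ChatML template should be applied. The snippet below is a hypothetical usage sketch; `your-org/your-finetuned-gemma` is a placeholder for wherever the fine-tuned model is published, not the actual repository name.

```python
# Hypothetical usage sketch: query the fine-tuned model with the same ChatML
# template it was trained on. The model id below is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-finetuned-gemma"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "¿Qué requisitos debo cumplir para solicitar asilo en España?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Print only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```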