---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
- openassistant-guanaco
- ipex
- Gaudi
base_model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
datasets:
- timdettmers/openassistant-guanaco
model-index:
- name: Not-so-bright-AGI-VAGO-Llama3-8B-Guanaco-v1
  results:
  - task:
      type: text-generation
    dataset:
      name: ai2_arc
      type: ai2_arc
    metrics:
    - name: AI2 Reasoning Challenge
      type: AI2 Reasoning Challenge
      value: 66.89
    - name: HellaSwag
      type: HellaSwag
      value: 82.32
    - name: MMLU
      type: MMLU
      value: 66.04
    - name: TruthfulQA
      type: TruthfulQA
      value: 63.48
    - name: Winogrande
      type: Winogrande
      value: 74.98
    source:
      name: Powered-by-Intel LLM Leaderboard
      url: https://huggingface.co/spaces/Intel/powered_by_intel_llm_leaderboard
language:
- en
metrics:
- accuracy
- bertscore
- bleu
pipeline_tag: question-answering
---

# yuriachermann/Not-so-bright-AGI-VAGO-Llama3-8B-Guanaco-v1

**Model Type:** Fine-Tuned

**Model Base:** [VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct)

**Dataset Used:** [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco)

**Author:** [Yuri Achermann](https://huggingface.co/yuriachermann)

**Date:** July 30, 2024

-------------------------

## Training procedure

### Training Hyperparameters

The following hyperparameters were used during training (a hedged sketch of the corresponding trainer setup appears under "Training Setup Sketch" at the end of this card):
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 100
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05

### Framework versions

- PEFT==0.11.1
- Transformers==4.41.2
- PyTorch==2.1.0.post0+cxx11.abi
- Datasets==2.19.2
- Tokenizers==0.19.1

-------------------------

## Intended uses & limitations

**Primary Use Case:** The model is intended for generating human-like responses in conversational applications, such as chatbots or virtual assistants. A hedged loading-and-generation sketch appears under "Example Usage" at the end of this card.

**Limitations:** The model may generate inaccurate or biased content, as it reflects the data it was trained on. It is essential to evaluate generated responses in context and to use the model responsibly.

-------------------------

## Evaluation

The evaluation platform consists of Gaudi accelerators and Xeon CPUs running benchmarks from the [Eleuther AI Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness); a hedged reproduction sketch appears under "Reproducing the Evaluation" at the end of this card.

| Average | ARC   | HellaSwag | MMLU  | TruthfulQA | Winogrande |
|:-------:|:-----:|:---------:|:-----:|:----------:|:----------:|
| 70.742  | 66.89 | 82.32     | 66.04 | 63.48      | 74.98      |

-------------------------

## Ethical Considerations

The model may inherit biases present in the training data. It is crucial to use the model in a way that promotes fairness and mitigates potential biases.

-------------------------

## Acknowledgments

This fine-tuning effort was made possible by the support of Intel, which provided the computing resources, and of [Eduardo Alvarez](https://huggingface.co/eduardo-alvarez). An additional shout-out goes to the creators of the VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct model and the contributors to the timdettmers/openassistant-guanaco dataset.

-------------------------

## Contact Information

For questions or feedback about this model, please contact **[Yuri Achermann](mailto:yuri.achermann@gmail.com)**.

-------------------------

## License

This model is distributed under the **Meta Llama 3 Community License** (`llama3`), which it inherits from the base model.
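-------------------------

## Training Setup Sketch

The hyperparameters above describe the run but not the trainer wiring. Below is a minimal, hardware-agnostic sketch of how they map onto a TRL `SFTTrainer` with a PEFT/LoRA config. The LoRA shape (`r`, `lora_alpha`, `lora_dropout`) and `max_seq_length` are illustrative assumptions, not recorded settings, and a Gaudi run would substitute the Optimum Habana equivalents of the vanilla Transformers/TRL classes.

```python
# Minimal SFT sketch mirroring the listed hyperparameters.
# LoRA settings and max_seq_length are ASSUMPTIONS for illustration.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base = "VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Guanaco ships a single "text" column of pre-formatted conversations.
train_ds = load_dataset("timdettmers/openassistant-guanaco", split="train")

peft_config = LoraConfig(  # assumed adapter shape
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"
)

args = TrainingArguments(
    output_dir="sft-out",
    learning_rate=5e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    seed=100,
    bf16=True,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=args,
    train_dataset=train_ds,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=1024,  # assumed
)
trainer.train()
```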
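-------------------------

## Example Usage

As referenced under "Intended uses & limitations", here is a minimal generation sketch. It assumes this repository hosts a PEFT adapter (per `library_name: peft`) that `AutoPeftModelForCausalLM` can resolve against the base model, and that the tokenizer carries the base model's Llama 3 chat template; the prompt and sampling settings are illustrative.

```python
# Load the adapter on top of the base model and generate one chat turn.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "yuriachermann/Not-so-bright-AGI-VAGO-Llama3-8B-Guanaco-v1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoPeftModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)
model.eval()

messages = [{"role": "user", "content": "What is instruction tuning?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For multi-turn chatbot use, keep extending the `messages` list with each assistant reply before appending the next user message.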
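-------------------------

## Reproducing the Evaluation

The scores above come from the leaderboard's harness runs; a rough local approximation using the harness's Python API might look like the sketch below. The task variants, the `peft=` adapter argument, and the default few-shot settings are assumptions, and the leaderboard's exact configuration may differ.

```python
# Approximate re-run of the reported benchmarks with lm-evaluation-harness.
# Task names and few-shot settings are ASSUMPTIONS; scores may not match
# the leaderboard exactly.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct,"
        "peft=yuriachermann/Not-so-bright-AGI-VAGO-Llama3-8B-Guanaco-v1,"
        "dtype=bfloat16"
    ),
    tasks=["arc_challenge", "hellaswag", "mmlu", "truthfulqa_mc2", "winogrande"],
)

# Print per-task metric dictionaries (accuracy, normalized accuracy, etc.).
for task, metrics in results["results"].items():
    print(task, metrics)
```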