---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
- ultrachat_200k
- ipex
- Gaudi
base_model: meta-llama/Meta-Llama-3-8B
datasets:
- HuggingFaceH4/ultrachat_200k
model-index:
- name: Not-so-bright-AGI-Llama3-8B-UC200k-v1
  results:
  - task:
      type: text-generation
    dataset:
      name: ai2_arc
      type: ai2_arc
    metrics:
    - name: AI2 Reasoning Challenge
      type: AI2 Reasoning Challenge
      value: 55.89
    - name: HellaSwag
      type: HellaSwag
      value: 75.6
    - name: MMLU
      type: MMLU
      value: 65.79
    - name: TruthfulQA
      type: TruthfulQA
      value: 52.28
    - name: Winogrande
      type: Winogrande
      value: 71.27
    source:
      name: Powered-by-Intel LLM Leaderboard
      url: https://huggingface.co/spaces/Intel/powered_by_intel_llm_leaderboard
language:
- en
metrics:
- accuracy
- bertscore
- bleu
pipeline_tag: text-generation
---
# Not-so-bright-AGI-Llama3-8B-UC200k-v1
**Model Type:** Fine-tuned (PEFT adapter, supervised fine-tuning with TRL)
**Model Base:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
**Datasets Used:** [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)
**Author:** [Yuri Achermann](https://huggingface.co/yuriachermann)
**Date:** July 29, 2024
-------------------------
## Training procedure
### Training Hyperparameters
The following hyperparameters were used during training (a sketch of how they map onto a `transformers.TrainingArguments` config follows the list):
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 100
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
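For reference, a minimal sketch reconstructing these settings as a `transformers.TrainingArguments` config. This is not the author's original training script, and `output_dir` is a hypothetical path:

```python
# Illustrative reconstruction of the listed hyperparameters;
# not the author's original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./Not-so-bright-AGI-Llama3-8B-UC200k-v1",  # hypothetical path
    learning_rate=5e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=100,
    gradient_accumulation_steps=4,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```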
### Framework versions
- PEFT==0.11.1
- Transformers==4.41.2
- PyTorch==2.1.0.post0+cxx11.abi
- Datasets==2.19.2
- Tokenizers==0.19.1
-------------------------
## Intended uses & limitations
**Primary Use Case:** The model is intended for generating human-like responses in conversational applications, such as chatbots or virtual assistants.
**Limitations:** The model may generate inaccurate or biased content because it reflects the data it was trained on. Evaluate generated responses in context and use the model responsibly.
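A minimal inference sketch, assuming the adapter is published under the repo id `yuriachermann/Not-so-bright-AGI-Llama3-8B-UC200k-v1` (inferred from the author and model names above, so treat it as an assumption):

```python
# Minimal sketch: load the base model, attach the PEFT adapter, generate.
# The adapter repo id below is assumed from the card, not verified.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B"
adapter_id = "yuriachermann/Not-so-bright-AGI-Llama3-8B-UC200k-v1"  # assumed

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tokenizer("What is the capital of Switzerland?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```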
-------------------------
## Evaluation
The evaluation platform consists of Intel Gaudi accelerators and Xeon CPUs running benchmarks from the [Eleuther AI Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness); a reproduction sketch follows the results table.
| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande |
|:-------:|:-----:|:---------:|:-----:|:----------:|:----------:|
| 64.166 | 55.89 | 75.6 | 65.79 | 52.28 | 71.27 |
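A hedged sketch of how comparable numbers can be produced with the harness's Python API. The task names, few-shot settings, and adapter repo id are assumptions; the leaderboard's exact configuration may differ:

```python
# Sketch using lm-evaluation-harness's Python API; the task list and
# model args are assumptions, not the leaderboard's verified setup.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=meta-llama/Meta-Llama-3-8B,"
        "peft=yuriachermann/Not-so-bright-AGI-Llama3-8B-UC200k-v1"  # assumed repo id
    ),
    tasks=["arc_challenge", "hellaswag", "mmlu", "truthfulqa_mc2", "winogrande"],
)
print(results["results"])
```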
-------------------------
## Ethical Considerations
The model may inherit biases present in the training data. It is crucial to use the model in a way that promotes fairness and mitigates potential biases.
-------------------------
## Acknowledgments
This fine-tuning effort was made possible by the support of Intel, which provided the computing resources, and [Eduardo Alvarez](https://huggingface.co/eduardo-alvarez).
Additional shout-out to the creators of the Meta-Llama-3-8B model and the contributors to the HuggingFaceH4/ultrachat_200k dataset.
-------------------------
## Contact Information
For questions or feedback about this model, please contact **[Yuri Achermann](mailto:[email protected])**.
-------------------------
## License
This model is distributed under the **Meta Llama 3 Community License**.