---
datasets:
- chargoddard/Open-Platypus-Chat
language:
- en
tags:
- llama
---
Experimental ReLoRA-trained model using the OpenPlatypus dataset. Trained for one epoch, with three LoRA restarts.

Not recommended for use yet. Mostly tossing this up for testing.

Base model was [llama2-22b-blocktriangular](https://huggingface.co/chargoddard/llama2-22b-blocktriangular).

Relevant training parameters:
```
adapter: qlora
load_in_4bit: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.001
lora_target_linear: true
relora_steps: 150
relora_warmup_steps: 10
gradient_accumulation_steps: 2
micro_batch_size: 3
```
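For intuition, the sketch below shows the merge-and-restart cycle that `relora_steps` and `relora_warmup_steps` control: every `relora_steps` optimizer steps, the learned low-rank delta is folded into the base weights and a fresh adapter is started, with a short learning-rate warmup after each restart. The dimensions, learning rate, and loop here are illustrative assumptions, not axolotl's actual implementation:

```python
import torch

d, r, alpha = 64, 32, 16        # toy hidden size; lora_r and lora_alpha from above
W = torch.randn(d, d)           # frozen base weight
A = torch.randn(r, d) * 0.01    # LoRA down-projection
B = torch.zeros(d, r)           # LoRA up-projection (zero init: delta starts at 0)

RELORA_STEPS, WARMUP_STEPS = 150, 10  # relora_steps, relora_warmup_steps

for step in range(600):         # long enough for three restarts, as in this run
    if step > 0 and step % RELORA_STEPS == 0:
        W += (alpha / r) * (B @ A)  # fold the low-rank delta into the base...
        A = torch.randn(r, d) * 0.01
        B = torch.zeros(d, r)       # ...then start a fresh adapter for the next cycle
    # per-cycle linear LR warmup; a real run would take an optimizer step on A and B here
    lr = 2e-4 * min(1.0, (step % RELORA_STEPS + 1) / WARMUP_STEPS)
```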
Uses the same prompt format as [Ypotryll-22b](https://huggingface.co/chargoddard/ypotryll-22b-epoch2-qlora).

Prefix messages with `" ***System:"`, `" ***Query:"`, or `" ***Response:"`, paying attention to whitespace: each prefix begins with a leading space.
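As a hedged sketch of that format (only the prefixes themselves come from this card; whether the message body follows the colon directly or after a space is an assumption here), a prompt can be assembled and run like this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def build_prompt(system: str, query: str) -> str:
    # Prefixes carry a leading space, per the note above; the space after
    # each colon is assumed, not confirmed by the card.
    return f" ***System: {system} ***Query: {query} ***Response:"

model_id = "chargoddard/platypus-2-22b-relora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = build_prompt("You are a helpful assistant.", "What is ReLoRA?")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```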
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__platypus-2-22b-relora).
| Metric              | Value |
|---------------------|------:|
| Avg.                | 52.21 |
| ARC (25-shot)       | 57.68 |
| HellaSwag (10-shot) | 82.44 |
| MMLU (5-shot)       | 55.33 |
| TruthfulQA (0-shot) | 43.61 |
| Winogrande (5-shot) | 77.35 |
| GSM8K (5-shot)      |  6.6  |
| DROP (3-shot)       | 42.46 |