# M4-ai/NeuralReyna-Mini-1.8B-v0.3-GGUF
Quantized GGUF model files for NeuralReyna-Mini-1.8B-v0.3 from M4-ai
| Name | Quant method | Size |
|---|---|---|
| neuralreyna-mini-1.8b-v0.3.fp16.gguf | fp16 | 3.68 GB |
| neuralreyna-mini-1.8b-v0.3.q2_k.gguf | q2_k | 846.57 MB |
| neuralreyna-mini-1.8b-v0.3.q3_k_m.gguf | q3_k_m | 1.02 GB |
| neuralreyna-mini-1.8b-v0.3.q4_k_m.gguf | q4_k_m | 1.22 GB |
| neuralreyna-mini-1.8b-v0.3.q5_k_m.gguf | q5_k_m | 1.38 GB |
| neuralreyna-mini-1.8b-v0.3.q6_k.gguf | q6_k | 1.58 GB |
| neuralreyna-mini-1.8b-v0.3.q8_0.gguf | q8_0 | 1.96 GB |
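As a rough way to compare the quants above, a file size can be converted into an approximate bits-per-weight figure. The sketch below assumes ~1.8B parameters and GiB-based sizes (both assumptions; GGUF files also store metadata and some tensors at higher precision, so this slightly overstates the per-weight rate):

```python
# Rough bits-per-weight estimate from file size, assuming ~1.8B parameters
# and sizes reported in GiB (both assumptions, not confirmed by the card).
PARAMS = 1.8e9

def bits_per_weight(size_gib: float) -> float:
    """Convert a file size in GiB to an approximate bits-per-weight figure."""
    return size_gib * 2**30 * 8 / PARAMS

# Sizes taken from the table above.
for name, gib in [("q4_k_m", 1.22), ("q8_0", 1.96), ("fp16", 3.68)]:
    print(f"{name}: ~{bits_per_weight(gib):.1f} bits/weight")
```

This makes it easier to pick a file for a given RAM budget: the whole model plus context must fit in memory when running under llama.cpp-style loaders.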
## Original Model Card: NeuralReyna-Mini-1.8B-v0.3

### Description
Taken from aloobun/Reyna-Mini-1.8B-v0.2 and further fine-tuned with DPO on the argilla/OpenHermes2.5-dpo-binarized-alpha dataset.
This model has capabilities in coding, math, science, roleplay, and function calling.
This model was trained on OpenAI's ChatML prompt format.
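Since the model was trained on ChatML, prompts should follow that template. A minimal sketch of building a ChatML prompt string (the helper name and example messages are illustrative, not part of the model card):

```python
def to_chatml(messages: list[dict]) -> str:
    """Render {role, content} messages in ChatML format, ending with an
    open assistant header so the model generates the reply from there."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2 + 2?"},
])
print(prompt)
```

When serving the GGUF files with llama.cpp or similar runtimes, passing a prompt in this shape (or enabling the runtime's ChatML template, if available) keeps generation consistent with how the model was trained.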
### Evaluation
Coming soon
### Contributions
Thanks to @aloobun and @Locutusque for their contributions to this model.
### Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 41.77 |
| AI2 Reasoning Challenge (25-shot) | 35.58 |
| HellaSwag (10-shot) | 61.13 |
| MMLU (5-shot) | 44.22 |
| TruthfulQA (0-shot) | 41.99 |
| Winogrande (5-shot) | 60.93 |
| GSM8k (5-shot) | 6.75 |
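The reported average is the plain arithmetic mean of the six benchmark scores, which can be checked directly:

```python
# Benchmark scores from the table above; the average is their arithmetic mean.
scores = {
    "ARC (25-shot)": 35.58,
    "HellaSwag (10-shot)": 61.13,
    "MMLU (5-shot)": 44.22,
    "TruthfulQA (0-shot)": 41.99,
    "Winogrande (5-shot)": 60.93,
    "GSM8k (5-shot)": 6.75,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 41.77
```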
## Model tree for afrideva/NeuralReyna-Mini-1.8B-v0.3-GGUF

Base model: M4-ai/NeuralReyna-Mini-1.8B-v0.3
## Evaluation results

As reported on the Open LLM Leaderboard:

- AI2 Reasoning Challenge (25-shot), test set: 35.58 (normalized accuracy)
- HellaSwag (10-shot), validation set: 61.13 (normalized accuracy)
- MMLU (5-shot), test set: 44.22 (accuracy)
- TruthfulQA (0-shot), validation set: 41.99 (mc2)
- Winogrande (5-shot), validation set: 60.93 (accuracy)
- GSM8k (5-shot), test set: 6.75 (accuracy)