Aura v2
The second version of the Aura line is a direct improvement over the original. Expect poetic and eloquent outputs with real emotion behind them.
I recommend keeping the temperature around 1.5 or lower with a Min P value of 0.05. This model can get carried away with prose at higher temperatures. That said, its prose is distinct from the usual GPT-3.5/4 flavor and lends an air of humanity to the outputs. I am aware that this model is overfit, but that was the point of the entire exercise.
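As a minimal sketch of those settings, here is how they might be passed through llama-cpp-python. The GGUF filename is a placeholder for whichever quantization you use, and this assumes a llama-cpp-python build recent enough to expose the `min_p` parameter:

```python
from llama_cpp import Llama

# Placeholder filename: substitute the GGUF quantization of Aura v2 you downloaded.
llm = Llama(model_path="Aura_v2_7B.Q5_K_M.gguf", n_ctx=8192)

prompt = "Write a short, wistful paragraph about an empty train station at night."

output = llm(
    prompt,
    max_tokens=512,
    temperature=1.5,  # keep at or below ~1.5, per the recommendation above
    min_p=0.05,       # Min P threshold of 0.05
    top_p=1.0,        # leave other truncation samplers effectively disabled
)
print(output["choices"][0]["text"])
```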
If you have trouble getting the model to follow an asterisks/quote format, I recommend asterisks/plaintext instead. This model skews toward shorter outputs, so be prepared to lengthen your introduction and examples if you want longer outputs.
This model responds best to ChatML for multiturn conversations.
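For reference, a ChatML multiturn prompt looks like the following; the system and user messages here are placeholders:

```
<|im_start|>system
You are Aura, a poetic and emotionally expressive roleplay model.<|im_end|>
<|im_start|>user
Describe the sea at dusk.<|im_end|>
<|im_start|>assistant
```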
This model, like other Mistral-based models, is compatible with a Mistral-compatible mmproj file for multimodal vision capabilities in KoboldCPP.
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 75.36 |
| AI2 Reasoning Challenge (25-Shot) | 73.46 |
| HellaSwag (10-Shot) | 88.64 |
| MMLU (5-Shot) | 63.97 |
| TruthfulQA (0-shot) | 75.17 |
| Winogrande (5-shot) | 84.45 |
| GSM8k (5-shot) | 66.49 |
Base model: ResplendentAI/Paradigm_7B
Evaluation results

| Benchmark | Metric | Split | Score |
|---|---|---|---|
| AI2 Reasoning Challenge (25-Shot) | normalized accuracy | test | 73.46 |
| HellaSwag (10-Shot) | normalized accuracy | validation | 88.64 |
| MMLU (5-Shot) | accuracy | test | 63.97 |
| TruthfulQA (0-shot) | mc2 | validation | 75.17 |
| Winogrande (5-shot) | accuracy | validation | 84.45 |
| GSM8k (5-shot) | accuracy | test | 66.49 |

All scores reported via the Open LLM Leaderboard.