# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details
This is another "SOVL" style merge, this time using mlabonne/NeuralDaredevil-8B-abliterated.

Daredevil is the first abliterated model series I've tried that feels as smart as base Llama-3-Instruct while also being willing to give instructions for all kinds of illegal things.

NeuralDaredevil is trained further on top of the original abliterated model, which should result in a better experience in most scenarios (a bandaid for the damage abliteration causes).

This model should do well in RP, but I've yet to test it (waiting for GGUF files @_@).
### Merge Method

This model was merged using the Model Stock merge method, with mlabonne/NeuralDaredevil-8B-abliterated as the base.
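For intuition, Model Stock averages the fine-tuned weights and then interpolates that average back toward the base model, with a ratio derived from the angle between the models' task vectors (fine-tuned minus base). The sketch below is my own minimal illustration of that idea on flat weight vectors, not mergekit's actual implementation:

```python
import math


def model_stock_merge(base, finetuned):
    """Toy sketch of the Model Stock idea: interpolate the average of
    fine-tuned weight vectors toward the base, using a ratio derived
    from the average pairwise cosine between task vectors."""
    k = len(finetuned)
    # task vectors: each fine-tuned model's offset from the base
    deltas = [[w - b for w, b in zip(m, base)] for m in finetuned]

    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    # average pairwise cosine similarity between task vectors
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    cos_theta = sum(cos(deltas[i], deltas[j]) for i, j in pairs) / len(pairs)
    # interpolation ratio from the Model Stock paper (my reading of it)
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    # uniform average of the fine-tuned weights
    avg = [sum(col) / k for col in zip(*finetuned)]
    # pull the average toward the base: low agreement -> stay near base
    return [t * a + (1 - t) * b for a, b in zip(avg, base)]
```

When the task vectors agree (cosine near 1), the result stays close to the plain average; when they point in unrelated directions, the merge collapses toward the base model.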
### Models Merged
The following models were included in the merge:
- mlabonne/NeuralDaredevil-8B-abliterated + ResplendentAI/BlueMoon_Llama3
- mlabonne/NeuralDaredevil-8B-abliterated + ResplendentAI/Smarts_Llama3
- mlabonne/NeuralDaredevil-8B-abliterated + ResplendentAI/Luna_Llama3
- mlabonne/NeuralDaredevil-8B-abliterated + ResplendentAI/Aura_Llama3
- mlabonne/NeuralDaredevil-8B-abliterated + ResplendentAI/RP_Format_QuoteAsterisk_Llama3
### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: mlabonne/NeuralDaredevil-8B-abliterated+ResplendentAI/Aura_Llama3
  - model: mlabonne/NeuralDaredevil-8B-abliterated+ResplendentAI/Smarts_Llama3
  - model: mlabonne/NeuralDaredevil-8B-abliterated+ResplendentAI/Luna_Llama3
  - model: mlabonne/NeuralDaredevil-8B-abliterated+ResplendentAI/BlueMoon_Llama3
  - model: mlabonne/NeuralDaredevil-8B-abliterated+ResplendentAI/RP_Format_QuoteAsterisk_Llama3
merge_method: model_stock
base_model: mlabonne/NeuralDaredevil-8B-abliterated
dtype: bfloat16
```
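With the YAML above saved locally (the `config.yml` filename and output path below are my assumptions), a merge like this is typically reproduced with mergekit's `mergekit-yaml` CLI:

```shell
# Install mergekit, then run the merge (paths are illustrative)
pip install mergekit
mergekit-yaml config.yml ./Neural-SOVLish-Devil-8B-L3
```

The `model+adapter` syntax in each `model:` entry tells mergekit to apply the ResplendentAI LoRA to the NeuralDaredevil base before merging.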
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 72.22 |
| AI2 Reasoning Challenge (25-Shot) | 69.11 |
| HellaSwag (10-Shot) | 84.77 |
| MMLU (5-Shot) | 69.02 |
| TruthfulQA (0-shot) | 59.05 |
| Winogrande (5-shot) | 78.30 |
| GSM8k (5-shot) | 73.09 |