I recommend using the GGUF variant with koboldcpp (do not use GPT4ALL).
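For reference, a typical koboldcpp launch could look like the following. The filename and quantization level are assumptions — use whichever GGUF file you actually downloaded:

```shell
# Hypothetical invocation: the exact .gguf filename depends on the
# quantization you choose (Q4_K_M shown here as an example).
python koboldcpp.py --model GIGABATEMAN-7B.Q4_K_M.gguf --contextsize 4096
```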
I merged this model for myself. Over the course of a week, I analyzed the responses of more than 30 neural networks, chose the 4 most suitable ones according to my own criteria, and merged them into one.
Models that try hard to be uncensored but can't give even half the answers that my model can: Hermes-2-Pro-Mistral, toppy-m-7b, Lexi-Llama-3-8B-Uncensored, meta-llama-3.1-8b-instruct-abliterated, gemma-2-9b-it-abliterated, internlm2_5-7b-chat-abliterated, starling-lm-7b-alpha
Models that can't do anything at all: openchat-3.5-0106, Mistral-7B-v0.3, Mistral-Nemo-Instruct-2407, xLAM-7b-fc-r, gemma-2-9b-it, GPT-4o, Meta-Llama-3.1-70B, Meta-Llama-3-8B, Meta-Llama-3.1-8B, Claude 3 Haiku, Qwen2-7B, Mixtral 8x7B, gorilla-openfunctions-v2, internlm2_5-7b-chat
With GIGABATEMAN-7B, you can talk about everything that is usually forbidden in other models: sex, violence, death, war, politics, history, cruelty to animals, religious fanaticism, corruption in high places, privacy invasions, environmental disasters, economic inequality, addiction and substance abuse, poverty and homelessness, racial tensions, mental health issues, government surveillance, the decline of traditional values, the loss of personal freedoms, the impact of technology on society, the erosion of civil liberties, the rise of authoritarianism, and other extreme, explicit, or taboo subjects that mainstream models refuse to touch.
Even so, GIGABATEMAN-7B will happily write you a detailed explanation of how a processor works, or cover all the basics of color theory.
With minimal warnings, and without lecturing you on why you shouldn't do this.
If you are tired of neural networks whose output is 90% warnings and 10% answer, this neural network is for you.
## Models Merged
- LemonadeRP-4.5.3 (base)
- Silicon-Alice-7B
- zephyr-7b-beta
- InfinityRP-v1-7B
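The author's actual merge recipe is not published. A mergekit config along these lines could produce a similar 4-model merge — the merge method, weights, densities, and full repository IDs below are my assumptions, not the real settings:

```yaml
# Hypothetical mergekit config -- method and parameters are guesses.
# Replace the short model names with full Hugging Face repo IDs.
models:
  - model: LemonadeRP-4.5.3        # base model
  - model: Silicon-Alice-7B
    parameters:
      weight: 0.33
      density: 0.5
  - model: zephyr-7b-beta
    parameters:
      weight: 0.33
      density: 0.5
  - model: InfinityRP-v1-7B
    parameters:
      weight: 0.33
      density: 0.5
merge_method: dare_ties
base_model: LemonadeRP-4.5.3
dtype: bfloat16
```

With mergekit installed, such a config would be run as `mergekit-yaml config.yml ./output-model`.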
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 20.35 |
| IFEval (0-Shot) | 46.07 |
| BBH (3-Shot) | 29.83 |
| MATH Lvl 5 (4-Shot) | 4.76 |
| GPQA (0-shot) | 5.26 |
| MuSR (0-shot) | 11.97 |
| MMLU-PRO (5-shot) | 24.18 |