# Model Card for ohno-8x7B-GGUF
- Model creator: rAIfle
- Original model: ohno-8x7B-fp16
ohno-8x7B quantized with love.
**Upload Notes:** Wanted to give this one a spin after seeing its unique merge recipe; I was curious how the Mixtral-8x7B-v0.1_case-briefs component affected the output.
Starting out with Q5_K_M; taking requests for any other quants. All quantizations are based on the original fp16 model.
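For reference, a Q5_K_M quant like this can be produced with llama.cpp's conversion and quantization tools. This is a sketch only: the script and binary names vary between llama.cpp versions, and the file paths below are illustrative.

```shell
# Convert the original fp16 checkpoint to GGUF (script ships with llama.cpp;
# named convert_hf_to_gguf.py in recent versions)
python convert-hf-to-gguf.py /path/to/ohno-8x7B-fp16 --outfile ohno-8x7B-f16.gguf

# Quantize the fp16 GGUF down to Q5_K_M (binary was formerly called ./quantize)
./llama-quantize ohno-8x7B-f16.gguf ohno-8x7B-Q5_K_M.gguf Q5_K_M
```

Other quant types (Q4_K_M, Q6_K, Q8_0, ...) are produced the same way by swapping the final argument.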
Any feedback is greatly appreciated!
## Original Model Card

### ohno-8x7b
this... will either be my magnum opus... or terrible. no in-betweens!
Post-test verdict: It's mostly braindamaged. Might be my settings or something, idk.
The `./output` mentioned below is my own merge, using an identical recipe to Envoid/Mixtral-Instruct-ITR-8x7B.
### output_merge2
This is a merge of pre-trained language models created using mergekit.
### Merge Details

#### Merge Method
This model was merged using the DARE TIES merge method, with Envoid/Mixtral-Instruct-ITR-8x7B as the base.
#### Models Merged
The following models were included in the merge:
- ./output/ + /ai/LLM/tmp/pefts/daybreak-peft/mixtral-8x7b
- Envoid/Mixtral-Instruct-ITR-8x7B + Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
- Envoid/Mixtral-Instruct-ITR-8x7B + retrieval-bar/Mixtral-8x7B-v0.1_case-briefs
- NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss
#### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: ./output/+/ai/LLM/tmp/pefts/daybreak-peft/mixtral-8x7b
    parameters:
      density: 0.66
      weight: 1.0
  - model: Envoid/Mixtral-Instruct-ITR-8x7B+retrieval-bar/Mixtral-8x7B-v0.1_case-briefs
    parameters:
      density: 0.1
      weight: 0.25
  - model: Envoid/Mixtral-Instruct-ITR-8x7B+Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
    parameters:
      density: 0.66
      weight: 0.5
  - model: NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss
    parameters:
      density: 0.15
      weight: 0.3
merge_method: dare_ties
base_model: Envoid/Mixtral-Instruct-ITR-8x7B
dtype: float16
```
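A merge from a config like the one above is run through mergekit's CLI. A minimal sketch, assuming the config is saved as `config.yml` (the output path is illustrative, and a Mixtral-sized DARE TIES merge needs substantial RAM and disk):

```shell
pip install mergekit

# Run the merge described by config.yml, writing the merged model to ./output_merge2;
# --lazy-unpickle reduces peak memory while loading the source checkpoints
mergekit-yaml config.yml ./output_merge2 --lazy-unpickle
```

The `model: base+adapter` syntax in the config tells mergekit to apply a LoRA/PEFT adapter on top of a base model before merging.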