---
license: other
license_name: microsoft-research-license
tags:
- merge
- not-for-all-audiences
- gguf
- iMat
---
# DarkForest 20B v2.0 iMat GGUF
"The universe is a dark forest. Every civilization is an armed hunter stalking through the trees like a ghost, gently pushing aside branches that block the path and trying to tread without sound. Even breathing is done with care. The hunter has to be careful, because everywhere in the forest are stealthy hunters like him."- Liu Cixin
Quantized from fp16 with love. The importance matrix was calculated using the Q8_0 quant and wiki.train.raw.
For a brief rundown of iMatrix quant performance, please see this [PR](https://github.com/ggerganov/llama.cpp/pull/5747)
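For readers curious how importance-matrix quants like these are produced, below is a minimal sketch of the workflow using llama.cpp's command-line tools driven from Python. The binary names and every file path are assumptions (newer llama.cpp builds prefix the tools with `llama-`), so adjust them to your local build; the authoritative details are in the llama.cpp repo.

```python
# Minimal sketch of an importance-matrix quantization pass with llama.cpp,
# driven from Python. Binary names ("imatrix", "quantize") and all file paths
# are assumptions and differ between llama.cpp versions.
import subprocess

Q8_GGUF = "DarkForest-20B-v2.0-Q8_0.gguf"    # hypothetical local path (imatrix source)
FP16_GGUF = "DarkForest-20B-v2.0-f16.gguf"   # hypothetical local path (quantization source)
CALIB_TEXT = "wiki.train.raw"                # calibration text, as used for this repo
IMATRIX_OUT = "imatrix.dat"

# 1) Compute the importance matrix over the calibration text using the Q8_0 quant.
subprocess.run(
    ["./imatrix", "-m", Q8_GGUF, "-f", CALIB_TEXT, "-o", IMATRIX_OUT],
    check=True,
)

# 2) Quantize the fp16 model with the importance matrix applied (IQ3_S as an example target).
subprocess.run(
    ["./quantize", "--imatrix", IMATRIX_OUT, FP16_GGUF,
     "DarkForest-20B-v2.0-IQ3_S.gguf", "IQ3_S"],
    check=True,
)
```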
All quants are verified working prior to uploading to the repo, for your safety and convenience.
Importance matrix quantizations are a work in progress; IQ3 and above are recommended for best results.
Tip: Pick a size that can fit in your GPU while still allowing some room for context for best speed. You may need to pad this further depending on whether you are also running image generation or TTS.
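As a rough illustration of that tip, the sketch below compares a quant's file size plus an assumed context and overhead budget against available VRAM. All of the numbers are illustrative placeholders, not measurements of this model.

```python
# Back-of-the-envelope check of whether a quant leaves headroom for context.
# Every number here is an illustrative assumption, not a measurement.
def fits_in_vram(gguf_size_gb: float, vram_gb: float,
                 kv_cache_gb: float = 2.0, other_overhead_gb: float = 1.0) -> bool:
    """Return True if the model file plus an assumed context (KV cache) budget
    and other overhead (display, image gen, TTS, ...) fits in VRAM."""
    return gguf_size_gb + kv_cache_gb + other_overhead_gb <= vram_gb

# Example: a hypothetical ~8.5 GB quant on a 12 GB card.
print(fits_in_vram(8.5, 12.0))            # True  - fits with room for context
print(fits_in_vram(8.5, 12.0, 2.0, 3.0))  # False - extra load from image gen/TTS
```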
Original model card can be found [here](https://huggingface.co/TeeZee/DarkForest-20B-v2.0)
## Previous Model Card
Continuation of an ongoing initiative to bring the latest and greatest models to consumer hardware through SOTA techniques that reduce VRAM overhead.
After testing the new importance matrix quants for 11b and 8x7b models and being able to run them on machines without a dedicated GPU, we are now exploring the middle ground - 20b.
❗❗Need a different quantization/model? Please open a community post and I'll get back to you - thanks ❗❗
UPDATE 3/4/24: Newer quants ([IQ4_XS](https://github.com/ggerganov/llama.cpp/pull/5747), IQ2_S, etc) are confirmed working in Koboldcpp as of version [1.60](https://github.com/LostRuins/koboldcpp/releases/tag/v1.60) - if you run into any issues kindly let me know.
IQ3_S has been generated after PR [#5829](https://github.com/ggerganov/llama.cpp/pull/5829) was merged. This should provide a significant speed boost even if you are offloading to CPU.
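If you load these quants through llama-cpp-python rather than Koboldcpp, a partial-offload setup might look like the sketch below. The file name, layer count, and context size are assumptions; tune them to the quant you actually downloaded and the VRAM you have free.

```python
# Hedged sketch: loading an IQ3_S quant with partial GPU offload via
# llama-cpp-python (pip install llama-cpp-python, built with GPU support).
# The file name, n_gpu_layers, and n_ctx values are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="DarkForest-20B-v2.0-IQ3_S.gguf",  # hypothetical local file name
    n_gpu_layers=40,   # layers kept on the GPU; the rest run on the CPU
    n_ctx=4096,        # context window to reserve
)

out = llm("The universe is a dark forest.", max_tokens=64)
print(out["choices"][0]["text"])
```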
(Credits to [TeeZee](https://huggingface.co/TeeZee/) for the original model and [ikawrakow](https://github.com/ikawrakow) for the stellar work on IQ quants)
---
# DarkForest 20B v2.0
![image/png](https://huggingface.co/TeeZee/DarkForest-20B-v2.0/resolve/main/DarkForest-20B-v2.0.jpg)
## Model Details
- To create this model, a two-step procedure was used. First, a new 20B model was created using [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b)
and [KoboldAI/LLaMA2-13B-Erebus-v3](https://huggingface.co/KoboldAI/LLaMA2-13B-Erebus-v3); details of the merge are in [darkforest_v2_step1.yml](https://huggingface.co/TeeZee/DarkForest-20B-v2.0/resolve/main/darkforest_v2_step1.yml)
- Then [jebcarter/psyonic-cetacean-20B](https://huggingface.co/jebcarter/psyonic-cetacean-20B)
and [TeeZee/BigMaid-20B-v1.0](https://huggingface.co/TeeZee/BigMaid-20B-v1.0) were used to produce the final model; the merge config is in [darkforest_v2_step2.yml](https://huggingface.co/TeeZee/DarkForest-20B-v2.0/resolve/main/darkforest_v2_step2.yml) (a sketch of running these configs with mergekit follows this list)
- The resulting model has approximately 20 billion parameters.
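For readers reproducing the merge, the sketch below shows how the two step configs linked above might be run with mergekit from Python. The `mergekit-yaml` invocation and output directory names are assumptions; the linked darkforest_v2_step*.yml files remain the authoritative settings.

```python
# Hedged sketch of running the two-step merge with mergekit's CLI from Python.
# "mergekit-yaml" ships with the mergekit package; output directory names are
# assumptions, and the linked YAML files define the actual merge parameters.
import subprocess

# Step 1: build the intermediate 20B model from Orca-2-13b + Erebus-v3.
subprocess.run(
    ["mergekit-yaml", "darkforest_v2_step1.yml", "./darkforest-step1"],
    check=True,
)

# Step 2: merge the step-1 result with psyonic-cetacean-20B and BigMaid-20B
# to produce the final model.
subprocess.run(
    ["mergekit-yaml", "darkforest_v2_step2.yml", "./DarkForest-20B-v2.0"],
    check=True,
)
```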
**Warning: This model can produce NSFW content!**
## Results
- main difference from v1.0 - the model has a much better sense of humor.
- produces SFW and NSFW content without issues, switches context seamlessly.
- good at following instructions.
- good at tracking multiple characters in one scene.
- very creative, scenarios produced are mature and complicated, the model doesn't shy away from writing about PTSD, mental health issues or complicated relationships.
- NSFW output is more creative and surprising than typical LimaRP output.
- definitely for mature audiences, not only because of vivid NSFW content but also because of the overall maturity of the stories it produces.
- This is NOT Harry Potter level storytelling.
All comments are greatly appreciated. Download, test, and if you appreciate my work, consider buying me my fuel: