---
license: cc-by-nc-4.0
language:
  - en
inference: false
tags:
  - roleplay
  - llama3
  - sillytavern
---

#roleplay #sillytavern #llama3

My GGUF-IQ-Imatrix quants for Nitral-AI/Poppy_Porpoise-0.85-L3-8B.

"Isn't Poppy the cutest Porpoise?"

Quantization process:
For future reference, these quants were made after the fixes from #6920 were merged.
Since the original model was already in FP16, the imatrix data was generated from the FP16-GGUF, and the quantized conversions were made from it as well.
If you notice any issues, let me know in the discussions.
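For reference, the general imatrix quantization workflow with llama.cpp looks roughly like this. This is a sketch, not the exact commands used here: the file names and the calibration text are placeholders, and tool names may differ between llama.cpp versions.

```shell
# Convert the original HF model to an FP16 GGUF (paths are hypothetical).
python convert-hf-to-gguf.py ./Poppy_Porpoise-0.85-L3-8B --outtype f16 --outfile model-f16.gguf

# Generate importance-matrix data from the FP16 GGUF using a calibration text.
./imatrix -m model-f16.gguf -f calibration-data.txt -o imatrix.dat

# Quantize using the imatrix data, e.g. to Q4_K_M.
./quantize --imatrix imatrix.dat model-f16.gguf model-Q4_K_M-imat.gguf Q4_K_M
```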

General usage:
Use the latest version of KoboldCpp.
Remember that you can now use --flashattention in KoboldCpp, even with non-RTX cards, for reduced VRAM usage.
For 8GB VRAM GPUs, I recommend the Q4_K_M-imat quant for context sizes up to 12288.
For 12GB VRAM GPUs, the Q5_K_M-imat quant will give you a great size/quality balance.
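As a sketch, a KoboldCpp launch for an 8GB card might look like the following. The model file name and layer count are assumptions; tune --gpulayers to your GPU, and drop --usecublas on non-NVIDIA hardware.

```shell
# Hypothetical launch: Q4_K_M imatrix quant, 12288 context, flash attention enabled.
python koboldcpp.py --model Poppy_Porpoise-0.85-L3-8B-Q4_K_M-imat.gguf \
  --contextsize 12288 --flashattention --usecublas --gpulayers 33
```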

Resources:
You can find out more about how the quants stack up against each other, and about the quant types, here and here, respectively.

Presets:
Some compatible SillyTavern presets can be found here (Poppy-0.85 Presets) or here (Virt's Roleplay Presets).

Personal-support:
I apologize for disrupting your experience.
Currently I'm working on moving to a better internet provider.
If you want and you are able to...
You can spare some change over here (Ko-fi).

Author-support:
You can support the author at their own page.

Original model information:

Note: Updated Presets!

"Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each experience to their individual preferences.


Recommended ST Presets (Updated for 0.85): Porpoise Presets

If you want to use vision functionality:

  • You must use the latest version of KoboldCpp.

To use the multimodal capabilities of this model and use vision, you need to load the specified mmproj file; it can be found inside this model repo: Llava MMProj

  • You can load the mmproj by using the corresponding section in the interface.

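Alternatively, when launching KoboldCpp from the command line, the mmproj file can be passed directly. This is a sketch; both file names below are placeholders for the actual files in the repo.

```shell
# Hypothetical: load the model together with its Llava mmproj to enable vision.
python koboldcpp.py --model Poppy_Porpoise-0.85-L3-8B-Q4_K_M-imat.gguf \
  --mmproj llama-3-mmproj-model-f16.gguf
```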