
My upload speeds have been cooked and unstable lately.
Realistically I'd need to move to get a better provider.
If you want to and are able to, you can support that endeavor and others here (Ko-fi). I apologize for disrupting your experience.

#llama-3 #roleplay

GGUF-IQ-Imatrix quants for Endevor/InfinityRP-v2-8B.
Back at it!

Lewdiculous: "Personally I still prefer the first version based on Mistral, this one hates asterisks."

These quants were made after the fixes from llama.cpp/pull/6920.
Use KoboldCpp version 1.64 or higher.
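
If you would rather load one of these quants programmatically instead of through KoboldCpp, a minimal sketch with llama-cpp-python could look like the following. This is not the setup recommended by the card, the file name is a placeholder rather than an actual file from this repository, and the settings are only a starting point:

```python
# Minimal sketch using llama-cpp-python (an alternative loader, not the
# KoboldCpp setup recommended above). The file name is a placeholder --
# substitute whichever quant you actually downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="InfinityRP-v2-8B-Q4_K_M-imat.gguf",  # placeholder file name
    n_ctx=16384,       # the model card advertises 16k context
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

# Quick smoke test that the quant loads and generates.
result = llm("Hello!", max_tokens=32)
print(result["choices"][0]["text"])
```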

Prompt formatting...
Alpaca prompt format recommended.
A safe starting SillyTavern preset can be found here (simple).

Original model information by the author:


The idea is the same as InfinityRP v1, but this one is Llama 3 with 16k ctx! Have fun...

Prompt format: Alpaca.

"You are now in roleplay chat mode. Engage in an endless chat, always with a creative response. Follow lengths very precisely and create paragraphs accurately. Always wait your turn, next actions and responses. Your internal thoughts are wrapped with ` marks."

User Message Prefix = ### Input:

Assistant Message Prefix = ### Response:

System Message Prefix = ### Instruction:

Turn on "Include Names" (optional)

Text Length: add a (length = ...) tag in your System Prompt or after the ### Response: prefix (see the sketch below the example).

Response: (length = medium) <- [tiny, micro, short, medium, long, enormous, huge, massive, humongous]

Example: (example image from the original card)
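
As a rough illustration of how the prefixes and the length tag above fit together outside of SillyTavern, here is a small hypothetical helper that assembles one Alpaca-style turn. The character names, the sample message, and the helper itself are illustrative placeholders; only the prefixes, the system prompt, and the length tags come from the card:

```python
# Hypothetical helper assembling a single Alpaca-style turn with the prefixes
# described in this card. Names and the sample message are placeholders.
SYSTEM_PROMPT = (
    "You are now in roleplay chat mode. Engage in an endless chat, always with "
    "a creative response. Follow lengths very precisely and create paragraphs "
    "accurately. Always wait your turn, next actions and responses. Your "
    "internal thoughts are wrapped with ` marks."
)

# Valid length tags per the card:
LENGTHS = ["tiny", "micro", "short", "medium", "long",
           "enormous", "huge", "massive", "humongous"]

def build_prompt(user_message: str, user_name: str = "User",
                 char_name: str = "Character", length: str = "medium") -> str:
    """Build one turn: system prompt, user input, and the response prefix
    carrying the (length = ...) tag."""
    assert length in LENGTHS
    return (
        f"### Instruction:\n{SYSTEM_PROMPT}\n\n"
        f"### Input:\n{user_name}: {user_message}\n\n"
        f"### Response: (length = {length})\n{char_name}:"
    )

print(build_prompt("Hi there! How are you holding up?", length="short"))
```

Prepending the speaker names as shown is roughly what the "Include Names" option does in SillyTavern; drop the name pieces if you keep that option off.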

Format: GGUF
Model size: 8.03B params
Architecture: llama
Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit

