
Aurora-Nights-70B-v1.0 IQ2-GGUF

Description

IQ2-GGUF quants of sophosympatheia/Aurora-Nights-70B-v1.0

Unlike regular GGUF quants, these use an importance matrix, similar to QuIP#, to keep the quantization from degrading too much even at 2 bpw, allowing you to run larger models on less powerful machines.
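To see why roughly 2 bpw matters for running a 70B model on modest hardware, here is a back-of-the-envelope sketch of weight memory at different quant levels. The bits-per-weight figures are approximate llama.cpp values assumed for illustration, not measurements from this repo:

```python
# Rough memory footprint of the 69B-parameter weights at different quant levels.
# Assumed approximate bits-per-weight: IQ2_XXS ~2.06, IQ2_XS ~2.31, Q4_K_M ~4.85.
def weights_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Size of the quantized weights alone, in decimal gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

n_params = 69e9
for name, bpw in [("IQ2_XXS", 2.06), ("IQ2_XS", 2.31), ("Q4_K_M", 4.85)]:
    print(f"{name}: ~{weights_size_gb(n_params, bpw):.1f} GB")
```

By this estimate the IQ2 quants come in around 18-20 GB versus roughly 42 GB for a common 4-bit quant, which is the difference between fitting on a single 24 GB GPU (plus context) and not. Actual file sizes will differ somewhat.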

NOTE: You will currently need experimental branches of Koboldcpp or Ooba for these quants to work.

More info about IQ2

Models

Models: IQ2-XS, IQ2-XXS

Regular GGUF Quants: Here

Prompt Format

Unclear

Contact

Kooten on discord

Format: GGUF
Model size: 69B params
Architecture: llama
Quantization: 2-bit

