
Custom quantizations of DeepSeek-Coder-V2-Instruct optimized for CPU inference.

The iq4xm variant combines the GGML IQ4_XS 4-bit quantization type with Q8_0 tensors, so it runs fast with minimal quality loss and takes advantage of the int8 optimizations on most newer server CPUs.

While it required custom code to produce, it is compatible with standard llama.cpp from GitHub, or just search for "nisten" in LM Studio.

The following 4-bit version is the one I use myself; it gets 17 tokens/s on 64 ARM cores.

You don't need to consolidate the split files anymore; just point llama-cli at the first one and it will pick up the rest automatically.

Then, to run in command-line interactive mode (the prompt.txt file is optional), just do:

./llama-cli --temp 0.4 -m deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf -c 32000 -co -cnv -i -f prompt.txt

The iq4xm model ships as four split files:

deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf
deepseek_coder_v2_cpu_iq4xm.gguf-00002-of-00004.gguf
deepseek_coder_v2_cpu_iq4xm.gguf-00003-of-00004.gguf
deepseek_coder_v2_cpu_iq4xm.gguf-00004-of-00004.gguf
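
If you'd rather serve the model over HTTP instead of chatting in the terminal, llama.cpp's llama-server accepts the same split files. A minimal sketch (the host, port, and context size here are illustrative, adjust for your machine):

./llama-server -m deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf -c 32000 --host 0.0.0.0 --port 8080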

To download the models MUCH faster, install aria2 first (Linux: sudo apt install -y aria2, macOS: brew install aria2), then grab the four iq4xm parts:

aria2c -x 8 -o deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf \
  https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf

aria2c -x 8 -o deepseek_coder_v2_cpu_iq4xm.gguf-00002-of-00004.gguf \
  https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_iq4xm.gguf-00002-of-00004.gguf

aria2c -x 8 -o deepseek_coder_v2_cpu_iq4xm.gguf-00003-of-00004.gguf \
  https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_iq4xm.gguf-00003-of-00004.gguf

aria2c -x 8 -o deepseek_coder_v2_cpu_iq4xm.gguf-00004-of-00004.gguf \
  https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_iq4xm.gguf-00004-of-00004.gguf
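
If you'd rather not paste four separate commands, the same downloads can be scripted as a short shell loop. A minimal sketch assuming the file names above (seq -f pads the part numbers to five digits):

for i in $(seq -f "%05g" 1 4); do
  aria2c -x 8 -o deepseek_coder_v2_cpu_iq4xm.gguf-${i}-of-00004.gguf \
    https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_iq4xm.gguf-${i}-of-00004.gguf
done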

And to download the Q8_0 version, converted as losslessly as possible from the HF bf16 weights, grab these six parts:


aria2c -x 8 -o deepseek_coder_v2_cpu_q8_0-00001-of-00006.gguf \
  https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_q8_0-00001-of-00006.gguf

aria2c -x 8 -o deepseek_coder_v2_cpu_q8_0-00002-of-00006.gguf \
  https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_q8_0-00002-of-00006.gguf

aria2c -x 8 -o deepseek_coder_v2_cpu_q8_0-00003-of-00006.gguf \
  https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_q8_0-00003-of-00006.gguf

aria2c -x 8 -o deepseek_coder_v2_cpu_q8_0-00004-of-00006.gguf \
  https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_q8_0-00004-of-00006.gguf

aria2c -x 8 -o deepseek_coder_v2_cpu_q8_0-00005-of-00006.gguf \
  https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_q8_0-00005-of-00006.gguf

aria2c -x 8 -o deepseek_coder_v2_cpu_q8_0-00006-of-00006.gguf \
  https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_q8_0-00006-of-00006.gguf
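
If you already have the huggingface_hub CLI installed, a single command can also pull every split at once. A sketch, assuming the file-name pattern shown above:

huggingface-cli download nisten/deepseek-coder-v2-inst-cpu-optimized-gguf \
  --include "deepseek_coder_v2_cpu_q8_0-*.gguf" --local-dir .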

The use of the DeepSeek-Coder-V2 Base/Instruct models is subject to the Model License. The DeepSeek-Coder-V2 series (including Base and Instruct) supports commercial use. It's a permissive license that only restricts uses such as military applications, harming minors, or patent trolling.

Enjoy and remember to accelerate!

-Nisten
