HPC-Coder-v2 Quantizations
This is the HPC-Coder-v2-1.3b model with 4-bit quantized weights in the GGUF format, usable with llama.cpp. Refer to the original model card for more details on the model.
See the llama.cpp repository for installation instructions. You can then run the model as follows:

```shell
llama-cli --hf-repo hpcgroup/hpc-coder-v2-1.3b-Q4_K_S-GGUF \
  --hf-file hpc-coder-v2-1.3b-q4_k_s.gguf \
  -r "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:" \
  --in-prefix "\n" \
  --in-suffix "\n### Response:\n" \
  -c 8096 \
  -p "your prompt here"
```
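The `-r`, `--in-prefix`, and `--in-suffix` flags together assemble an Alpaca-style instruction template around each prompt. If you want to call the model from code instead of `llama-cli`, you need to build the same string yourself. A minimal sketch of that template assembly (the function name `build_prompt` is an illustrative helper, not part of any library; the resulting string could be passed to, e.g., llama-cpp-python):

```python
# Reproduce the Alpaca-style template that the llama-cli flags above
# (-r / --in-prefix / --in-suffix) wrap around each user prompt.
SYSTEM = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request."
)

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the template the model expects.

    Layout: system line, blank line, "### Instruction:" header,
    the instruction itself, then a "### Response:" header that the
    model continues from.
    """
    return f"{SYSTEM}\n\n### Instruction:\n{instruction}\n### Response:\n"

print(build_prompt("Write an OpenMP parallel for loop."))
```

Generation should stop when the model emits the reverse-prompt string (the `-r` value), which signals the start of a new instruction block.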
Base model: hpcgroup/hpc-coder-v2-1.3b