MaziyarPanahi/WizardLM-2-8x22B-GGUF

Description

MaziyarPanahi/WizardLM-2-8x22B-GGUF contains GGUF format model files for mistral-community/Mixtral-8x22B-v0.1.

How to download

You can download only the quants you need instead of cloning the entire repository as follows:

huggingface-cli download MaziyarPanahi/WizardLM-2-8x22B-GGUF --local-dir . --include '*Q2_K*gguf'

On Windows:

huggingface-cli download MaziyarPanahi/WizardLM-2-8x22B-GGUF --local-dir . --include *Q4_K_S*gguf
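The `--include` flag takes shell-style glob patterns. A minimal sketch (using Python's `fnmatch`, with hypothetical shard filenames for illustration) of how a pattern like `*Q2_K*gguf` selects only the matching quant's shards:

```python
from fnmatch import fnmatch

# Hypothetical shard filenames following the usual llama.cpp split convention.
files = [
    "WizardLM-2-8x22B.Q2_K-00001-of-00005.gguf",
    "WizardLM-2-8x22B.Q2_K-00002-of-00005.gguf",
    "WizardLM-2-8x22B.Q4_K_S-00001-of-00008.gguf",
]

pattern = "*Q2_K*gguf"
selected = [f for f in files if fnmatch(f, pattern)]
print(selected)  # only the Q2_K shards match
```

A broader pattern such as `*Q4_K*gguf` would pull in every Q4_K variant, so make the pattern as specific as the quant name you want.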

Load sharded model

llama_load_model_from_file detects the number of shards from the first file and loads the remaining tensors from the rest of the files, so you only need to pass the first shard:

llama.cpp/main -m WizardLM-2-8x22B.Q2_K-00001-of-00005.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 1024 -e
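The shards follow llama.cpp's `-%05d-of-%05d` naming convention, which is how the loader finds the siblings of the file you pass. A small sketch (the filename parsing is my own, for illustration) that derives the full shard list from the first file's name:

```python
import re

first = "WizardLM-2-8x22B.Q2_K-00001-of-00005.gguf"

# Extract the stem and shard count from the "-NNNNN-of-NNNNN" suffix.
m = re.match(r"(.*)-(\d{5})-of-(\d{5})\.gguf$", first)
stem, total = m.group(1), int(m.group(3))

# Rebuild every sibling shard name with the same zero-padded numbering.
shards = [f"{stem}-{i:05d}-of-{total:05d}.gguf" for i in range(1, total + 1)]
print(shards)
```

All shards must sit in the same directory as the first one for the automatic loading to work.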

Prompt template

{system_prompt}
USER: {prompt}
ASSISTANT: </s>

or

A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, 
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: {prompt} ASSISTANT: </s>
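This is the Vicuna-style template. A minimal helper (the function name and default system prompt wording are taken from the example above; the helper itself is my own sketch) that assembles a single-turn prompt in this format:

```python
DEFAULT_SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(prompt: str, system_prompt: str = DEFAULT_SYSTEM) -> str:
    # Vicuna style: system text, then USER/ASSISTANT turns. Generation
    # continues after "ASSISTANT:", and the model emits </s> when finished.
    return f"{system_prompt} USER: {prompt} ASSISTANT:"

print(build_prompt("Hi"))
```

For multi-turn chat, append each completed exchange (ending in `</s>`) before the next `USER:` turn, as in the second template above.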
Model details

Format: GGUF
Model size: 141B params
Architecture: llama
Quantization: 2-bit
