vLLM: Unknown quantization method

#5
by yaronr - opened

Hi,
Running this model on vLLM returns the following error:

ValueError: Unknown quantization method: . Must be one of ['aqlm', 'awq', 'deepspeedfp', 'tpu_int8', 'fp8', 'fbgemm_fp8', 'modelopt', 'marlin', 'gguf', 'gptq_marlin_24', 'gptq_marlin', 'awq_marlin', 'gptq', 'compressed-tensors', 'bitsandbytes', 'qqq', 'experts_int8', 'neuron_quant', 'ipex'].
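The blank method name in the message ("Unknown quantization method: .") suggests vLLM picked up a `quantization_config` from the model's `config.json` whose `quant_method` field was missing or empty, so the validated string was `""`. A minimal sketch of that kind of validation (not vLLM's actual code; the function name and the shortened method list are illustrative):

```python
# Illustrative subset of vLLM's supported quantization method names.
SUPPORTED_METHODS = [
    "aqlm", "awq", "fp8", "gptq", "gguf", "bitsandbytes", "compressed-tensors",
]

def resolve_quant_method(model_config: dict) -> str:
    """Read quant_method from a config dict and reject unknown values.

    An absent or empty quant_method falls through to the same error,
    which is why the message can show an empty name after the colon.
    """
    quant_cfg = model_config.get("quantization_config") or {}
    method = quant_cfg.get("quant_method", "")
    if method not in SUPPORTED_METHODS:
        raise ValueError(
            f"Unknown quantization method: {method}. "
            f"Must be one of {SUPPORTED_METHODS}."
        )
    return method

# A config with a recognized method resolves normally:
print(resolve_quant_method({"quantization_config": {"quant_method": "gptq"}}))
```

If the model's `config.json` really does carry an empty or missing `quant_method`, a possible workaround is to pass the method explicitly on the command line (e.g. `--quantization gptq`), assuming you know which scheme the weights actually use.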
