Why doesn't it work with bitsandbytes 8-bit or 4-bit?

#25
by jackboot - opened

I have tried to load the model using bitsandbytes, but unfortunately there is a dtype error at inference time.

File "/home/supermicro/ai/openedai-vision/backend/minicpm.py", line 54, in chat_with_images answer = self.model.chat( File "/home/supermicro/.cache/huggingface/modules/transformers_modules/MiniCPM-Llama3-V-2_5/modeling_minicpmv.py", line 454, in chat res, vision_hidden_states = self.generate( File "/home/supermicro/.cache/huggingface/modules/transformers_modules/MiniCPM-Llama3-V-2_5/modeling_minicpmv.py", line 354, in generate ) = self.get_vllm_embedding(model_inputs) File "/home/supermicro/.cache/huggingface/modules/transformers_modules/MiniCPM-Llama3-V-2_5/modeling_minicpmv.py", line 100, in get_vllm_embedding vision_embedding = self.resampler(vision_embedding, tgt_sizes) File "/home/supermicro/miniconda3/envs/nvidia/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/supermicro/miniconda3/envs/nvidia/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl return forward_call(*args, **kwargs) File "/home/supermicro/miniconda3/envs/nvidia/lib/python3.10/site-packages/accelerate/hooks.py", line 166, in new_forward output = module._old_forward(*args, **kwargs) File "/home/supermicro/.cache/huggingface/modules/transformers_modules/MiniCPM-Llama3-V-2_5/resampler.py", line 150, in forward out = self.attn( File "/home/supermicro/miniconda3/envs/nvidia/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/supermicro/miniconda3/envs/nvidia/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl return forward_call(*args, **kwargs) File "/home/supermicro/miniconda3/envs/nvidia/lib/python3.10/site-packages/accelerate/hooks.py", line 166, in new_forward output = module._old_forward(*args, **kwargs) File "/home/supermicro/miniconda3/envs/nvidia/lib/python3.10/site-packages/torch/nn/modules/activation.py", line 1266, in forward attn_output, attn_output_weights = F.multi_head_attention_forward( File "/home/supermicro/miniconda3/envs/nvidia/lib/python3.10/site-packages/torch/nn/functional.py", line 5477, in multi_head_attention_forward attn_output = linear(attn_output, out_proj_weight, out_proj_bias) RuntimeError: self and mat2 must have the same dtype, but got BFloat16 and Char

Did anyone figure out why this happens? The model loads fine in transformers and runs inference in full precision. I'm not sure how the dtype ends up as Char.

heh, after I posted this I tried

`llm_int8_skip_modules=["resampler"]`

Inference seems to work now. So the resampler can't be quantized?
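For anyone who hits the same error: bitsandbytes stores quantized weights as torch.int8, which PyTorch reports as "Char", and the resampler goes through a plain nn.MultiheadAttention whose out-projection calls F.linear on the raw weight tensor, which is presumably why it blows up when that module gets quantized. Roughly what I ended up with, as a sketch (the exact kwargs are simplified from my backend code, so treat them as an assumption):

```python
import torch
from transformers import AutoModel, AutoTokenizer, BitsAndBytesConfig

model_id = "openbmb/MiniCPM-Llama3-V-2_5"

# 8-bit everywhere except the resampler, which stays in bf16 so the
# nn.MultiheadAttention out_proj matmul sees matching dtypes.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_skip_modules=["resampler"],
)

model = AutoModel.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
```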

@jackboot Would you mind sharing your bitsandbytes config? I have a single NVIDIA T4, and the only config that works for me is loading in 4-bit.

    'quantization_config': BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type='nf4',
        bnb_4bit_use_double_quant=True,
        bnb_4bit_compute_dtype=self.dtype,
        llm_int8_skip_modules=["resampler"],
    ),
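In case it helps, this is roughly how that config plugs into the load and the chat call on my side (a sketch: torch.float16 stands in for self.dtype since the T4 has no native bfloat16 support, and the chat signature follows the model card):

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer, BitsAndBytesConfig

model_id = "openbmb/MiniCPM-Llama3-V-2_5"

model = AutoModel.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.float16,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_use_double_quant=True,
        bnb_4bit_compute_dtype=torch.float16,  # stand-in for self.dtype above
        llm_int8_skip_modules=["resampler"],   # keep the resampler unquantized
    ),
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Single-image chat, following the model card's interface.
image = Image.open("example.jpg").convert("RGB")
msgs = [{"role": "user", "content": "Describe this image."}]
answer = model.chat(image=image, msgs=msgs, tokenizer=tokenizer, sampling=True, temperature=0.7)
print(answer)
```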
