Deprecation and Configuration Issues with load_in_4bit and load_in_8bit

#2
by NeuralNovel - opened

```
The `load_in_4bit` and `load_in_8bit` arguments are deprecated and will be removed in the future versions. Please, pass a `BitsAndBytesConfig` object in `quantization_config` argument instead.

Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']. These kwargs are not used in <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>.

/usr/local/lib/python3.10/dist-packages/transformers/quantizers/auto.py:174: UserWarning: You passed `quantization_config` or equivalent parameters to `from_pretrained` but the model you're loading already has a `quantization_config` attribute. The `quantization_config` from the model will be used.
  warnings.warn(warning_msg)
```
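For reference, this is the pattern the first warning is asking for: pass a `BitsAndBytesConfig` through `quantization_config` instead of the old flags. A minimal sketch (the model id is a placeholder, and note the second warning says the checkpoint already ships its own `quantization_config`, which will take precedence over whatever is passed here):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Replaces the deprecated load_in_4bit=True argument to from_pretrained
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "some-org/some-model",          # placeholder model id, not this repo
    quantization_config=bnb_config,  # preferred over load_in_4bit/load_in_8bit
    device_map="auto",
)
```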

Great model, but I'm getting these warnings when running the int4 version. Is there a specific bitsandbytes version I should use?
I've tried many things to fix it, but no cigar.
