need help :(

#106
by yywon - opened

I have updated a few times, and it keeps showing me this error. I am using gemma-2b-it and wrote the code below.

from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "gg-hf/gemma-2b-it"

# bnb_config is a BitsAndBytesConfig defined earlier in the notebook (not shown here)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto", quantization_config=bnb_config)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, add_special_tokens=True)
tokenizer.padding_side = 'right'

ImportError: Using bitsandbytes 8-bit quantization requires Accelerate: pip install accelerate and the latest version of bitsandbytes: pip install -i https://pypi.org/simple/ bitsandbytes

Google org

Can you please run the commands below, then restart the notebook and run again.
!pip install accelerate
!pip install -i https://pypi.org/simple/ bitsandbytes

     accelerate.__version__ = 0.33.0
     bitsandbytes.__version__ = 0.43.2
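
For reference, those versions can be confirmed in the restarted notebook with a quick check like this (a minimal sketch; the exact version numbers you see may differ):

import accelerate
import bitsandbytes

# Print the installed versions to confirm the newly installed packages are picked up
print("accelerate.__version__ =", accelerate.__version__)
print("bitsandbytes.__version__ =", bitsandbytes.__version__)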

I then executed the code below:

[Testing Code.png: screenshot of the test code]
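
The screenshot itself is not reproduced here. As a rough sketch of that kind of test, assuming an 8-bit BitsAndBytesConfig and access to the gg-hf/gemma-2b-it checkpoint, the load can be verified with something like:

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

BASE_MODEL = "gg-hf/gemma-2b-it"

# 8-bit quantization config; this is the path that triggers the Accelerate/bitsandbytes check
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    device_map="auto",
    quantization_config=bnb_config,
)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.padding_side = "right"

# Quick generation to confirm the quantized model works end to end
inputs = tokenizer("Hello, Gemma!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))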

If you are still facing an issue, please let us know.
