runtime error

The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
↑ Those bitsandbytes warnings are expected on ZeroGPU ↑
Traceback (most recent call last):
  File "/home/user/app/app.py", line 18, in <module>
    tokenizer = AutoTokenizer.from_pretrained(model_id)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 880, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2110, in from_pretrained
    return cls._from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2336, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama_fast.py", line 156, in __init__
    super().__init__(
  File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 105, in __init__
    raise ValueError(
ValueError: Cannot instantiate this tokenizer from a slow version. If it's based on sentencepiece, make sure you have sentencepiece installed.
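
The bitsandbytes lines are only warnings; the actual failure is the ValueError at the end. The Llama fast tokenizer has to be converted from a slow SentencePiece tokenizer, and that conversion needs the sentencepiece package (and usually protobuf), which the error suggests is not installed in the Space. A minimal sketch of the likely fix, assuming the Space installs its dependencies from a requirements.txt (that file is not shown here, so the exact contents below are an assumption):

    # requirements.txt (assumed) -- add the tokenizer-conversion dependencies
    transformers
    sentencepiece
    protobuf

After adding these and restarting the Space, the AutoTokenizer.from_pretrained(model_id) call in app.py should be able to build the fast tokenizer; if the model repo already ships a tokenizer.json, the conversion (and therefore sentencepiece) is not needed at all.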
