Cannot load model

#1
by MarinaraSpaghetti - opened

Howdy!
Just wanted to let you know that I cannot load this model in my Oobabooga. I get the error below.
OSError: models\bartowski_internlm2-chat-20b-llama-exl2_6_5 does not appear to have a file named tokenization_internlm.py. Checkout 'https://huggingface.co/models\bartowski_internlm2-chat-20b-llama-exl2_6_5/None' for available files.
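
For context, the failure seems to come from transformers' custom-code path: with trust_remote_code enabled, it follows the auto_map entry in the tokenizer config and tries to import tokenization_internlm.py from the model folder. A minimal sketch of the failing call (the path mirrors the one in the error; the exact webui internals are an assumption):

```python
from transformers import AutoTokenizer

# Sketch of the failing load. With trust_remote_code=True, transformers
# follows the "auto_map" entry in the tokenizer config and tries to import
# tokenization_internlm.py from this folder; the quant doesn't ship that
# file, so from_pretrained raises the OSError quoted above.
tokenizer = AutoTokenizer.from_pretrained(
    r"models\bartowski_internlm2-chat-20b-llama-exl2_6_5",
    trust_remote_code=True,
)
```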

Anyone else experiencing this issue? Thank you in advance for your help!

Odd, I just redownloaded it and it works fine for me. Are you using the ExLlamav2_HF loader?

Hey, thanks for getting back to me! Yup! Funnily enough, I have this issue with all of your exl2 models. Usually, changing the number in the folder name to 6.5, 4.0, etc. fixed it, but not this time, sadly.

Are you on the latest text-generation-webui? And sorry, the _ breaks the model?! I literally switched to the _ to fix someone else's issue :') I hate filesystems.
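
For anyone following along, here is a hypothetical sketch of the repo-to-folder mapping under discussion (the real text-generation-webui download logic may differ; this just mirrors the pattern visible in the error path):

```python
# Hypothetical mapping of a Hub repo + branch to a local folder name.
def local_folder_name(repo_id: str, branch: str = "main") -> str:
    name = repo_id.replace("/", "_")
    if branch != "main":
        name += f"_{branch}"
    return name

print(local_folder_name("bartowski/internlm2-chat-20b-llama-exl2", "6_5"))
# -> bartowski_internlm2-chat-20b-llama-exl2_6_5
```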

Are you on Windows or Linux? What hardware?

Yup, I checked, and I'm using the latest version of the webUI. I have no clue what's up with the file names either; I hate those issues too, ha ha. But LoneStriker uses "." in their folder names, and their quants work for me every time. I have the "trust remote code" flag enabled too. I'm on Windows with 24GB of VRAM on my RTX 3090. Below is the full error I receive. I downloaded and checked your other quants of this model, and the same thing happens on those versions too. :(

Traceback (most recent call last):
  File "F:\text-generation-webui-main\modules\ui_model_menu.py", line 213, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(selected_model, loader)
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\text-generation-webui-main\modules\models.py", line 95, in load_model
    tokenizer = load_tokenizer(model_name, model)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\text-generation-webui-main\modules\models.py", line 119, in load_tokenizer
    tokenizer = AutoTokenizer.from_pretrained(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\text-generation-webui-main\installer_files\env\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 770, in from_pretrained
    tokenizer_class = get_class_from_dynamic_module(class_ref, pretrained_model_name_or_path, **kwargs)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\text-generation-webui-main\installer_files\env\Lib\site-packages\transformers\dynamic_module_utils.py", line 488, in get_class_from_dynamic_module
    final_module = get_cached_module_file(
                   ^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\text-generation-webui-main\installer_files\env\Lib\site-packages\transformers\dynamic_module_utils.py", line 294, in get_cached_module_file
    resolved_module_file = cached_file(
                           ^^^^^^^^^^^^
  File "F:\text-generation-webui-main\installer_files\env\Lib\site-packages\transformers\utils\hub.py", line 360, in cached_file
    raise EnvironmentError(
OSError: models\bartowski_internlm2-chat-20b-llama-exl2_6_5 does not appear to have a file named tokenization_internlm.py. Checkout 'https://huggingface.co/models\bartowski_internlm2-chat-20b-llama-exl2_6_5/None' for available files.

Uncheck "trust-remote-code" in ExLlamav2_HF config - 6.5 quantized model works for me great!

Thank you so much, this fixed the issue! Sorry for the trouble, I guess I've been messing with Yi models a bit too much recently, ha ha.

MarinaraSpaghetti changed discussion status to closed