Error no file named pytorch_model.bin
I have a 4090, so I shouldn't be hitting memory issues. What am I missing here?
INFO:Loading guanaco-33B-GPTQ...
WARNING:Auto-assiging --gpu-memory 23 for your GPU to try to prevent out-of-memory errors. You can manually set other values.
Traceback (most recent call last):
  File "C:\Projects\AI\text-generation-webui\server.py", line 1087, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "C:\Projects\AI\text-generation-webui\modules\models.py", line 95, in load_model
    output = load_func(model_name)
  File "C:\Projects\AI\text-generation-webui\modules\models.py", line 223, in huggingface_loader
    model = LoaderClass.from_pretrained(checkpoint, **params)
  File "C:\Projects\AI\installer_files\env\lib\site-packages\transformers\models\auto\auto_factory.py", line 471, in from_pretrained
    return model_class.from_pretrained(
  File "C:\Projects\AI\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 2405, in from_pretrained
    raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory models\guanaco-33B-GPTQ.
You need to set the GPTQ parameters - please see the instructions in the README
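To see why this error appears even though the model files are present: the standard Hugging Face loader only looks for a fixed set of weight filenames (the four listed in the OSError), while a GPTQ repo ships a quantized `.safetensors` checkpoint instead, which only the GPTQ loader path understands. A minimal sketch of that check (the filenames are taken from the error message above; the directory layout is an assumption):

```python
import os

# The weight filenames transformers' from_pretrained() searches for,
# per the OSError in the traceback above.
STANDARD_WEIGHTS = [
    "pytorch_model.bin",
    "tf_model.h5",
    "model.ckpt.index",
    "flax_model.msgpack",
]

def has_standard_checkpoint(model_dir):
    """Return True if the directory holds a weight file the plain
    Hugging Face loader can open directly."""
    return any(
        os.path.isfile(os.path.join(model_dir, name))
        for name in STANDARD_WEIGHTS
    )

# A GPTQ repo typically contains only a quantized .safetensors file,
# so this check fails and the GPTQ parameters (wbits/groupsize) must be
# set so the webui routes the model through its GPTQ loader instead.
```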
I tried that, but it fails with the error below. The other model, 'Guanaco-33B-4bit', works fine, so I'll use that. Not sure what the performance difference is?
Traceback (most recent call last):
  File "C:\Projects\AI\text-generation-webui\server.py", line 71, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "C:\Projects\AI\text-generation-webui\modules\models.py", line 95, in load_model
    output = load_func(model_name)
  File "C:\Projects\AI\text-generation-webui\modules\models.py", line 289, in GPTQ_loader
    model = modules.GPTQ_loader.load_quantized(model_name)
  File "C:\Projects\AI\text-generation-webui\modules\GPTQ_loader.py", line 177, in load_quantized
    model = load_quant(str(path_to_model), str(pt_path), shared.args.wbits, shared.args.groupsize, kernel_switch_threshold=threshold)
  File "C:\Projects\AI\text-generation-webui\modules\GPTQ_loader.py", line 84, in _load_quant
    model.load_state_dict(safe_load(checkpoint), strict=False)
  File "C:\Projects\AI\installer_files\env\lib\site-packages\safetensors\torch.py", line 261, in load_file
    result[k] = f.get_tensor(k)
RuntimeError: [enforce fail at C:\cb\pytorch_1000000000000\work\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 425984000 bytes.
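Note this failure is in `DefaultCPUAllocator`, not on the GPU: the checkpoint tensors stream through system RAM during loading, so a full RAM/pagefile can trip this regardless of the 4090. The failed allocation is about 406 MiB; one plausible (but unconfirmed) match is a single fp16 tensor of LLaMA-33B shape, e.g. an embedding matrix of hidden size 6656 by vocab size 32000:

```python
# Size of the failed allocation from the traceback above.
nbytes = 425_984_000                 # bytes reported by DefaultCPUAllocator
print(f"{nbytes / 2**20:.2f} MiB")   # 406.25 MiB

# Hypothetical breakdown (an assumption, not stated in the log):
# a LLaMA-33B-shaped embedding matrix stored in fp16.
hidden, vocab, fp16_bytes = 6656, 32000, 2
assert hidden * vocab * fp16_bytes == nbytes
```

That would explain why a simple restart (freeing CPU memory) was enough to make the load succeed.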
Never mind.. spoke too soon.. I restarted it and it loaded fine.