"TheBloke/Llama-2-7b-Chat-GGUF does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack."
#11 · by swvajanyatek · opened
What am I doing wrong here?
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_main = "TheBloke/Llama-2-7b-Chat-GGUF"
model = AutoModelForCausalLM.from_pretrained(model_main, hf=True)
```
You are using `transformers`, which does not support GGUF files. I think you are trying to use `ctransformers`, so instead of importing the auto model classes from `transformers`, import them from `ctransformers`.
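A minimal sketch of that suggestion, assuming the `ctransformers` package is installed (`pip install ctransformers`); the specific `model_file` name below is an assumption, so check the repo's "Files" tab for the exact `.gguf` filename you want:

```python
# Sketch: load a GGUF quantized model with ctransformers instead of transformers.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7b-Chat-GGUF",
    model_file="llama-2-7b-chat.Q4_K_M.gguf",  # assumed quantization variant
    model_type="llama",
)

# The loaded model is callable directly on a prompt string.
print(llm("Q: What is a GGUF file? A:", max_new_tokens=64))
```

Note that `ctransformers` downloads and runs the quantized GGUF weights on CPU by default; pass `gpu_layers=...` to `from_pretrained` if you want to offload layers to a GPU.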