Error while loading llama-2

#3
by zuhashaik - opened

I initially loaded the GGML version by mistake instead of GGUF and discovered that llama.cpp no longer supports GGML. I then converted the model to GGUF with the conversion script in the llama.cpp repository, but I'm still hitting the same error with the code below.
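
For reference, here is a minimal sketch of that conversion step, run from a checkout of the llama.cpp repository. The script name, flags, and the GGML source path are assumptions from memory and may differ between llama.cpp revisions, so check the converter's --help output before relying on them:

import subprocess

# Convert an old GGML file to GGUF with llama.cpp's converter script.
# Script name, flags, and the GGML input path are assumptions; verify with --help.
subprocess.run(
    [
        "python", "convert-llama-ggml-to-gguf.py",
        "--input", "/media/iiit/Karvalo/zuhair/llama/llama70b_q2/llama-2-70b.ggmlv3.q2_K.bin",  # hypothetical GGML source path
        "--output", "/media/iiit/Karvalo/zuhair/llama/llama70b_q2/llama-2-70b.gguf.q2_K.bin",
        "--gqa", "8",     # grouped-query attention factor for the 70B model (assumption)
        "--eps", "1e-5",  # RMS norm epsilon used by Llama-2 (assumption)
    ],
    check=True,
)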

from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import LlamaCpp

# Streaming callback manager, as in the LangChain LlamaCpp example
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])

modelq2gguf = '/media/iiit/Karvalo/zuhair/llama/llama70b_q2/llama-2-70b.gguf.q2_K.bin'
llm = LlamaCpp(
    model_path=modelq2gguf,
    temperature=0.75,
    max_tokens=2000,
    top_p=1,
    callback_manager=callback_manager,
    verbose=True,  # verbose is required to pass to the callback manager
)

ValidationError: 1 validation error for LlamaCpp
__root__
  Could not load Llama model from path: /media/iiit/Karvalo/zuhair/llama/llama70b_q2/llama-2-70b.gguf.q2_K.bin. Received error (type=value_error)
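
Because pydantic wraps the underlying failure, the generic value_error above hides the real cause. One way to surface it is to load the file directly with llama-cpp-python, bypassing LangChain. This is only a minimal sketch assuming the same model path; it also assumes a GGUF-capable llama-cpp-python release (older releases only understood GGML):

from llama_cpp import Llama

# Loading the model directly (outside LangChain) raises the underlying
# llama.cpp error instead of pydantic's generic ValidationError.
llm = Llama(
    model_path="/media/iiit/Karvalo/zuhair/llama/llama70b_q2/llama-2-70b.gguf.q2_K.bin",
    n_ctx=2048,  # context window; an arbitrary illustrative value
)
print(llm("Hello,", max_tokens=16))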

zuhashaik changed discussion status to closed
