Today's version of llama.cpp results in an error:
error loading model: unknown model architecture: 'phi2'
llama_load_model_from_file: failed to load model
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
2023-12-18 16:57:06 ERROR:Failed to load the model.
Traceback (most recent call last):
File "/data/text-generation-webui/modules/ui_model_menu.py", line 209, in load_model_wrapper
shared.model, shared.tokenizer = load_model(selected_model, loader)
File "/data/text-generation-webui/modules/models.py", line 89, in load_model
output = load_func_maploader
File "/data/text-generation-webui/modules/models.py", line 259, in llamacpp_loader
model, tokenizer = LlamaCppModel.from_pretrained(model_file)
File "/data/text-generation-webui/modules/llamacpp_model.py", line 91, in from_pretrained
result.model = Llama(**params)
File "/data/llama-cpp-python/llama_cpp/llama.py", line 963, in init
self._n_vocab = self.n_vocab()
File "/data/llama-cpp-python/llama_cpp/llama.py", line 2270, in n_vocab
return self._model.n_vocab()
File "/data/llama-cpp-python/llama_cpp/llama.py", line 252, in n_vocab
assert self.model is not None
AssertionError
I had the same issue using LM Studio 0.29:
{
  "cause": {
    "cause": "unknown model architecture: 'phi2'",
    "title": "Failed to load model",
    "errorData": {
      "n_ctx": 2048,
      "n_batch": 512,
      "n_gpu_layers": 1
    }
  },
Downloaded and installed the Beta V8 version (https://lmstudio.ai/beta-releases.html) and the problem was solved.
I knew there would be a problem. Just had to :D Every time there is a good model, it either doesn't work properly or just doesn't work. At least we got the quantized version, so the LLM apps will fix it in the future.
I think the newest version of llama.cpp works for phi-2 now. The community was hard at work over the past month making adjustments for its slight differences (https://github.com/ggerganov/llama.cpp/issues/4437). You likely just need to pip-update or rebuild for the latest version.
Can't speak for LM Studio; I'm not a user of it. Sorry. Though it's good to see that something like LM Studio can load phi-2 GGUF quants.
Can confirm that it worked after upgrading on Mac: CMAKE_ARGS="-DLLAMA_METAL=on" pip install -U llama-cpp-python
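A quick way to verify the upgrade took, sketched under the assumption that you have a phi-2 GGUF on disk (the path below is a placeholder):

```python
from llama_cpp import Llama

# Hypothetical path; point this at your own phi-2 GGUF quant.
llm = Llama(model_path="./phi-2.Q4_K_M.gguf", n_ctx=2048)

# On an up-to-date build this prints the vocab size; on the old
# build, loading failed earlier with the AssertionError shown above.
print(llm.n_vocab())
```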
@mox
Same on my side. I've successfully used phi with the llama.cpp family shortly after my first reply 12 days ago, as well as several times since. Feels like old hat now. Been pretty happy with it.
@LaferriereJC
I've not checked, but has LM Studio got it working yet for phi? It likely has. If not, that's a bummer, and I'm really sorry. The wait can be painful sometimes.
Has anyone made it work with CTransformers? I am getting an error for model_type = 'phi-msft'.
@NikeshK ctransformers is pretty outdated now. Use something like llama-cpp-python, which is maintained, if you want to use llama.cpp from Python.
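If it helps with the switch, here's a minimal generation sketch with llama-cpp-python. The file path is a placeholder, and the "Instruct:/Output:" prompt format follows phi-2's model card:

```python
from llama_cpp import Llama

# Placeholder path; use your local phi-2 GGUF quant.
llm = Llama(model_path="./phi-2.Q4_K_M.gguf", n_ctx=2048)

# phi-2 was trained with an "Instruct: ... Output:" QA format.
out = llm(
    "Instruct: Explain what a GGUF file is in one sentence.\nOutput:",
    max_tokens=128,
    stop=["Instruct:"],
)
print(out["choices"][0]["text"].strip())
```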
ollama is a good alternative as well, especially if you want to keep API use simple like ctransformers was trying to do. ollama has llama.cpp at its core, and it works on all common consumer OSes (Linux, macOS, and Windows), as well as common consumer acceleration hardware (NVIDIA, AMD, Apple MPS, etc.).
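As a rough sketch of how simple the API use stays, assuming the ollama daemon is running on its default port and you've pulled a phi model locally (e.g. ollama pull phi):

```python
import requests

# Assumes ollama is serving on its default port and a phi model
# has been pulled locally (e.g. `ollama pull phi`).
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "phi",
        "prompt": "Explain what a GGUF file is in one sentence.",
        "stream": False,  # return one JSON object instead of a stream
    },
)
print(resp.json()["response"])
```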
Here are the links: