Phi-3 models must be re-converted after recent llama.cpp commit

#4 opened by andysalerno

FYI: https://github.com/ollama/ollama/issues/5956

Can confirm the models no longer run on llama.cpp (I only tested phi-3-medium-128k-instruct-GGUF, but I assume the same holds true for all of them).
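
For anyone refreshing their local copies, below is a minimal sketch of re-running the GGUF conversion against the original Hugging Face weights using an up-to-date llama.cpp checkout. The directory and file names are placeholders, not taken from this thread, and depending on your checkout the conversion script may be named convert-hf-to-gguf.py rather than convert_hf_to_gguf.py.

```python
# Minimal sketch: re-convert a Phi-3 checkpoint to GGUF with a current
# llama.cpp clone so the file carries the metadata the updated loader expects.
# LLAMA_CPP_DIR, MODEL_DIR, and OUT_FILE are placeholder paths (assumptions).
import subprocess
from pathlib import Path

LLAMA_CPP_DIR = Path("llama.cpp")                    # current llama.cpp clone
MODEL_DIR = Path("Phi-3-medium-128k-instruct")       # original HF weights
OUT_FILE = Path("phi-3-medium-128k-instruct-f16.gguf")

# convert_hf_to_gguf.py ships with llama.cpp; older checkouts name it
# convert-hf-to-gguf.py.
subprocess.run(
    [
        "python",
        str(LLAMA_CPP_DIR / "convert_hf_to_gguf.py"),
        str(MODEL_DIR),
        "--outfile", str(OUT_FILE),
        "--outtype", "f16",
    ],
    check=True,
)
```

The resulting f16 GGUF can then be quantized as usual (for example with llama.cpp's llama-quantize tool) before loading it in llama.cpp or Ollama.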
