Which inference framework is recommended for Phi-3.5?

#13
by rockcat-miao - opened

I tried using Hugging Face transformers according to the model card, but it seems very slow, and CPU utilization is at 100%...

I've been successfully using the quantized versions (several users have uploaded quantized builds to HF). The GGUFs run pretty efficiently. Have you tried any of them?

Haven't used GGUF yet, will try it later.

If you're on CPU, GGUF with llama.cpp (or its Python bindings via `pip install llama-cpp-python`) will be your best bet if you want to run anything useful.
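A minimal sketch with llama-cpp-python, assuming you've downloaded one of the community GGUF quants (the filename below is a placeholder for whichever quant you pick):

```python
from llama_cpp import Llama

# Load a local GGUF file; the filename is a placeholder for whichever
# quantized Phi-3.5 build you downloaded from HF.
llm = Llama(
    model_path="Phi-3.5-mini-instruct-Q4_K_M.gguf",
    n_ctx=4096,   # context window; raise it if you need longer prompts
    n_threads=8,  # roughly match your physical CPU cores
)

# create_chat_completion applies the model's chat template for you
# and returns an OpenAI-style response dict.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize GGUF in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```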
If you want something more user-friendly:

  • gpt4all, which uses llama.cpp as its backend.
  • ollama, which, as far as I recall, also uses llama.cpp as its backend (see the sketch after this list).
  • If you need to test some things on a GPU, Kaggle and Colab will give you plenty of total hours to play with; the main difference is the base machine - less disk/more RAM and more disk/less RAM, respectively (switch to a TPU runtime and I think you get about 100 GB of RAM, but you'll have to learn how to make use of the TPUs).
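For ollama, once the daemon is running, a quick test with its official Python client might look like this (assuming the `phi3.5` tag from the Ollama model library, fetched first with `ollama pull phi3.5`):

```python
# pip install ollama  -- official Python client for the Ollama daemon.
# Assumes `ollama pull phi3.5` has already downloaded the model.
import ollama

response = ollama.chat(
    model="phi3.5",
    messages=[{"role": "user", "content": "What is GGUF?"}],
)
print(response["message"]["content"])
```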

EDIT
You can also use the Azure AI Toolkit (personally, I feel it's a bit slow), which uses ONNX rather than GGUF - for more info, watch this YouTube video.
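If you want to poke at the ONNX route outside the Toolkit, here's a rough sketch with Microsoft's onnxruntime-genai package; this API has shifted between releases, and the model folder is a placeholder for wherever you put an ONNX export of Phi-3.5, so treat it as a starting point rather than gospel:

```python
# pip install onnxruntime-genai  -- Microsoft's generate() API for ONNX LLMs.
# The model path is a placeholder; point it at an ONNX export of Phi-3.5.
import onnxruntime_genai as og

model = og.Model("Phi-3.5-mini-instruct-onnx")
tokenizer = og.Tokenizer(model)

# Phi-3.5 uses the Phi-3 chat template.
prompt = "<|user|>\nWhat is ONNX?<|end|>\n<|assistant|>\n"

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)
params.input_ids = tokenizer.encode(prompt)

generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()

print(tokenizer.decode(generator.get_sequence(0)))
```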

Appreciate the information.

rockcat-miao changed discussion status to closed
