Model showing up as Llama instead of Phi-3 in LM Studio

#1
by Tirin - opened

image.png
Maybe it's a user error on my part, but I wanted to check whether this is how it should be structured, since when using the system prompt it doesn't like Phi-3 or Llama.

xtuner org

Hi @Tirin
We manually converted the Phi-3 weights to the Llama architecture, for ease of model conversion.
The specific conversion process can be seen here
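
Roughly speaking, the conversion amounts to renaming the checkpoint keys and splitting Phi-3's fused projection matrices into separate LLaMA-style ones. Below is a minimal, untested sketch of that idea; the function name and the Phi-3-mini sizes are illustrative assumptions, not the exact script used for this repo:

```python
# Untested sketch of a Phi-3 -> LLaMA-style key conversion.
# Assumptions: HF Phi-3-mini key names (qkv_proj, gate_up_proj) and
# Phi-3-mini sizes (hidden_size=3072, intermediate_size=8192); q/k/v are
# equal-sized here because Phi-3-mini does not use grouped-query attention.
import torch


def phi3_to_llama_state_dict(state_dict, hidden_size=3072, intermediate_size=8192):
    converted = {}
    for name, tensor in state_dict.items():
        if name.endswith("qkv_proj.weight"):
            # Phi-3 fuses Q, K and V into one matrix; LLaMA keeps them separate.
            q, k, v = torch.split(tensor, hidden_size, dim=0)
            converted[name.replace("qkv_proj", "q_proj")] = q
            converted[name.replace("qkv_proj", "k_proj")] = k
            converted[name.replace("qkv_proj", "v_proj")] = v
        elif name.endswith("gate_up_proj.weight"):
            # Same idea for the fused MLP gate/up projection.
            gate, up = torch.split(tensor, intermediate_size, dim=0)
            converted[name.replace("gate_up_proj", "gate_proj")] = gate
            converted[name.replace("gate_up_proj", "up_proj")] = up
        else:
            converted[name] = tensor
    return converted
```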

Additionally, I'd like to know whether this conversion will affect deployment in LM Studio. Is there a way to manually set the chat template?

@LZHgrla Yes, there is a way to configure chat templates in LM Studio. 😁

xtuner org

Great!!

Does this model use Phi-3 or Llava chat templates?

Also, in LM Studio, because of how the GGUF files are named, the model shows up confusingly in the model selection dropdown.

image.png

I usually name mine something like "llava-phi-3-mini-Q4_K_M.gguf"

xtuner org

@saishf
Hi! Thanks for your advice.

I have modified the file names; could you help me check whether they are suitable?

https://huggingface.co/xtuner/llava-phi-3-mini-gguf/tree/main

xtuner org

Please use the Phi-3 chat template, not the LLaVA one.
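
For anyone setting the template manually in LM Studio, the Phi-3 chat format looks roughly like the string below. This is a hand-written example with placeholder messages, so double-check it against the Phi-3 model card before relying on it:

```python
# Hand-written example of the Phi-3 chat format (messages are placeholders).
# When configuring this manually, stop generation on "<|end|>".
system_message = "You are a helpful assistant."
user_message = "Describe this image."

prompt = (
    f"<|system|>\n{system_message}<|end|>\n"
    f"<|user|>\n{user_message}<|end|>\n"
    "<|assistant|>\n"
)
print(prompt)
```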

image.png
The renamed files look good!

xtuner org

Thanks! @saishf

The Phi-3 template stopped the end token from popping up, thanks 😸

Came to say thanks. Had that issue, too.
I still have to reload the model every time I want it to analyze a new image. If I don't, it talks about fantastic abstract art and pixels, and, weirdly, about wine bottles a lot of the time (there are no bottles in my images). It works after I reload the model. Is that expected behavior, or is it part of the config and something I can correct?

I haven't verified this yet, but there are "quirks" inside the llama.cpp engine that look for strings in the model name, and one of those strings is "phi3".
That switches pre-tokenization to be Phi-3 compatible.
I'd assume that using "llama" as the model name will cause tokenization errors (handling of newlines, stripping before special tokens, etc.).
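
If you want to see what a given file actually advertises to llama.cpp, one option is to read the GGUF metadata directly. A rough sketch using the `gguf` Python package (`pip install gguf`); the field-decoding details below match recent versions of the package but may need adjusting, and the file name is only an example:

```python
# Rough sketch: inspect which architecture / name / pre-tokenizer the GGUF
# file advertises, using the `gguf` package. Decoding details may vary
# between gguf versions; treat this as a starting point, not a recipe.
from gguf import GGUFReader

reader = GGUFReader("llava-phi-3-mini-f16.gguf")  # example path
for key in ("general.architecture", "general.name", "tokenizer.ggml.pre"):
    field = reader.fields.get(key)
    if field is None:
        print(f"{key}: <not present>")
        continue
    value = bytes(field.parts[field.data[-1]]).decode("utf-8")
    print(f"{key}: {value}")
```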
