(Fixed) Missing end token?
It might just be GPT4All v2.7.3, but after it finishes a response the CPU continues at 100% and it never stops (using Q5_K_M).
Thanks for pointing this out. Looking into it.
From GitHub:
The fine-tuned models were trained for dialogue applications. To get the expected features and performance for them, a specific formatting defined in ChatFormat needs to be followed: the prompt begins with a <|begin_of_text|> special token, after which one or more messages follow. Each message starts with the <|start_header_id|> tag, the role (system, user or assistant), and the <|end_header_id|> tag. After a double newline \n\n the contents of the message follow. The end of each message is marked by the <|eot_id|> token.
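If I'm reading that spec right, a minimal two-turn prompt (with the \n escapes rendered as real newlines) comes out as:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>

Hi!<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Note that, going by the quote, there is no extra newline between an <|eot_id|> and the next <|start_header_id|>.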
So I use it like this:
./main -m ~/models/Meta-Llama-3-8B-Instruct.Q8_0.gguf --color -n -2 -e -s 0 -p '<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant.<|eot_id|>\n<|start_header_id|>user<|end_header_id|>\n\nHi!<|eot_id|>\n<|start_header_id|>assistant<|end_header_id|>\n\n' -ngl 99 --mirostat 2 -c 8192 -r '<|eot_id|>' --in-prefix '\n<|start_header_id|>user<|end_header_id|>\n\n' --in-suffix '<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n' -i
Everything works perfectly.
@munish0838 I think alignment activation is causing it (edit: upon further testing it still freezes after non-alignment prompts, just not as often).
I tested your foundational non-Instruct Q5_K_M with the same GPT4All v2.7.3 app and prompts, and it stops after completing a response.
Then I re-tested the Instruct and it stops as it should after non-alignment prompts. But when alignment is triggered by my alignment prompts (e.g. make a list of cuss words), it gives an "I can't do that" response and never ends. So somehow activating alignment is apparently triggering this issue.
Not sure if this will help, but I'm currently using LM Studio and I noticed it kept going on and on. I set the preset to ChatML and it's working 100% fine. It's able to write Snake in one go!
Doesn't look like it was converted properly: in Ollama it goes into never-ending mode, which looks like tokenizer issues during conversion. Btw, how did you convert it? The latest llama.cpp (b2694) cannot do that. I'm using the official L3 template from https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/blob/main/tokenizer_config.json#L2053.
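In case it helps with reproducing: the workaround I've seen circulating for converting Llama 3 with llama.cpp around that build is forcing the BPE vocab explicitly, since the default vocab detection doesn't handle it. A sketch only, with my own placeholder paths and output names, not necessarily what QuantFactory ran:

# assumes llama.cpp around b2694, run from the repo root; paths are placeholders
python convert.py ./Meta-Llama-3-8B-Instruct --vocab-type bpe --outtype f16 --outfile llama3-8b-instruct.f16.gguf
./quantize llama3-8b-instruct.f16.gguf llama3-8b-instruct.Q8_0.gguf Q8_0

Even then, the resulting GGUF keeps whatever eos token the conversion picked up, which seems to be where the non-stopping comes from.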
Having the same issue with GPT4All.
I think this might be what's causing the model to not stop:
https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/discussions/4
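If it is that eos-token issue, then as far as I can tell you can patch an existing GGUF instead of re-downloading, using the gguf-set-metadata.py script that ships in llama.cpp's gguf-py/scripts. 128009 is the <|eot_id|> ID in the Llama 3 vocab; verify that against your file first, and back the file up, since the script edits it in place:

# points the GGUF's eos at <|eot_id|> (128009) instead of <|end_of_text|>
python gguf-py/scripts/gguf-set-metadata.py Meta-Llama-3-8B-Instruct.Q8_0.gguf tokenizer.ggml.eos_token_id 128009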
I had the same issue with LM Studio. Then I downloaded https://huggingface.co/NousResearch/Meta-Llama-3-8B-GGUF and hit a similar issue again. After changing the preset to "ChatML", it got better.
Re-uploading with updated end token
Thanks @munish0838, I tested the Q4_K_M when I noticed the timestamp had changed, and the end token issue was resolved. Closing the issue so I don't forget later.
Thanks for confirming, Phil. I'll upload the 70Bs with the same change in a few hours.
@0-hero Awesome! I was hoping you would, because the other 70b I downloaded had the same issue.
@Phil337 All the 70B quants are up: QuantFactory/Meta-Llama-3-70B-Instruct-GGUF