Model outputs gibberish after assistant response
#26
by komninos - opened
I'm prompting the model using ChatML on text-generation-webui. I'm getting pretty good results overall, but the model starts outputting gibberish after the assistant response. Instead of the gibberish, I expected an `<|im_end|>` token. The gibberish keeps being generated until it hits the `max_new_tokens` limit. Not sure how to deal with this.
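For context, the ChatML layout I'm using looks like the following (message contents here are just an illustration). The model is expected to end its reply with `<|im_end|>`:

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
```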
Hi, I am not familiar with text-generation-webui, but let me try.
- The default for `skip_special_tokens` is True, which means you may never see special tokens in the UI even if they are generated. It appears that this can be set to False in the UI. If that is possible and you can see `<|im_end|>` being generated, this is most likely a configuration issue: the stopping criteria are not properly set up.
- The stopping criteria in this package seem to be configurable only through stop strings, which can be set in the UI via `custom_stopping_strings`. (There are some default ones here.) This list should include `<|im_end|>`, `<|im_start|>`, and `<|endoftext|>`, with `skip_special_tokens` set to False; see the sketch after this list.
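If you want to verify this outside the UI, here is a minimal sketch of the same stopping behaviour using plain `transformers`. The checkpoint name is a placeholder assumption, not taken from this thread:

```python
# Minimal sketch: stop generation at <|im_end|> with plain transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your-chatml-model"  # hypothetical placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nHello!<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt")

# Treat <|im_end|> and <|endoftext|> as end-of-sequence so generation
# halts there instead of running on until max_new_tokens.
eos_ids = [
    tokenizer.convert_tokens_to_ids(t)
    for t in ("<|im_end|>", "<|endoftext|>")
]
output = model.generate(**inputs, max_new_tokens=256, eos_token_id=eos_ids)

# skip_special_tokens=False keeps the special tokens visible, which lets
# you confirm whether <|im_end|> is actually being produced.
print(tokenizer.decode(output[0], skip_special_tokens=False))
```

If `<|im_end|>` shows up in this output, the model is fine and the webui stop strings just need to be configured as described above.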
Please let me know if this helps. Thanks.
I have already posted this in the WeChat group where the official team saw it. They probably will not add dedicated support for text-generation-webui; how about opening an issue in the text-generation-webui repo?
komninos changed discussion status to closed