Instruction mode output garbage in text-generation-webui #14
by raincandy-u · opened
raincandy-u changed discussion status to closed
What framework are you using? Have you tried the chat-instruct Mode?
raincandy-u changed discussion status to open
The chat-instruct prompt:
Continue the chat dialogue below. Write a single reply for the character "<|character|>".
<|prompt|>
The instruction template (the chat_template in config.json):
{{ '<s>' }}{% for message in messages %}{{'<|' + message['role'] + '|>' + '
' + message['content'] + '<|end|>
' }}{% endfor %}{% if add_generation_prompt %}{{ '<|assistant|>
' }}{% else %}{{ '<|endoftext|>' }}{% endif %}
It seems text-generation-webui is thrown off by the else branch in the instruction template; removing it produces the correct prompt.
The resulting template:
{{ '<s>' }}{% for message in messages %}{{'<|' + message['role'] + '|>' + '
' + message['content'] + '<|end|>
' }}{% endfor %}{% if add_generation_prompt %}{{ '<|assistant|>
' }}{% endif %}
You are correct @theo77186, the current chat_template in config.json is an attempt to have a single template that works for both pre-training and fine-tuning. When add_generation_prompt is missing, it adds an eos_token, which finishes the generation and can produce unexpected results.
Solved!! 😊
raincandy-u changed discussion status to closed