fix: update chat_template to ChatML
I've tested locally with this chat_template and it worked fine.
I've also checked other models that use ChatML (e.g. https://huggingface.co/cognitivecomputations/dolphin-2.9.4-llama3.1-8b), and they use the same prompt template.
I'm using vllm for inference, not SillyTavern, so I'm not sure how this change will affect SillyTavern.
This is the output that this jinja produces:
<|im_start|>system
This is the system prompt<|im_end|>
<|im_start|>user
This is user message<|im_end|>
<|im_start|>assistant
This is assistant message<|im_end|>
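To make the template's behavior concrete, here is a minimal pure-Python sketch (not the tokenizer itself, and the function name is hypothetical) that mirrors the Jinja logic in this PR, including the `add_generation_prompt` branch:

```python
def render_chatml(messages, add_generation_prompt=False):
    """Mirror the PR's Jinja chat_template: wrap each message in
    ChatML <|im_start|>/<|im_end|> markers, then optionally open an
    assistant turn for generation."""
    out = ""
    for m in messages:
        # '<|im_start|>' + role + '\n' + content + '<|im_end|>' + '\n'
        out += "<|im_start|>" + m["role"] + "\n" + m["content"] + "<|im_end|>" + "\n"
    if add_generation_prompt:
        # Matches the template's trailing '<|im_start|>assistant\n'
        out += "<|im_start|>assistant\n"
    return out

messages = [
    {"role": "system", "content": "This is the system prompt"},
    {"role": "user", "content": "This is user message"},
    {"role": "assistant", "content": "This is assistant message"},
]
print(render_chatml(messages))
```

Running this reproduces the output shown above; passing `add_generation_prompt=True` appends the open `<|im_start|>assistant` turn that inference servers use to prompt the model's reply.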
tokenizer_config.json CHANGED (+1 -1)

@@ -37,7 +37,7 @@
     }
   },
   "bos_token": "<|startoftext|>",
- "chat_template": "{% if
+ "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "legacy": true,