Upload the tokenizer file on all quants.
The tokenizer is incompatible with anything that makes the model continue one of its own generations: it kept appending the Mistral EOS token even after switching instruct templates.
All I had to do was delete the entries referring to the EOS token "</s>" in the tokenizer file, just below where it says:
"post_processor": {
"type": "TemplateProcessing",
This was breaking twinbook, oobabooga's notebook, the "Start reply with" option, and the continue button.
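The manual edit above can also be scripted. This is a minimal sketch, assuming the stock `tokenizers` layout for a "TemplateProcessing" post-processor (a list of "Sequence"/"SpecialToken" pieces under "single" and "pair", plus a "special_tokens" map); the exact keys may differ per model, so inspect your tokenizer.json first.

```python
import json

def strip_eos_from_post_processor(tok: dict, eos: str = "</s>") -> dict:
    """Remove EOS entries from a TemplateProcessing post_processor,
    mirroring the manual tokenizer.json edit described above."""
    pp = tok.get("post_processor") or {}
    if pp.get("type") == "TemplateProcessing":
        # Drop every SpecialToken piece whose id is the EOS token
        for key in ("single", "pair"):
            pp[key] = [
                piece for piece in pp.get(key, [])
                if piece.get("SpecialToken", {}).get("id") != eos
            ]
        # Drop the matching entry from the special_tokens map, if present
        pp.get("special_tokens", {}).pop(eos, None)
    return tok

# Usage sketch (file path is an assumption):
# with open("tokenizer.json", encoding="utf-8") as f:
#     tok = strip_eos_from_post_processor(json.load(f))
# with open("tokenizer.json", "w", encoding="utf-8") as f:
#     json.dump(tok, f, ensure_ascii=False)
```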
Files changed:
- tokenizer.json (added, +0 −0; diff too large to render)