Upload the tokenizer file on all quants.

#2

The tokenizer is incompatible with anything that asks the model to continue an existing generation of its own: it was appending the Mistral EOS token even when the instruct template was changed.

All I had to do was delete the entries referring to the EOS token `</s>` in the tokenizer file, just below where it says:

```json
"post_processor": {
  "type": "TemplateProcessing",
```

That was making it fail with twinbook, oobabooga's notebook, the "Start reply with" option, and the continue button.
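For reference, a fix like the one described can be scripted rather than done by hand. This is a minimal sketch, assuming the standard `tokenizer.json` layout used by Hugging Face `tokenizers` (a `TemplateProcessing` post-processor with `single`, `pair`, and `special_tokens` entries); the function name and default path are illustrative, not part of this PR.

```python
import json

def strip_eos_from_post_processor(path="tokenizer.json", eos="</s>"):
    """Remove the EOS token entries from a TemplateProcessing
    post-processor so the tokenizer stops appending EOS, which
    otherwise breaks continuing an existing generation."""
    with open(path, encoding="utf-8") as f:
        tok = json.load(f)

    pp = tok.get("post_processor", {})
    if pp.get("type") == "TemplateProcessing":
        # "single" and "pair" hold pieces such as
        # {"SpecialToken": {"id": "</s>", ...}} or {"Sequence": {...}};
        # keep everything except the EOS special-token pieces.
        for key in ("single", "pair"):
            pp[key] = [
                piece for piece in pp.get(key, [])
                if piece.get("SpecialToken", {}).get("id") != eos
            ]
        # Drop the token's entry from the special_tokens map as well.
        pp.get("special_tokens", {}).pop(eos, None)

    with open(path, "w", encoding="utf-8") as f:
        json.dump(tok, f, ensure_ascii=False, indent=2)
```

Running it once over each quant's `tokenizer.json` would apply the same edit everywhere.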

turboderp changed pull request status to merged

Looks good. Thanks
