Create added_tokens.json
Copied from alpaca-13B and native; fixes an error when converting to ggml files:
Exception: Vocab size mismatch (model has 32001, but models/chavinlo_gpt4-x-alpaca/tokenizer.model has 32000)
- added_tokens.json +3 -0
added_tokens.json
ADDED
@@ -0,0 +1,3 @@
+{
+  "[PAD]": 32000
+}