fix vocab size
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("LeoLM/leo-mistral-hessianai-7b-chat")
len(tokenizer)
# 32002
```
The `vocab_size` in config.json did not match the tokenizer's actual length, which leads to e.g. this vLLM error:
`TypeError: argument 'tokens': 'NoneType' object cannot be converted to 'PyString'`
(see [here](https://github.com/vllm-project/vllm/issues/516#issuecomment-1657507293))
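A quick pre-flight check for this class of problem, sketched with the standard transformers API (the assertion is only illustrative):

```python
from transformers import AutoConfig, AutoTokenizer

model_id = "LeoLM/leo-mistral-hessianai-7b-chat"

config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Before this fix the config reported fewer tokens than the tokenizer,
# so the IDs of the added special tokens had no entry vLLM could decode.
print(config.vocab_size, len(tokenizer))
assert config.vocab_size == len(tokenizer), "config/tokenizer vocab size mismatch"
```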
- config.json +1 -1

config.json
CHANGED
```diff
@@ -21,5 +21,5 @@
   "torch_dtype": "float16",
   "transformers_version": "4.34.0",
   "use_cache": true,
-  "vocab_size": 32000
+  "vocab_size": 32002
 }
```
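For a local clone of the repo, the same fix can be applied programmatically; a minimal sketch, assuming a hypothetical checkout path `./leo-mistral-hessianai-7b-chat` (only the config metadata was stale here, so rewriting config.json alone suffices):

```python
from transformers import AutoConfig, AutoTokenizer

# Hypothetical local path to a clone of the model repo.
local_repo = "./leo-mistral-hessianai-7b-chat"

config = AutoConfig.from_pretrained(local_repo)
tokenizer = AutoTokenizer.from_pretrained(local_repo)

# Align the declared vocab size with the tokenizer's actual length (32002).
config.vocab_size = len(tokenizer)
config.save_pretrained(local_repo)  # rewrites config.json in place
```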