Update vocab_size in config.json to the tokenizer length, i.e. 32000. For high throughput, vLLM can sample one of the padded tokens, which results in an error in vLLM. It is an open issue here: https://github.com/vllm-project/vllm/issues/340
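If it helps, here is a rough sketch of that edit (the model directory path is a placeholder, not something from this thread):

```python
# Sketch: compare len(tokenizer) with config.json and patch vocab_size if they differ.
import json
from pathlib import Path
from transformers import AutoTokenizer

model_dir = Path("./my-model")  # hypothetical local checkpoint directory
tokenizer = AutoTokenizer.from_pretrained(model_dir)

config_path = model_dir / "config.json"
config = json.loads(config_path.read_text())

print("config vocab_size:", config["vocab_size"], "| len(tokenizer):", len(tokenizer))

if config["vocab_size"] != len(tokenizer):
    config["vocab_size"] = len(tokenizer)  # e.g. 32000 for a base Llama tokenizer
    config_path.write_text(json.dumps(config, indent=2))
    print("patched vocab_size to", len(tokenizer))
```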
The "fix" gives this error: Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/workspace/vllm/entrypoints/openai/api_server.py", line 236, in
engine = AsyncLLMEngine.from_engine_args(engine_args)
File "/workspace/vllm/engine/async_llm_engine.py", line 628, in from_engine_args
engine = cls(parallel_config.worker_use_ray,
File "/workspace/vllm/engine/async_llm_engine.py", line 321, in init
self.engine = self._init_engine(*args, **kwargs)
File "/workspace/vllm/engine/async_llm_engine.py", line 369, in _init_engine
return engine_class(*args, **kwargs)
File "/workspace/vllm/engine/llm_engine.py", line 128, in init
self._init_workers()
File "/workspace/vllm/engine/llm_engine.py", line 181, in _init_workers
self._run_workers("load_model")
File "/workspace/vllm/engine/llm_engine.py", line 1041, in _run_workers
driver_worker_output = getattr(self.driver_worker,
File "/workspace/vllm/worker/worker.py", line 100, in load_model
self.model_runner.load_model()
File "/workspace/vllm/worker/model_runner.py", line 88, in load_model
self.model = get_model(self.model_config,
File "/workspace/vllm/model_executor/utils.py", line 52, in get_model
return get_model_fn(model_config, device_config, **kwargs)
File "/workspace/vllm/model_executor/model_loader.py", line 86, in get_model
model.load_weights(model_config.model, model_config.download_dir,
File "/workspace/vllm/model_executor/models/llama.py", line 391, in load_weights
weight_loader(param, loaded_weight)
File "/workspace/vllm/model_executor/layers/vocab_parallel_embedding.py", line 88, in weight_loader
assert loaded_weight.shape[parallel_dim] == self.org_vocab_size
AssertionError
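For context, the assertion compares the number of embedding rows in the checkpoint against the (now smaller) vocab_size read from config.json. A quick sketch to see the mismatch; the file and tensor names here are assumptions for a single-shard Llama safetensors checkpoint (sharded checkpoints would need the index file):

```python
# Sketch: the checkpoint's embedding matrix still has rows for the extra tokens,
# so its row count no longer matches the vocab_size written into config.json.
import json
from pathlib import Path
from safetensors import safe_open

model_dir = Path("./my-model")  # hypothetical path
config = json.loads((model_dir / "config.json").read_text())

with safe_open(str(model_dir / "model.safetensors"), framework="pt") as f:
    embed_shape = f.get_slice("model.embed_tokens.weight").get_shape()

print("embedding rows in checkpoint:", embed_shape[0])       # e.g. 32032
print("vocab_size in config.json:  ", config["vocab_size"])  # e.g. 32000
# vLLM's weight_loader asserts these two numbers are equal, hence the
# AssertionError when config.json is edited but the weights are not resized.
```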
my bad
Yes, it is giving an error. It seems like the vLLM folks are working on it; hopefully it gets merged soon.
The model was trained with these 32 extra tokens; they seem to be related to function calling from https://github.com/NousResearch/Hermes-Function-Calling
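A quick way to check whether those extra tokens are registered with the tokenizer or only present in the weights (sketch; the path is a placeholder):

```python
# Sketch: compare the base vocabulary with any added tokens in the tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./my-model")  # hypothetical path

print("base vocab size:", tokenizer.vocab_size)         # size of the base vocabulary
print("total length:   ", len(tokenizer))               # base vocab + any added tokens
print("added tokens:   ", tokenizer.get_added_vocab())  # e.g. function-calling specials, if present
# If len(tokenizer) is still 32000 while the checkpoint embedding has 32032 rows,
# the extra tokens live only in the weights, and shrinking vocab_size in
# config.json trips the assertion above.
```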
It is done here: https://github.com/vllm-project/vllm/pull/3500. It is working fine now; just upgrade to the latest vLLM repo. Putting it here for anyone who might have the same difficulty.
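A minimal smoke test after upgrading (sketch; the model path is a placeholder for your checkpoint):

```python
# Sketch: after `pip install -U vllm`, the model should load without the assertion.
from vllm import LLM, SamplingParams

llm = LLM(model="./my-model")  # or the Hugging Face repo id of your checkpoint
out = llm.generate(["Hello"], SamplingParams(max_tokens=16))
print(out[0].outputs[0].text)
```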