/home/floriadmin/miniforge3/envs/mlc/bin/python -m mlc_llm gen_config ../dist/models/ToolLLaMA-2-7b-v2 --quantization q4f32_1 --conv-template llama-2 --output /tmp/tmpxjsa38do --tensor-parallel-shards 2

[2024-03-18 21:03:53] INFO auto_config.py:115: Found model configuration: ../dist/models/ToolLLaMA-2-7b-v2/config.json
[2024-03-18 21:03:53] INFO auto_config.py:153: Found model type: llama. Use `--model-type` to override.
[2024-03-18 21:03:53] INFO llama_model.py:52: context_window_size not found in config.json. Falling back to max_position_embeddings (4096)
[2024-03-18 21:03:53] INFO llama_model.py:72: prefill_chunk_size defaults to context_window_size (4096)
[2024-03-18 21:03:53] INFO config.py:106: Overriding max_batch_size from 1 to 80
[2024-03-18 21:03:53] INFO config.py:106: Overriding tensor_parallel_shards from 1 to 2
[2024-03-18 21:03:53] INFO gen_config.py:133: [generation_config.json] Setting bos_token_id: 1
[2024-03-18 21:03:53] INFO gen_config.py:133: [generation_config.json] Setting eos_token_id: 2
[2024-03-18 21:03:53] INFO gen_config.py:145: Found tokenizer config: ../dist/models/ToolLLaMA-2-7b-v2/tokenizer.model. Copying to /tmp/tmpxjsa38do/tokenizer.model
[2024-03-18 21:03:53] INFO gen_config.py:147: Not found tokenizer config: ../dist/models/ToolLLaMA-2-7b-v2/tokenizer.json
[2024-03-18 21:03:53] INFO gen_config.py:147: Not found tokenizer config: ../dist/models/ToolLLaMA-2-7b-v2/vocab.json
[2024-03-18 21:03:53] INFO gen_config.py:147: Not found tokenizer config: ../dist/models/ToolLLaMA-2-7b-v2/merges.txt
[2024-03-18 21:03:53] INFO gen_config.py:147: Not found tokenizer config: ../dist/models/ToolLLaMA-2-7b-v2/added_tokens.json
[2024-03-18 21:03:53] INFO gen_config.py:145: Found tokenizer config: ../dist/models/ToolLLaMA-2-7b-v2/tokenizer_config.json. Copying to /tmp/tmpxjsa38do/tokenizer_config.json
[2024-03-18 21:03:53] INFO gen_config.py:153: The model has `tokenizer.model` but not `tokenizer.json`. It is always recommended to prefer JSON instead. Attempting to convert using HuggingFace transformers library
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
[2024-03-18 21:03:54] INFO gen_config.py:167: Succesfully converted `tokenizer.model` to: /tmp/tmpxjsa38do/tokenizer.json
[2024-03-18 21:03:54] INFO gen_config.py:75: [System default] Setting pad_token_id: 0
[2024-03-18 21:03:54] INFO gen_config.py:75: [System default] Setting temperature: 0.7
[2024-03-18 21:03:54] INFO gen_config.py:75: [System default] Setting presence_penalty: 0.0
[2024-03-18 21:03:54] INFO gen_config.py:75: [System default] Setting frequency_penalty: 0.0
[2024-03-18 21:03:54] INFO gen_config.py:75: [System default] Setting repetition_penalty: 1.0
[2024-03-18 21:03:54] INFO gen_config.py:75: [System default] Setting top_p: 0.95
[2024-03-18 21:03:54] INFO gen_config.py:75: [System default] Setting mean_gen_len: 128
[2024-03-18 21:03:54] INFO gen_config.py:75: [System default] Setting max_gen_len: 512
[2024-03-18 21:03:54] INFO gen_config.py:75: [System default] Setting shift_fill_factor: 0.3
[2024-03-18 21:03:54] INFO gen_config.py:198: Dumping configuration file to: /tmp/tmpxjsa38do/mlc-chat-config.json

/home/floriadmin/miniforge3/envs/mlc/bin/python -m mlc_llm convert_weight ../dist/models/ToolLLaMA-2-7b-v2 --quantization q4f32_1 --source-format auto --output /tmp/tmpxjsa38do

[2024-03-18 21:03:55] INFO auto_config.py:115: Found model configuration: ../dist/models/ToolLLaMA-2-7b-v2/config.json
[2024-03-18 21:03:56] INFO auto_device.py:76: Found device: cuda:0
[2024-03-18 21:03:56] INFO auto_device.py:76: Found device: cuda:1
[2024-03-18 21:03:56] INFO auto_device.py:76: Found device: cuda:2
[2024-03-18 21:03:56] INFO auto_device.py:76: Found device: cuda:3
[2024-03-18 21:03:56] INFO auto_device.py:76: Found device: cuda:4
[2024-03-18 21:03:56] INFO auto_device.py:76: Found device: cuda:5
[2024-03-18 21:03:56] INFO auto_device.py:76: Found device: cuda:6
[2024-03-18 21:03:56] INFO auto_device.py:76: Found device: cuda:7
[2024-03-18 21:03:56] INFO auto_device.py:76: Found device: cuda:8
[2024-03-18 21:03:56] INFO auto_device.py:76: Found device: cuda:9
[2024-03-18 21:03:57] INFO auto_device.py:85: Not found device: rocm:0
[2024-03-18 21:03:58] INFO auto_device.py:85: Not found device: metal:0
[2024-03-18 21:04:02] INFO auto_device.py:76: Found device: vulkan:0
[2024-03-18 21:04:02] INFO auto_device.py:76: Found device: vulkan:1
[2024-03-18 21:04:02] INFO auto_device.py:76: Found device: vulkan:2
[2024-03-18 21:04:02] INFO auto_device.py:76: Found device: vulkan:3
[2024-03-18 21:04:02] INFO auto_device.py:76: Found device: vulkan:4
[2024-03-18 21:04:02] INFO auto_device.py:76: Found device: vulkan:5
[2024-03-18 21:04:02] INFO auto_device.py:76: Found device: vulkan:6
[2024-03-18 21:04:02] INFO auto_device.py:76: Found device: vulkan:7
[2024-03-18 21:04:02] INFO auto_device.py:76: Found device: vulkan:8
[2024-03-18 21:04:02] INFO auto_device.py:76: Found device: vulkan:9
[2024-03-18 21:04:02] INFO auto_device.py:76: Found device: vulkan:10
[2024-03-18 21:04:03] INFO auto_device.py:85: Not found device: opencl:0
[2024-03-18 21:04:03] INFO auto_device.py:33: Using device: cuda:0
[2024-03-18 21:04:03] INFO auto_weight.py:70: Finding weights in: ../dist/models/ToolLLaMA-2-7b-v2
[2024-03-18 21:04:03] INFO auto_weight.py:120: Found source weight format: huggingface-torch. Source configuration: ../dist/models/ToolLLaMA-2-7b-v2/pytorch_model.bin.index.json
[2024-03-18 21:04:03] INFO auto_weight.py:167: Not found Huggingface Safetensor
[2024-03-18 21:04:03] INFO auto_weight.py:106: Using source weight configuration: ../dist/models/ToolLLaMA-2-7b-v2/pytorch_model.bin.index.json. Use `--source` to override.
[2024-03-18 21:04:03] INFO auto_weight.py:110: Using source weight format: huggingface-torch. Use `--source-format` to override.
[2024-03-18 21:04:03] INFO auto_config.py:153: Found model type: llama. Use `--model-type` to override.
[2024-03-18 21:04:03] INFO llama_model.py:52: context_window_size not found in config.json. Falling back to max_position_embeddings (4096)
[2024-03-18 21:04:03] INFO llama_model.py:72: prefill_chunk_size defaults to context_window_size (4096)
Weight conversion with arguments:
  --config          ../dist/models/ToolLLaMA-2-7b-v2/config.json
  --quantization    GroupQuantize(name='q4f32_1', kind='group-quant', group_size=40, quantize_dtype='int4', storage_dtype='uint32', model_dtype='float32', linear_weight_layout='NK', quantize_embedding=True, quantize_final_fc=True, num_elem_per_storage=8, num_storage_per_group=5, max_int_value=7)
  --model-type      llama
  --device          cuda:0
  --source          ../dist/models/ToolLLaMA-2-7b-v2/pytorch_model.bin.index.json
  --source-format   huggingface-torch
  --output          /tmp/tmpxjsa38do
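As a side note, the derived bookkeeping fields in the GroupQuantize line above follow arithmetically from the dtype choices it prints; a minimal sketch of that arithmetic (variable names are illustrative, not MLC's internal API):

```python
# Reproduce the derived GroupQuantize fields from the q4f32_1 settings above.
# 4-bit signed integer weights are packed into 32-bit storage words.
quantize_bits = 4   # from quantize_dtype='int4'
storage_bits = 32   # from storage_dtype='uint32'
group_size = 40     # elements sharing one scale, per the log line

num_elem_per_storage = storage_bits // quantize_bits        # 8 int4 values per uint32
num_storage_per_group = group_size // num_elem_per_storage  # 5 uint32 words per group
max_int_value = 2 ** (quantize_bits - 1) - 1                # 7, the max signed magnitude

print(num_elem_per_storage, num_storage_per_group, max_int_value)  # 8 5 7
```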

Start storing to cache /tmp/tmpxjsa38do

  0%|          | 0/195 [00:00<?, ?it/s]
[2024-03-18 21:04:05] INFO huggingface_loader.py:182: Loading HF parameters from: ../dist/models/ToolLLaMA-2-7b-v2/pytorch_model-00003-of-00003.bin

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/floriadmin/mlc-llm/python/mlc_llm/__main__.py", line 47, in <module>
    main()
  File "/home/floriadmin/mlc-llm/python/mlc_llm/__main__.py", line 28, in main
    cli.main(sys.argv[2:])
  File "/home/floriadmin/mlc-llm/python/mlc_llm/cli/convert_weight.py", line 87, in main
    convert_weight(
  File "/home/floriadmin/mlc-llm/python/mlc_llm/interface/convert_weight.py", line 182, in convert_weight
    _convert_args(args)
  File "/home/floriadmin/mlc-llm/python/mlc_llm/interface/convert_weight.py", line 146, in _convert_args
    tvmjs.dump_ndarray_cache(
  File "/home/floriadmin/miniforge3/envs/mlc/lib/python3.11/site-packages/tvm/contrib/tvmjs.py", line 210, in dump_ndarray_cache
    for k, origin_v in param_generator:
  File "/home/floriadmin/mlc-llm/python/mlc_llm/interface/convert_weight.py", line 130, in _param_generator
    for name, param in loader.load(device=args.device, preshard_funcs=preshard_funcs):
  File "/home/floriadmin/mlc-llm/python/mlc_llm/loader/huggingface_loader.py", line 117, in load
    param = self._load_mlc_param(mlc_name, device=device)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/floriadmin/mlc-llm/python/mlc_llm/loader/huggingface_loader.py", line 147, in _load_mlc_param
    self._load_file(path)
  File "/home/floriadmin/mlc-llm/python/mlc_llm/loader/huggingface_loader.py", line 186, in _load_file
    for name, param in load_func(path):
  File "/home/floriadmin/mlc-llm/python/mlc_llm/loader/utils.py", line 42, in load_torch_shard
    for name, param in torch.load(path, map_location=torch.device("cpu")).items():
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/floriadmin/miniforge3/envs/mlc/lib/python3.11/site-packages/torch/serialization.py", line 998, in load
    with _open_file_like(f, 'rb') as opened_file:
         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/floriadmin/miniforge3/envs/mlc/lib/python3.11/site-packages/torch/serialization.py", line 445, in _open_file_like
    return _open_file(name_or_buffer, mode)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/floriadmin/miniforge3/envs/mlc/lib/python3.11/site-packages/torch/serialization.py", line 426, in __init__
    super().__init__(open(name, mode))
                     ^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '../dist/models/ToolLLaMA-2-7b-v2/pytorch_model-00003-of-00003.bin'
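The traceback bottoms out in a plain FileNotFoundError: pytorch_model.bin.index.json lists a third shard that is not present on disk, so the checkpoint download is likely incomplete. A quick way to list every shard the HF index expects but the directory lacks (a sketch assuming the standard HuggingFace sharded-checkpoint layout; `missing_shards` is an illustrative helper, not part of mlc_llm):

```python
import json
import os

def missing_shards(model_dir: str, index_name: str = "pytorch_model.bin.index.json"):
    """Return shard filenames listed in the HF weight index but absent on disk."""
    with open(os.path.join(model_dir, index_name)) as f:
        index = json.load(f)
    # weight_map maps each tensor name to the shard file that contains it.
    expected = sorted(set(index["weight_map"].values()))
    return [s for s in expected if not os.path.exists(os.path.join(model_dir, s))]
```

For the failure above, `missing_shards("../dist/models/ToolLLaMA-2-7b-v2")` would presumably report `pytorch_model-00003-of-00003.bin`; re-fetching that shard (or the whole checkpoint) should let `convert_weight` proceed.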