/home/ruihang/Workspace/miniconda3/envs/python311/bin/python -m mlc_llm gen_config /models/Llama-2-7b-chat-hf --quantization q0f16 --conv-template llama-2 --output /tmp/tmp8ew5r2yr
[2024-05-22 00:30:05] INFO auto_config.py:115: Found model configuration: /models/Llama-2-7b-chat-hf/config.json
[2024-05-22 00:30:05] INFO auto_config.py:153: Found model type: llama. Use `--model-type` to override.
[2024-05-22 00:30:05] INFO llama_model.py:52: context_window_size not found in config.json. Falling back to max_position_embeddings (4096)
[2024-05-22 00:30:05] INFO llama_model.py:72: prefill_chunk_size defaults to 2048
[2024-05-22 00:30:05] INFO config.py:106: Overriding max_batch_size from 1 to 80
[2024-05-22 00:30:05] INFO gen_config.py:255: [generation_config.json] Setting bos_token_id: 1
[2024-05-22 00:30:05] INFO gen_config.py:255: [generation_config.json] Setting eos_token_id: 2
[2024-05-22 00:30:05] INFO gen_config.py:255: [generation_config.json] Setting pad_token_id: 0
[2024-05-22 00:30:05] INFO gen_config.py:255: [generation_config.json] Setting temperature: 0.6
[2024-05-22 00:30:05] INFO gen_config.py:255: [generation_config.json] Setting top_p: 0.9
[2024-05-22 00:30:05] INFO gen_config.py:267: Found tokenizer config: /models/Llama-2-7b-chat-hf/tokenizer.model. Copying to /tmp/tmp8ew5r2yr/tokenizer.model
[2024-05-22 00:30:05] INFO gen_config.py:267: Found tokenizer config: /models/Llama-2-7b-chat-hf/tokenizer.json. Copying to /tmp/tmp8ew5r2yr/tokenizer.json
[2024-05-22 00:30:05] INFO gen_config.py:269: Not found tokenizer config: /models/Llama-2-7b-chat-hf/vocab.json
[2024-05-22 00:30:05] INFO gen_config.py:269: Not found tokenizer config: /models/Llama-2-7b-chat-hf/merges.txt
[2024-05-22 00:30:05] INFO gen_config.py:269: Not found tokenizer config: /models/Llama-2-7b-chat-hf/added_tokens.json
[2024-05-22 00:30:05] INFO gen_config.py:267: Found tokenizer config: /models/Llama-2-7b-chat-hf/tokenizer_config.json. Copying to /tmp/tmp8ew5r2yr/tokenizer_config.json
[2024-05-22 00:30:05] INFO gen_config.py:80: [System default] Setting presence_penalty: 0.0
[2024-05-22 00:30:05] INFO gen_config.py:80: [System default] Setting frequency_penalty: 0.0
[2024-05-22 00:30:05] INFO gen_config.py:80: [System default] Setting repetition_penalty: 1.0
[2024-05-22 00:30:05] INFO gen_config.py:80: [System default] Setting mean_gen_len: 128
[2024-05-22 00:30:05] INFO gen_config.py:80: [System default] Setting max_gen_len: 512
[2024-05-22 00:30:05] INFO gen_config.py:80: [System default] Setting shift_fill_factor: 0.3
[2024-05-22 00:30:05] INFO gen_config.py:335: Dumping configuration file to: /tmp/tmp8ew5r2yr/mlc-chat-config.json
/home/ruihang/Workspace/miniconda3/envs/python311/bin/python -m mlc_llm convert_weight /models/Llama-2-7b-chat-hf --quantization q0f16 --source-format auto --output /tmp/tmp8ew5r2yr
[2024-05-22 00:30:06] INFO auto_config.py:115: Found model configuration: /models/Llama-2-7b-chat-hf/config.json
[2024-05-22 00:30:06] INFO auto_device.py:79: Found device: cuda:0
[2024-05-22 00:30:06] INFO auto_device.py:79: Found device: cuda:1
[2024-05-22 00:30:07] INFO auto_device.py:88: Not found device: rocm:0
[2024-05-22 00:30:08] INFO auto_device.py:88: Not found device: metal:0
[2024-05-22 00:30:09] INFO auto_device.py:79: Found device: vulkan:0
[2024-05-22 00:30:09] INFO auto_device.py:79: Found device: vulkan:1
[2024-05-22 00:30:09] INFO auto_device.py:79: Found device: vulkan:2
[2024-05-22 00:30:10] INFO auto_device.py:79: Found device: opencl:0
[2024-05-22 00:30:10] INFO auto_device.py:79: Found device: opencl:1
[2024-05-22 00:30:10] INFO auto_device.py:35: Using device: cuda:0
[2024-05-22 00:30:10] INFO auto_weight.py:70: Finding weights in: /models/Llama-2-7b-chat-hf
[2024-05-22 00:30:10] INFO auto_weight.py:120: Found source weight format: huggingface-torch. Source configuration: /models/Llama-2-7b-chat-hf/pytorch_model.bin.index.json
[2024-05-22 00:30:10] INFO auto_weight.py:143: Found source weight format: huggingface-safetensor. Source configuration: /models/Llama-2-7b-chat-hf/model.safetensors.index.json
[2024-05-22 00:30:10] INFO auto_weight.py:106: Using source weight configuration: /models/Llama-2-7b-chat-hf/pytorch_model.bin.index.json. Use `--source` to override.
[2024-05-22 00:30:10] INFO auto_weight.py:110: Using source weight format: huggingface-torch. Use `--source-format` to override.
[2024-05-22 00:30:10] INFO auto_config.py:153: Found model type: llama. Use `--model-type` to override.
[2024-05-22 00:30:10] INFO llama_model.py:52: context_window_size not found in config.json. Falling back to max_position_embeddings (4096)
[2024-05-22 00:30:10] INFO llama_model.py:72: prefill_chunk_size defaults to 2048
Weight conversion with arguments:
  --config          /models/Llama-2-7b-chat-hf/config.json
  --quantization    NoQuantize(name='q0f16', kind='no-quant', model_dtype='float16')
  --model-type      llama
  --device          cuda:0
  --source          /models/Llama-2-7b-chat-hf/pytorch_model.bin.index.json
  --source-format   huggingface-torch
  --output          /tmp/tmp8ew5r2yr
Start storing to cache /tmp/tmp8ew5r2yr
  0%|          | 0/195 [00:00<?, ?it/s]