/Users/cfruan/miniconda3/envs/mlc-chat-venv/bin/python -m mlc_llm gen_config /Users/Shared/models/Meta-Llama-3.1-70B-Instruct --quantization q0f16 --conv-template llama-3_1 --output local_dir/Llama-3.1-70B-Instruct-q0f16-MLC
[2024-07-23 17:43:51] INFO auto_config.py:116: Found model configuration: /Users/Shared/models/Meta-Llama-3.1-70B-Instruct/config.json
[2024-07-23 17:43:51] INFO auto_config.py:154: Found model type: llama. Use `--model-type` to override.
[2024-07-23 17:43:51] INFO llama_model.py:62: context_window_size not found in config.json. Falling back to max_position_embeddings (131072)
[2024-07-23 17:43:51] INFO llama_model.py:82: prefill_chunk_size defaults to 2048
[2024-07-23 17:43:51] INFO config.py:107: Overriding max_batch_size from 1 to 80
[2024-07-23 17:43:51] INFO gen_config.py:144: [generation_config.json] Setting bos_token_id: 128000
[2024-07-23 17:43:51] INFO gen_config.py:144: [generation_config.json] Setting eos_token_id: [128001, 128008, 128009]
[2024-07-23 17:43:51] INFO gen_config.py:144: [generation_config.json] Setting temperature: 0.6
[2024-07-23 17:43:51] INFO gen_config.py:144: [generation_config.json] Setting top_p: 0.9
[2024-07-23 17:43:51] INFO gen_config.py:158: Not found tokenizer config: /Users/Shared/models/Meta-Llama-3.1-70B-Instruct/tokenizer.model
[2024-07-23 17:43:51] INFO gen_config.py:156: Found tokenizer config: /Users/Shared/models/Meta-Llama-3.1-70B-Instruct/tokenizer.json. Copying to local_dir/Llama-3.1-70B-Instruct-q0f16-MLC/tokenizer.json
[2024-07-23 17:43:51] INFO gen_config.py:158: Not found tokenizer config: /Users/Shared/models/Meta-Llama-3.1-70B-Instruct/vocab.json
[2024-07-23 17:43:51] INFO gen_config.py:158: Not found tokenizer config: /Users/Shared/models/Meta-Llama-3.1-70B-Instruct/merges.txt
[2024-07-23 17:43:51] INFO gen_config.py:158: Not found tokenizer config: /Users/Shared/models/Meta-Llama-3.1-70B-Instruct/added_tokens.json
[2024-07-23 17:43:51] INFO gen_config.py:156: Found tokenizer config: /Users/Shared/models/Meta-Llama-3.1-70B-Instruct/tokenizer_config.json. Copying to local_dir/Llama-3.1-70B-Instruct-q0f16-MLC/tokenizer_config.json
[2024-07-23 17:43:51] INFO gen_config.py:217: Detected tokenizer info: {'token_postproc_method': 'byte_level', 'prepend_space_in_encode': False, 'strip_space_in_decode': False}
[2024-07-23 17:43:51] INFO gen_config.py:32: [System default] Setting pad_token_id: 0
[2024-07-23 17:43:51] INFO gen_config.py:32: [System default] Setting presence_penalty: 0.0
[2024-07-23 17:43:51] INFO gen_config.py:32: [System default] Setting frequency_penalty: 0.0
[2024-07-23 17:43:51] INFO gen_config.py:32: [System default] Setting repetition_penalty: 1.0
[2024-07-23 17:43:51] INFO gen_config.py:245: Dumping configuration file to: local_dir/Llama-3.1-70B-Instruct-q0f16-MLC/mlc-chat-config.json
/Users/cfruan/miniconda3/envs/mlc-chat-venv/bin/python -m mlc_llm convert_weight /Users/Shared/models/Meta-Llama-3.1-70B-Instruct --quantization q0f16 --output local_dir/Llama-3.1-70B-Instruct-q0f16-MLC
[2024-07-23 17:43:52] INFO auto_config.py:116: Found model configuration: /Users/Shared/models/Meta-Llama-3.1-70B-Instruct/config.json
[2024-07-23 17:43:52] INFO auto_device.py:88: Not found device: cuda:0
[2024-07-23 17:43:53] INFO auto_device.py:88: Not found device: rocm:0
[2024-07-23 17:43:54] INFO auto_device.py:79: Found device: metal:0
[2024-07-23 17:43:55] INFO auto_device.py:88: Not found device: vulkan:0
[2024-07-23 17:43:55] INFO auto_device.py:88: Not found device: opencl:0
[2024-07-23 17:43:55] INFO auto_device.py:35: Using device: metal:0
[2024-07-23 17:43:55] INFO auto_weight.py:71: Finding weights in: /Users/Shared/models/Meta-Llama-3.1-70B-Instruct
[2024-07-23 17:43:55] INFO auto_weight.py:137: Not found Huggingface PyTorch
[2024-07-23 17:43:55] INFO auto_weight.py:144: Found source weight format: huggingface-safetensor. Source configuration: /Users/Shared/models/Meta-Llama-3.1-70B-Instruct/model.safetensors.index.json
[2024-07-23 17:43:55] INFO auto_weight.py:107: Using source weight configuration: /Users/Shared/models/Meta-Llama-3.1-70B-Instruct/model.safetensors.index.json. Use `--source` to override.
[2024-07-23 17:43:55] INFO auto_weight.py:111: Using source weight format: huggingface-safetensor. Use `--source-format` to override.
[2024-07-23 17:43:55] INFO auto_config.py:154: Found model type: llama. Use `--model-type` to override.
[2024-07-23 17:43:55] INFO llama_model.py:62: context_window_size not found in config.json. Falling back to max_position_embeddings (131072)
[2024-07-23 17:43:55] INFO llama_model.py:82: prefill_chunk_size defaults to 2048
Weight conversion with arguments:
  --config          /Users/Shared/models/Meta-Llama-3.1-70B-Instruct/config.json
  --quantization    NoQuantize(name='q0f16', kind='no-quant', model_dtype='float16')
  --model-type      llama
  --device          metal:0
  --source          /Users/Shared/models/Meta-Llama-3.1-70B-Instruct/model.safetensors.index.json
  --source-format   huggingface-safetensor
  --output          local_dir/Llama-3.1-70B-Instruct-q0f16-MLC
Start storing to cache local_dir/Llama-3.1-70B-Instruct-q0f16-MLC
  0%| | 0/483 [00:00
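
For reference, the two commands recorded in the log above are collected below so the conversion can be reproduced. The interpreter path is shortened to plain `python`, and the source model path and output directory are specific to this machine; adjust them for your environment (the `mlc_llm` package must be installed and importable).

  # Step 1: generate mlc-chat-config.json and copy tokenizer files into the output directory
  python -m mlc_llm gen_config /Users/Shared/models/Meta-Llama-3.1-70B-Instruct \
      --quantization q0f16 --conv-template llama-3_1 \
      --output local_dir/Llama-3.1-70B-Instruct-q0f16-MLC

  # Step 2: convert the safetensors weights; q0f16 applies no quantization and stores float16
  python -m mlc_llm convert_weight /Users/Shared/models/Meta-Llama-3.1-70B-Instruct \
      --quantization q0f16 \
      --output local_dir/Llama-3.1-70B-Instruct-q0f16-MLC

After both steps, the output directory holds mlc-chat-config.json, the copied tokenizer files, and the converted float16 weights, as shown in the log.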