Error converting to fp16: INFO:hf-to-gguf:Loading model: qwen2.5-3b

#135
by nanowell - opened

Error converting to fp16:

INFO:hf-to-gguf:Loading model: qwen2.5-3b
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Exporting model...
INFO:hf-to-gguf:gguf: loading model weight map from 'model.safetensors.index.json'
INFO:hf-to-gguf:gguf: loading model part 'model-00001-of-00002.safetensors'
INFO:hf-to-gguf:token_embd.weight, torch.bfloat16 --> F16, shape = {2048, 151936}
INFO:hf-to-gguf:blk.0.attn_norm.weight, torch.bfloat16 --> F32, shape = {2048}
INFO:hf-to-gguf:blk.0.ffn_down.weight, torch.bfloat16 --> F16, shape = {11008, 2048}
INFO:hf-to-gguf:blk.0.ffn_gate.weight, torch.bfloat16 --> F16, shape = {2048, 11008}
INFO:hf-to-gguf:blk.0.ffn_up.weight, torch.bfloat16 --> F16, shape = {2048, 11008}
INFO:hf-to-gguf:blk.0.ffn_norm.weight, torch.bfloat16 --> F32, shape = {2048}
INFO:hf-to-gguf:blk.0.attn_k.bias, torch.bfloat16 --> F32, shape = {256}
INFO:hf-to-gguf:blk.0.attn_k.weight, torch.bfloat16 --> F16, shape = {2048, 256}
INFO:hf-to-gguf:blk.0.attn_output.weight, torch.bfloat16 --> F16, shape = {2048, 2048}
INFO:hf-to-gguf:blk.0.attn_q.bias, torch.bfloat16 --> F32, shape = {2048}
INFO:hf-to-gguf:blk.0.attn_q.weight, torch.bfloat16 --> F16, shape = {2048, 2048}
INFO:hf-to-gguf:blk.0.attn_v.bias, torch.bfloat16 --> F32, shape = {256}
INFO:hf-to-gguf:blk.0.attn_v.weight, torch.bfloat16 --> F16, shape = {2048, 256}
[... identical tensor-conversion lines for blk.1 through blk.28 omitted ...]
INFO:hf-to-gguf:gguf: loading model part 'model-00002-of-00002.safetensors'
[... identical tensor-conversion lines for blk.28 through blk.35 omitted ...]
INFO:hf-to-gguf:output_norm.weight, torch.bfloat16 --> F32, shape = {2048}
Traceback (most recent call last):
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 4437, in <module>
    main()
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 4431, in main
    model_instance.write()
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 435, in write
    self.prepare_metadata(vocab_only=False)
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 388, in prepare_metadata
    self.metadata = gguf.Metadata.load(self.metadata_override, self.dir_model_card, self.model_name, total_params)
  File "/home/user/app/llama.cpp/gguf-py/gguf/metadata.py", line 55, in load
    model_card = Metadata.load_model_card(model_path)
  File "/home/user/app/llama.cpp/gguf-py/gguf/metadata.py", line 127, in load_model_card
    data = yaml.safe_load(raw)
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/yaml/__init__.py", line 125, in safe_load
    return load(stream, SafeLoader)
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/yaml/__init__.py", line 81, in load
    return loader.get_single_data()
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/yaml/constructor.py", line 49, in get_single_data
    node = self.get_single_node()
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/yaml/composer.py", line 41, in get_single_node
    raise ComposerError("expected a single document in the stream",
yaml.composer.ComposerError: expected a single document in the stream
  in "<unicode string>", line 1, column 1:
    base_model:
    ^
but found another document
  in "<unicode string>", line 5, column 1:
    ---
    ^
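For reference, everything above the traceback is normal conversion output; the tensor export itself finished. The failure happens afterwards, when `gguf-py/gguf/metadata.py` parses the model card: it feeds the README's YAML front matter to `yaml.safe_load`, which accepts only a single YAML document, and the error message shows a second `---` document separator at line 5 of that front matter. A minimal sketch that reproduces the same `ComposerError` (the keys other than `base_model` are hypothetical; only the position of the stray `---` is taken from the error message):

```python
import yaml

# Hypothetical front matter mirroring the error message: "base_model" on
# line 1, and a stray "---" on line 5 that starts a second YAML document.
bad_front_matter = """base_model:
- Qwen/Qwen2.5-3B
license: other
pipeline_tag: text-generation
---
library_name: transformers
"""

try:
    # The same call the converter makes in gguf-py/gguf/metadata.py.
    yaml.safe_load(bad_front_matter)
except yaml.composer.ComposerError as err:
    print(err)  # "expected a single document in the stream ... but found another document"

# Merging the metadata into one document parses cleanly.
good_front_matter = """base_model:
- Qwen/Qwen2.5-3B
license: other
pipeline_tag: text-generation
library_name: transformers
"""
print(yaml.safe_load(good_front_matter))
```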

ggml.ai org

Can you give the exact link to the HF model?

I tried https://huggingface.co/Qwen/Qwen2.5-3B-Instruct and it works without any problem.

You may have an invalid README.md in your repo.
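Concretely, that matches the traceback: the README's front matter contains a second `---` separator, so it parses as two YAML documents instead of one. After merging the metadata into a single `---`-delimited block, you can sanity-check the card locally before retrying the conversion. A minimal sketch, assuming `huggingface_hub` is installed and `README.md` is the card in question; this is not the exact code path llama.cpp uses, so treat it as a rough check:

```python
from huggingface_hub import ModelCard

# Hypothetical local path: point this at the repo's README.md after merging
# its front matter into a single '---'-delimited YAML document.
card = ModelCard.load("README.md")
print(card.data)  # parsed front-matter metadata
```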

nanowell changed discussion status to closed
