---
base_model:
- ifable/gemma-2-Ifable-9B
---

Created with:

```
from transformers import AutoModelForCausalLM
import torch

# Load the model with the word embeddings untied from the LM head
model = AutoModelForCausalLM.from_pretrained(
    "ifable/gemma-2-Ifable-9B", tie_word_embeddings=False
)

# Untie the shared lm_head and embed_tokens weights by giving lm_head its own copy
model.lm_head.weight.data = model.model.embed_tokens.weight.data.clone()

# Convert the model to bf16
model = model.to(dtype=torch.bfloat16)

# Output directory
untied_model_dir = "mergekit/output"

# Save the untied model in bf16 format
model.save_pretrained(untied_model_dir)
model.config.save_pretrained(untied_model_dir)
```

I didn't copy the tokenizer from the original model; copy it yourself if you need it.
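A minimal sketch of copying the tokenizer over, assuming the untied model was saved to `mergekit/output` as above:

```
from transformers import AutoTokenizer

# Load the tokenizer from the original model and save it next to the untied weights
tokenizer = AutoTokenizer.from_pretrained("ifable/gemma-2-Ifable-9B")
tokenizer.save_pretrained("mergekit/output")
```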