
Model Details

Model Description

This model is a LoRA fine-tune of Qwen2-7B-Instruct on Zhihu Q&A data. Its performance still has issues, and I do not recommend using it or conducting similar social experiments, as they could lead to undesirable consequences.

  • Developed by: Jinyu (Isaac)
  • Language(s) (NLP): Chinese

Direct Use

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
from peft import PeftModel

model_path = '/root/autodl-tmp/qwen/Qwen2-7B-Instruct/'  # path to the base model
lora_path = './output/Qwen2_instruct_Zhihu_pro_lora/checkpoint-1'  # path to the LoRA checkpoint

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Load the base model
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True).eval()

# Load the LoRA weights on top of the base model
model = PeftModel.from_pretrained(model, model_id=lora_path)

prompt = """
高中数学基础差,如何预习高等数学?
"""
inputs = tokenizer.apply_chat_template([{"role": "system", "content": "你是一个真实的人类。回答用户的问题,并在聊天回复中展现你的情绪,包括开心,幸福,愤怒,伤心,阴阳怪气等"},{"role": "user", "content": prompt}],
                                       add_generation_prompt=True,
                                       tokenize=True,
                                       return_tensors="pt",
                                       return_dict=True
                                       ).to('cuda')
gen_kwargs = {"max_length": 1000, "do_sample": True, "top_p": 0.8}
with torch.no_grad():
    outputs = model.generate(**inputs, **gen_kwargs)
    # Drop the prompt tokens so only the newly generated reply is decoded
    outputs = outputs[:, inputs['input_ids'].shape[1]:]
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
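If you want to serve the model without a runtime peft dependency, the LoRA adapter can first be merged into the base weights. A minimal sketch, assuming the same model_path and lora_path as above; the ./merged_model output directory is hypothetical:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
from peft import PeftModel

model_path = '/root/autodl-tmp/qwen/Qwen2-7B-Instruct/'
lora_path = './output/Qwen2_instruct_Zhihu_pro_lora/checkpoint-1'

base = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, trust_remote_code=True)
# Fold the LoRA deltas into the base weights and drop the adapter wrapper
merged = PeftModel.from_pretrained(base, model_id=lora_path).merge_and_unload()
merged.save_pretrained('./merged_model')  # hypothetical output directory
AutoTokenizer.from_pretrained(model_path, trust_remote_code=True).save_pretrained('./merged_model')

The merged checkpoint can then be loaded with AutoModelForCausalLM.from_pretrained alone, exactly like the original Qwen2-7B-Instruct.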
