Model output is poor and GPU memory usage is too high

#1
by lippepetain - opened

Here is my code:
from transformers import LlamaTokenizer, LlamaForCausalLM
import torch

tokenizer = LlamaTokenizer.from_pretrained("/home/zwfeng4/PLL/LLM")
model = LlamaForCausalLM.from_pretrained("/home/zwfeng4/PLL/LLM").half().to("cuda")

instruction = "介绍一下你自己"
batch = tokenizer(instruction, return_tensors="pt", add_special_tokens=False).to("cuda")
with torch.no_grad():
    output = model.generate(**batch, max_new_tokens=1024, temperature=0.7, do_sample=True)
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
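One common cause of a model that only echoes the input is feeding it a bare instruction instead of the prompt template it was instruction-tuned on; `add_special_tokens=False` also drops the BOS token the base Llama tokenizer would normally prepend. As a minimal sketch (the exact template depends on how this checkpoint was fine-tuned; the Alpaca-style wrapper below is an assumption, not this model's confirmed format):

```python
# Hypothetical Alpaca-style prompt wrapper; the real template depends
# on the checkpoint's fine-tuning data and may differ.
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a bare instruction in the fine-tuning prompt format."""
    return PROMPT_TEMPLATE.format(instruction=instruction)

prompt = build_prompt("介绍一下你自己")
```

The wrapped `prompt` would then be tokenized with `add_special_tokens=True` (the default) so the BOS token is included before calling `model.generate`.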
What is going wrong here? The model only repeats the input back. I am running this on a V100 with 32 GB of VRAM.
