Bad output from the model
Original Code:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "/data2/Smaug-72B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=1024,
    do_sample=True,          # required: temperature/top_p are ignored under greedy decoding
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15,
    return_full_text=False,
)
prompt = """[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question,
please don't share false information.
<</SYS>>
写一段排序代码. [/INST]
Answer:
"""
print(pipe(prompt)[0]['generated_text'])
Response:
,<<, are, and, in, is a help, right, and, in, is a good, reliable, and, in, is a, well, trust, and, in, is a, traditional, respectable, and, in, is a, SYS, help, and, in, is a, ...
(truncated: the output continues for over a thousand tokens in the same degenerate pattern — incoherent fragments of the system prompt such as "and, in, is a", "SYS", "<<", and "head" — with no actual answer and no sorting code.)
Same here. I tried both the Llama-2 prompt format and the Qwen prompt format; nothing works.
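As a sanity check on the prompt format, it may be worth letting the tokenizer build the prompt instead of hand-writing the Llama-2 template. A minimal sketch, assuming the Smaug checkpoint ships a chat template in its tokenizer config (if it doesn't, apply_chat_template will raise an error):

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "写一段排序代码."},  # "write a piece of sorting code"
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # append the header for the assistant's turn
)
print(pipe(prompt)[0]["generated_text"])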
Same bad output here.
Are you also using float16 instead of bfloat16?
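If fp16 overflow is the cause (Qwen-derived 72B models are commonly run in bfloat16, and activation overflow in fp16 produces exactly this kind of degenerate output), the fix would be a one-line change to the load call. A minimal sketch, assuming the GPU supports bfloat16 (Ampere or newer):

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps the fp32 exponent range, avoiding fp16 overflow
    device_map="auto",
)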
Same here when doing inference with vLLM.
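For the vLLM path, the same dtype hypothesis can be tested by forcing bfloat16 at load time, since vLLM otherwise takes the dtype from the checkpoint config. A minimal sketch; tensor_parallel_size=4 is an assumption for fitting a 72B model across four GPUs:

from vllm import LLM, SamplingParams

llm = LLM(
    model="/data2/Smaug-72B-v0.1",
    dtype="bfloat16",        # force bf16 instead of fp16
    tensor_parallel_size=4,  # assumption: shard the 72B weights over 4 GPUs
)
params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=1024)
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)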