---
license: apache-2.0
language:
- zh
pipeline_tag: text-generation
---

How to use
----------

Load the model with `transformers` and generate a response using the Llama 3 chat template:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "BoyangZ/Llama3-chinese_chat_ft"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an LLM assistant. Users will ask questions in Chinese, and you will answer in Chinese."},
    {"role": "user", "content": "李白是哪个朝代的人?"},  # "Which dynasty was Li Bai from?"
]

# Build the prompt with the Llama 3 chat template and move it to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Stop on either the standard EOS token or Llama 3's end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Decode only the newly generated tokens, skipping the prompt.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
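
For quick experiments, the same chat can also be run through the high-level `pipeline` API. This is a minimal sketch, assuming a recent `transformers` version that accepts chat-style message lists directly in the text-generation pipeline; the sampling parameters mirror the example above:

```python
from transformers import pipeline
import torch

# Sketch: high-level pipeline equivalent of the generate() example above.
pipe = pipeline(
    "text-generation",
    model="BoyangZ/Llama3-chinese_chat_ft",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an LLM assistant. Users will ask questions in Chinese, and you will answer in Chinese."},
    {"role": "user", "content": "李白是哪个朝代的人?"},
]

result = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.6, top_p=0.9)
# For chat input, generated_text holds the full message list; the last entry
# is the new assistant reply.
print(result[0]["generated_text"][-1]["content"])
```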

Example 1
---------
![image/png](https://cdn-uploads.huggingface.co/production/uploads/644a78de7c5c68c7762886eb/uvOKN0WPumRVwE_kPkFKj.png)

Example 2
---------

![image/png](https://cdn-uploads.huggingface.co/production/uploads/644a78de7c5c68c7762886eb/FoExkJHBp-yM6-XFwaDpG.png)

Example 3
---------


![image/png](https://cdn-uploads.huggingface.co/production/uploads/644a78de7c5c68c7762886eb/1EorUSsh-28LZFZpp768k.png)