Hongbin37 committed
Commit 745660c
1 Parent(s): 27f632a

Update README.md

Files changed (1): README.md (+5 -1)
README.md CHANGED
@@ -4,17 +4,20 @@ language:
 - zh
 ---
 An instruction-tuned LoRA model of https://huggingface.co/baichuan-inc/baichuan-7B
+
 Training framework: https://github.com/hiyouga/LLaMA-Factory
+
 Please follow the baichuan-7B License to use this model.
 
 Usage:
+```python
 from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
 
 tokenizer = AutoTokenizer.from_pretrained("hiyouga/baichuan-7b-sft", trust_remote_code=True)
 model = AutoModelForCausalLM.from_pretrained("hiyouga/baichuan-7b-sft", trust_remote_code=True).cuda()
 streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
 
-query = "晚上睡不着怎么办"
+query = "为什么生怕一点点事情做不好被人批评?做事情,别人告诉了我方法,但身体不会按方法来,非要折腾几遍,才发现别人告诉的方法和口诀,是最高效的;而且最近一个月,睡眠不好,整个白天都是无精打采的,每天活的很丧,知道自己的问题出在哪里,不晓得怎么去做出改变"
 template = (
     "你是一名经验丰富的心理咨询师,专长于认知行为疗法, 以心理咨询师的身份回答以下问题。\n"
     "Human: {}\nAssistant: "
@@ -23,3 +26,4 @@ template = (
 inputs = tokenizer([template.format(query)], return_tensors="pt")
 inputs = inputs.to("cuda")
 generate_ids = model.generate(**inputs, max_new_tokens=256, streamer=streamer)
+```
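For English readers: the prompt template casts the model as an experienced counselor specializing in cognitive behavioral therapy ("你是一名经验丰富的心理咨询师..." roughly reads "You are an experienced psychological counselor specializing in cognitive behavioral therapy; answer the following question in the role of a counselor"). The commit also replaces the short sample query "晚上睡不着怎么办" ("What should I do when I can't sleep at night?") with a longer one describing fear of criticism over small mistakes, a month of poor sleep, and knowing the problem but not how to change.

The usage snippet only streams the reply to stdout via TextStreamer. As a minimal sketch that is not part of the card (it assumes the variables from the snippet above are in scope and that the decoder-only output echoes the prompt tokens first, as transformers' generate does for causal LMs), the reply can also be captured as a string:

```python
# Sketch: recover the reply programmatically instead of only streaming it.
# Assumes tokenizer, inputs, and generate_ids from the usage snippet above.
prompt_length = inputs["input_ids"].size(1)   # number of prompt tokens echoed back
response = tokenizer.batch_decode(
    generate_ids[:, prompt_length:],          # keep only the newly generated tokens
    skip_special_tokens=True,
)[0]
print(response)
```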