tamdiep106 committed fc08875 (parent 41f0ab0): Update README.md
language:
- ja
---
 
# Usage:

To use in code:

```python
import torch
import peft
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig

tokenizer = LlamaTokenizer.from_pretrained(
    "decapoda-research/llama-7b-hf"
)

model = LlamaForCausalLM.from_pretrained(
    "tamdiep106/alpaca_lora_ja_en_emb-7b",
    load_in_8bit=False,
    device_map="auto",
    torch_dtype=torch.float16
)

tokenizer.pad_token_id = 0  # unk token; keep it different from the eos token
tokenizer.bos_token_id = 1
tokenizer.eos_token_id = 2
```
To try out this model, use this Colab notebook:
https://colab.research.google.com/drive/1kVcN0L_n5lwhFlIqDkNbLNURboifgbBO?usp=sharing

Japanese prompt:

```python
instruction_input_JP = 'あなたはアシスタントです。以下に、タスクを説明する指示と、さらなるコンテキストを提供する入力を組み合わせます。 リクエストを適切に完了するレスポンスを作成します。'
instruction_no_input_JP = 'あなたはアシスタントです。以下はタスクを説明する指示です。 リクエストを適切に完了するレスポンスを作成します。'

prompt = """{}
### Instruction:
{}

### Response:"""

if input == '':
    prompt = prompt.format(instruction_no_input_JP, instruction)
else:
    # When further context is given, use a template with an "### input:" section
    prompt = """{}
### Instruction:
{}

### input:
{}

### Response:""".format(instruction_input_JP, instruction, input)
```
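As a quick sanity check, the template logic can be exercised without loading the model. The instruction value below is a made-up example (in the Colab notebook these values come from `#@param` fields):

```python
instruction_no_input_JP = 'あなたはアシスタントです。以下はタスクを説明する指示です。 リクエストを適切に完了するレスポンスを作成します。'

instruction = "日本の首都はどこですか?"  # made-up example instruction
user_input = ""  # empty: no further context for this example

prompt = """{}
### Instruction:
{}

### Response:"""

if user_input == '':
    prompt = prompt.format(instruction_no_input_JP, instruction)

print(prompt)  # system text, then the instruction, ending at "### Response:"
```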

English prompt:

```python
instruction_input_EN = 'You are an Assistant, below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.'
instruction_no_input_EN = 'You are an Assistant, below is an instruction that describes a task. Write a response that appropriately completes the request.'

prompt = """{}
### Instruction:
{}

### Response:"""

instruction = "4 + 4 = ?" #@param {type:"string"}
input = "" #@param {type:"string"}

if input == '':
    prompt = prompt.format(instruction_no_input_EN, instruction)
else:
    # When further context is given, use a template with an "### input:" section
    prompt = """{}
### Instruction:
{}

### input:
{}

### Response:""".format(instruction_input_EN, instruction, input)
```

Use this code to decode the model's output; `generation_output` is the value returned by `model.generate(..., return_dict_in_generate=True)`:

```python
for s in generation_output.sequences:
    result = tokenizer.decode(s).strip()
    result = result.replace(prompt, '')
    result = result.replace("<s>", "")
    result = result.replace("</s>", "")
    if result == '':
        print('No output')
        print(prompt)
        print(result)
        continue
    print('\nResponse: ')
    print(result)
```
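The string cleanup in that loop can be checked on its own; the decoded string below is fabricated for illustration, no model involved:

```python
# Fabricated example of what tokenizer.decode(s) might return:
# BOS marker, the echoed prompt, the answer, then an EOS marker.
prompt = "### Instruction:\n4 + 4 = ?\n\n### Response:"
decoded = "<s>" + prompt + " 8</s>"

result = decoded.strip()
result = result.replace(prompt, '')   # drop the echoed prompt
result = result.replace("<s>", "")    # drop the BOS marker
result = result.replace("</s>", "")   # drop the EOS marker
result = result.strip()
print(result)  # -> 8
```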

# Training: