Update README.md
README.md

---
library_name: peft
base_model: google/gemma-1.1-7b-it
language:
- ko
- en
tags:
- translation
- gemma
---

# Model Card for gemma-1.1-7b-it-translation-ko-sft-qlora

A Korean-to-English translation adapter for google/gemma-1.1-7b-it, fine-tuned with QLoRA (PEFT) on an AI Hub Korean-English translation dataset.

## Model Details

### Model Description
- **Developed by:** Kang Seok Ju
- **Contact:** brildev7@gmail.com

## Training Details

### Training Data
https://huggingface.co/datasets/traintogpb/aihub-koen-translation-integrated-tiny-100k
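
As a minimal sketch of how the pairs can be inspected and rendered into the prompt format used below (assuming the dataset exposes `ko` and `en` text columns; adjust the names if they differ):

```python
from datasets import load_dataset

# load the ko-en translation pairs used for SFT
dataset = load_dataset("traintogpb/aihub-koen-translation-integrated-tiny-100k")

# render one training example into the same prompt layout as the inference examples
sft_template = """Translate the following into English:
{}

output:
{}"""

example = dataset["train"][0]
print(sft_template.format(example["ko"], example["en"]))  # assumed column names
```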

## Inference Examples
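
The example below loads the base model in 4-bit NF4 quantization, attaches the QLoRA adapter from this repository with PEFT, and translates two Korean passages into English.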

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

model_id = "google/gemma-1.1-7b-it"
peft_model_id = "brildev7/gemma-1.1-7b-it-translation-ko-sft-qlora"

# load the base model in 4-bit NF4 quantization
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_quant_type="nf4"
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    attn_implementation="flash_attention_2",  # requires flash-attn; drop this argument if it is not installed
)

# attach the Korean-to-English translation adapter
model = PeftModel.from_pretrained(model, peft_model_id)

tokenizer = AutoTokenizer.from_pretrained(peft_model_id)
tokenizer.pad_token_id = tokenizer.eos_token_id

# example: news passage about a total solar eclipse over North America
prompt_template = """Translate the following into English:
{}

output:
"""
passage = "달이 해를 완전히 가리는 '개기일식'이 북미 대륙에서 7년 만에 관측되면서 전 세계 수억명의 관심이 집중됐다. 멕시코에서 시작해 캐나다까지 북미를 가로지르며 나타난 '우주쇼'를 보기 위해 사람들은 하던 일을 멈추고 하늘을 올려다봤다. 개기일식으로 창출된 경제효과도 수조원에 이른다는 분석이 나온다."
prompt = prompt_template.format(passage)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs,
                         max_new_tokens=1024,
                         temperature=0.2,
                         top_p=0.95,
                         do_sample=True,
                         use_cache=False)
print(tokenizer.decode(outputs[0]))
# - 7 years after the last solar eclipse, when the moon completely covered the sun was observed in North America, tens of millions of people around the world focused their attention. People stopped what they were doing and looked up to watch the 'cosmic show' that appeared across North America, from Mexico to Canada. An analysis showed that the economic effect created by the lunar eclipse was also in the hundreds of billions of won.

# example: weather report on a yellow dust (hwangsa) day
prompt_template = """Translate the following into English:
{}

output:
"""
passage = "이틀째 황사 현상이 이어지며 시야가 흐린 하루였습니다. 오늘도 서울 도심은 황사에 갇혀 종일 뿌옇고 누런 빛까지 띠었습니다. 내일도 대기 중에 황사가 남아 미세먼지 농도가 높게 나타나겠습니다."
prompt = prompt_template.format(passage)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs,
                         max_new_tokens=1024,
                         temperature=1,
                         top_p=0.95,
                         do_sample=True,
                         use_cache=False)
print(tokenizer.decode(outputs[0]))
# - On the second day of the yellow dust, the day was misty with the continuous phenomenon. On this day, downtown Seoul was covered with yellow dust and covered with yellow dust throughout the day. Yellow dust remained from tomorrow, so the fine dust concentration would be high.
```
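
To deploy without loading the adapter at runtime, the LoRA weights can also be merged into the base model. A minimal sketch, assuming enough memory for the full-precision fp16 weights; the output directory name is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# merge the adapter into an unquantized copy of the base model
base = AutoModelForCausalLM.from_pretrained("google/gemma-1.1-7b-it", torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base, "brildev7/gemma-1.1-7b-it-translation-ko-sft-qlora").merge_and_unload()

merged.save_pretrained("gemma-1.1-7b-it-translation-ko-merged")  # hypothetical output directory
```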