PathFinderKR committed on
Commit 539bf0b
• 1 Parent(s): 0b10cf1

Update README.md

Files changed (1)
  1. README.md +173 -97
README.md CHANGED
@@ -2,108 +2,198 @@
2
  language:
3
  - ko
4
  - en
5
- license: mit
6
  library_name: transformers
7
  datasets:
8
  - MarkrAI/KoCommercial-Dataset
9
  ---
10
 
11
- # Model Card for Model ID
12
-
13
- <!-- Provide a quick summary of what the model is/does. -->
14
-
15
-
16
 
17
  ## Model Details
18
 
19
- ### Model Description
20
 
21
- <!-- Provide a longer summary of what this model is. -->
22
 
23
- This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated.
24
 
25
- - **Developed by:** [More Information Needed]
26
- - **Funded by [optional]:** [More Information Needed]
27
- - **Shared by [optional]:** [More Information Needed]
28
- - **Model type:** [More Information Needed]
29
- - **Language(s) (NLP):** [More Information Needed]
30
- - **License:** [More Information Needed]
31
- - **Finetuned from model [optional]:** [More Information Needed]
32
 
33
- ### Model Sources [optional]
34
 
35
- <!-- Provide the basic links for the model. -->
36
-
37
- - **Repository:** [More Information Needed]
38
- - **Paper [optional]:** [More Information Needed]
39
- - **Demo [optional]:** [More Information Needed]
40
 
41
  ## Uses
42
 
43
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
44
-
45
  ### Direct Use
46
 
47
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
48
-
49
- [More Information Needed]
50
-
51
- ### Downstream Use [optional]
52
-
53
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
54
-
55
- [More Information Needed]
56
 
57
  ### Out-of-Scope Use
58
 
59
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
60
-
61
- [More Information Needed]
62
 
63
  ## Bias, Risks, and Limitations
64
 
65
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
66
-
67
- [More Information Needed]
68
-
69
- ### Recommendations
70
 
71
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
72
 
73
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
74
 
75
  ## How to Get Started with the Model
76
 
77
- Use the code below to get started with the model.
78
 
79
- [More Information Needed]
80
 
81
  ## Training Details
82
 
83
  ### Training Data
84
 
85
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
86
-
87
- [More Information Needed]
88
 
89
  ### Training Procedure
90
 
91
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
92
-
93
- #### Preprocessing [optional]
94
-
95
- [More Information Needed]
96
-
97
 
98
  #### Training Hyperparameters
99
 
100
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
101
 
102
- #### Speeds, Sizes, Times [optional]
103
 
104
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
105
-
106
- [More Information Needed]
107
 
108
  ## Evaluation
109
 
@@ -137,68 +227,54 @@ Use the code below to get started with the model.
137
 
138
 
139
 
140
- ## Model Examination [optional]
141
-
142
- <!-- Relevant interpretability work for the model goes here -->
143
-
144
- [More Information Needed]
145
-
146
- ## Environmental Impact
147
-
148
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
149
-
150
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
151
-
152
- - **Hardware Type:** [More Information Needed]
153
- - **Hours used:** [More Information Needed]
154
- - **Cloud Provider:** [More Information Needed]
155
- - **Compute Region:** [More Information Needed]
156
- - **Carbon Emitted:** [More Information Needed]
157
-
158
- ## Technical Specifications [optional]
159
-
160
- ### Model Architecture and Objective
161
-
162
- [More Information Needed]
163
 
164
  ### Compute Infrastructure
165
 
166
- [More Information Needed]
167
-
168
  #### Hardware
169
 
170
- [More Information Needed]
171
 
172
  #### Software
173
 
174
- [More Information Needed]
 
175
 
176
- ## Citation [optional]
177
 
178
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
 
 
179
 
180
- **BibTeX:**
181
 
182
- [More Information Needed]
183
 
184
- **APA:**
185
 
186
- [More Information Needed]
187
 
188
- ## Glossary [optional]
 
 
189
 
190
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
191
 
192
- [More Information Needed]
193
 
194
- ## More Information [optional]
195
 
196
- [More Information Needed]
197
 
198
  ## Model Card Authors [optional]
199
 
200
  [More Information Needed]
201
202
  ## Model Card Contact
203
 
204
  [More Information Needed]
 
2
  language:
3
  - ko
4
  - en
5
+ license: llama3
6
  library_name: transformers
7
  datasets:
8
  - MarkrAI/KoCommercial-Dataset
9
  ---
10
 
11
+ # Waktaverse-Llama-3-KO-8B-Instruct Model Card
12
 
13
  ## Model Details
14
 
15
+ ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/65d6e0640ff5bc0c9b69ddab/Va78DaYtPJU6xr4F6Ca4M.webp)
16
+ Waktaverse-Llama-3-KO-8B-Instruct is a state-of-the-art Korean language model developed by the Waktaverse AI team.
17
+ This large language model is a specialized version of Meta-Llama-3-8B-Instruct, tailored for Korean natural language processing tasks.
18
+ It is designed to handle a variety of complex instructions and generate coherent, contextually appropriate responses.
19
 
20
+ - **Developed by:** Waktaverse AI
21
+ - **Model type:** Large Language Model
22
+ - **Language(s) (NLP):** Korean, English
23
+ - **License:** [Llama3](https://llama.meta.com/llama3/license)
24
+ - **Finetuned from model:** [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
25
 
26
+ ## Model Sources
27
 
28
+ - **Repository:** [GitHub](https://github.com/PathFinderKR/Waktaverse-LLM/tree/main)
29
+ - **Paper:** [More Information Needed]
30
 
 
31
 
32
 
33
  ## Uses
34
 
 
 
35
  ### Direct Use
36
 
37
+ The model can be utilized directly for tasks such as text completion, summarization, and question answering without any fine-tuning.
38
 
39
  ### Out-of-Scope Use
40
 
41
+ This model is not intended for use in high-stakes decision-making scenarios, including medical, legal, or safety-critical applications, due to the risks of relying on automated decisions in those domains.
42
+ Moreover, any attempt to deploy the model in a manner that infringes upon privacy rights or facilitates biased decision-making is strongly discouraged.
 
43
 
44
  ## Bias, Risks, and Limitations
45
 
46
+ While Waktaverse Llama 3 is a robust model, it shares the limitations common to machine learning models, including potential biases in training data, vulnerability to adversarial attacks, and unpredictable behavior in edge cases.
47
+ There is also a risk of cultural and contextual misunderstanding, particularly when the model is applied to languages and contexts it was not specifically trained on.
48
 
 
49
 
 
50
 
51
  ## How to Get Started with the Model
52
 
53
+ You can run conversational inference using the Transformers Auto classes.
54
+ We highly recommend adding a Korean system prompt for better output.
55
+ Adjust the generation hyperparameters as needed.
56
+
57
+ ### Example Usage
58
+
59
+ ```python
60
+ import torch
61
+ from transformers import AutoTokenizer, AutoModelForCausalLM
62
+
63
+ device = (
64
+ "cuda:0" if torch.cuda.is_available() else # Nvidia GPU
65
+ "mps" if torch.backends.mps.is_available() else # Apple Silicon GPU
66
+ "cpu"
67
+ )
68
+
69
+ model_id = "PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct"
70
+
71
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
72
+ model = AutoModelForCausalLM.from_pretrained(
73
+ model_id,
74
+ torch_dtype=torch.bfloat16,
75
+ device_map=device,
76
+ )
77
+
78
+ ################################################################################
79
+ # Generation parameters
80
+ ################################################################################
81
+ num_return_sequences=1
82
+ max_new_tokens=1024
83
+ temperature=0.9
84
+ top_k=40
85
+ top_p=0.9
86
+ repetition_penalty=1.1
87
+
88
+ def generate_response(system, user):
89
+ messages = [
90
+ {"role": "system", "content": system},
91
+ {"role": "user", "content": user}
92
+ ]
93
+ prompt = tokenizer.apply_chat_template(
94
+ messages,
95
+ tokenize=False,
96
+ add_generation_prompt=False
97
+ )
98
+
99
+ input_ids = tokenizer.encode(
100
+ prompt,
101
+ add_special_tokens=True,
102
+ return_tensors="pt"
103
+ ).to(device)
104
+
105
+ outputs = model.generate(
106
+ input_ids=input_ids,
107
+ pad_token_id=tokenizer.eos_token_id,
108
+ num_return_sequences=num_return_sequences,
109
+ max_new_tokens=max_new_tokens,
110
+ do_sample=True,
111
+ temperature=temperature,
112
+ top_k=top_k,
113
+ top_p=top_p,
114
+ repetition_penalty=repetition_penalty
115
+ )
116
+
117
+ return tokenizer.decode(outputs[0], skip_special_tokens=False)
118
+
119
+ system_prompt = "λ‹€μŒ μ§€μ‹œμ‚¬ν•­μ— λŒ€ν•œ 응닡을 μž‘μ„±ν•΄μ£Όμ„Έμš”."
120
+ user_prompt = "ν”Όλ³΄λ‚˜μΉ˜ μˆ˜μ—΄μ— λŒ€ν•΄ μ„€λͺ…ν•΄μ£Όμ„Έμš”."
121
+ response = generate_response(system_prompt, user_prompt)
122
+ print(response)
123
+ ```
124
+
125
+ ### Example Output
126
+
127
+ ```
128
+ <|begin_of_text|><|start_header_id|>system<|end_header_id|>
129
+
130
+ λ‹€μŒ μ§€μ‹œμ‚¬ν•­μ— λŒ€ν•œ 응닡을 μž‘μ„±ν•΄μ£Όμ„Έμš”.<|eot_id|><|start_header_id|>user<|end_header_id|>
131
+
132
+ ν”Όλ³΄λ‚˜μΉ˜ μˆ˜μ—΄μ— λŒ€ν•΄ μ„€λͺ…ν•΄μ£Όμ„Έμš”.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
133
+
134
+ ν”Όλ³΄λ‚˜μΉ˜ μˆ˜μ—΄μ€ μˆ˜ν•™μ—μ„œ κ°€μž₯ 유λͺ…ν•œ μˆ˜μ—΄ 쀑 ν•˜λ‚˜λ‘œ, 0κ³Ό 1둜 μ‹œμž‘ν•˜λŠ” μˆ«μžλ“€μ˜ λͺ¨μž„μž…λ‹ˆλ‹€. 각 μˆ«μžλŠ” 이전 두 개의 숫자의 ν•©μœΌλ‘œ μ •μ˜λ˜λ©°, μ΄λ ‡κ²Œ 계속 λ°˜λ³΅λ©λ‹ˆλ‹€. ν”Όλ³΄λ‚˜μΉ˜ μˆ˜μ—΄μ€ λ¬΄ν•œνžˆ μ»€μ§€λŠ”λ°, 첫 λ²ˆμ§Έμ™€ 두 번째 항이 λͺ¨λ‘ 0일 μˆ˜λ„ μžˆμ§€λ§Œ μΌλ°˜μ μœΌλ‘œλŠ” 첫 번째 항이 1이고 두 번째 항이 1μž…λ‹ˆλ‹€.
135
+
136
+ 예λ₯Ό λ“€μ–΄, 0 + 1 = 1, 1 + 1 = 2, 2 + 1 = 3, 3 + 2 = 5, 5 + 3 = 8, 8 + 5 = 13, 13 + 8 = 21, 21 + 13 = 34 등이 μžˆμŠ΅λ‹ˆλ‹€. 이 μˆ«μžλ“€μ„ ν”Όλ³΄λ‚˜μΉ˜ μˆ˜μ—΄μ΄λΌκ³  ν•©λ‹ˆλ‹€.
137
+
138
+ ν”Όλ³΄λ‚˜μΉ˜ μˆ˜μ—΄μ€ λ‹€λ₯Έ μˆ˜μ—΄λ“€κ³Ό ν•¨κ»˜ μ‚¬μš©λ  λ•Œ 도움이 λ©λ‹ˆλ‹€. 예λ₯Ό λ“€μ–΄, 금육 μ‹œμž₯μ—μ„œλŠ” 금리 수읡λ₯ μ„ λ‚˜νƒ€λ‚΄κΈ° μœ„ν•΄ 이 μˆ˜μ—΄μ΄ μ‚¬μš©λ©λ‹ˆλ‹€. λ˜ν•œ 컴퓨터 κ³Όν•™κ³Ό 컴퓨터 κ³Όν•™μ—μ„œλ„ μ’…μ’… 찾을 수 μžˆμŠ΅λ‹ˆλ‹€. ν”Όλ³΄λ‚˜μΉ˜ μˆ˜μ—΄μ€ 맀우 λ³΅μž‘ν•˜λ©° λ§Žμ€ μˆ«μžκ°€ λ‚˜μ˜€λ―€λ‘œ 일반적인 μˆ˜μ—΄μ²˜λŸΌ μ‰½κ²Œ ꡬ할 수 μ—†μŠ΅λ‹ˆλ‹€. 이 λ•Œλ¬Έμ— ν”Όλ³΄λ‚˜μΉ˜ μˆ˜μ—΄μ€ λŒ€μˆ˜μ  ν•¨μˆ˜μ™€ 관련이 있으며 μˆ˜ν•™μžλ“€μ€ 이λ₯Ό μ—°κ΅¬ν•˜κ³  κ³„μ‚°ν•˜κΈ° μœ„ν•΄ λ‹€μ–‘ν•œ μ•Œκ³ λ¦¬μ¦˜μ„ κ°œλ°œν–ˆμŠ΅λ‹ˆλ‹€.
139
+
140
+ 참고 자료: https://en.wikipedia.org/wiki/Fibonacci_sequence#Properties.<|eot_id|>
141
+ ```
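The generation parameters above control how the next token is drawn. As a rough, model-independent illustration of how `temperature`, `top_k`, and `top_p` interact, here is a minimal sketch of the filtering pipeline on a hypothetical four-token logit distribution (toy values, not taken from this model):

```python
import math

def filter_logits(logits, temperature=0.9, top_k=40, top_p=0.9):
    """Toy sketch: temperature-scale logits, keep the top-k, then apply
    nucleus (top-p) filtering. Returns token -> probability for survivors."""
    # Temperature scaling: lower temperature sharpens the distribution.
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    # Top-k: keep only the k highest-scoring tokens.
    kept = sorted(scaled.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Softmax over the survivors (subtract the max for numerical stability).
    m = max(v for _, v in kept)
    exps = {tok: math.exp(v - m) for tok, v in kept}
    z = sum(exps.values())
    probs = {tok: e / z for tok, e in exps.items()}
    # Top-p (nucleus): smallest prefix whose cumulative mass reaches top_p.
    nucleus, cum = {}, 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        nucleus[tok] = p
        cum += p
        if cum >= top_p:
            break
    z = sum(nucleus.values())
    return {tok: p / z for tok, p in nucleus.items()}

# Hypothetical logits for four candidate tokens.
dist = filter_logits({"the": 5.0, "a": 4.0, "an": 1.0, "zebra": -2.0})
```

With these toy logits, the two low-probability tokens fall outside the 0.9 nucleus and can never be sampled, while the remaining mass is renormalized over the survivors.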
142
+
143
 
 
144
 
145
  ## Training Details
146
 
147
  ### Training Data
148
 
149
+ The model is trained on the [MarkrAI/KoCommercial-Dataset](https://huggingface.co/datasets/MarkrAI/KoCommercial-Dataset), which consists of various commercial texts in Korean.
 
 
150
 
151
  ### Training Procedure
152
 
153
+ The model was fine-tuned with LoRA (Low-Rank Adaptation) for computational efficiency; about 0.02 billion parameters (0.26% of total parameters) were trained.
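The 0.26% figure can be sanity-checked with a back-of-the-envelope count, assuming the published Meta-Llama-3-8B dimensions (32 layers, hidden size 4096, 1024-dim key/value projections, intermediate size 14336, roughly 8.03B total parameters) and LoRA rank r=8 on the seven target modules listed in the hyperparameters below:

```python
# Back-of-the-envelope LoRA parameter count, assuming Llama-3-8B dimensions.
# Each adapted linear (in_dim -> out_dim) adds r * (in_dim + out_dim) parameters.
HIDDEN, KV, INTER, LAYERS, R = 4096, 1024, 14336, 32, 8

modules = {
    "q_proj":    (HIDDEN, HIDDEN),
    "k_proj":    (HIDDEN, KV),
    "v_proj":    (HIDDEN, KV),
    "o_proj":    (HIDDEN, HIDDEN),
    "gate_proj": (HIDDEN, INTER),
    "up_proj":   (HIDDEN, INTER),
    "down_proj": (INTER, HIDDEN),
}

per_layer = sum(R * (i + o) for i, o in modules.values())
trainable = per_layer * LAYERS       # adapter parameters across all layers
fraction = trainable / 8.03e9        # vs ~8.03B total parameters
print(f"{trainable:,} trainable ({fraction:.2%})")  # → 20,971,520 trainable (0.26%)
```

This lands on ~0.021B trainable parameters, consistent with the figures stated above.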
154
 
155
  #### Training Hyperparameters
156
 
157
+ ```python
158
+ ################################################################################
159
+ # bitsandbytes parameters
160
+ ################################################################################
161
+ load_in_4bit=True
162
+ bnb_4bit_compute_dtype=torch_dtype
163
+ bnb_4bit_quant_type="nf4"
164
+ bnb_4bit_use_double_quant=False
165
+
166
+ ################################################################################
167
+ # LoRA parameters
168
+ ################################################################################
169
+ task_type="CAUSAL_LM"
170
+ target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
171
+ r=8
172
+ lora_alpha=16
173
+ lora_dropout=0.05
174
+ bias="none"
175
+
176
+ ################################################################################
177
+ # TrainingArguments parameters
178
+ ################################################################################
179
+ num_train_epochs=1
180
+ per_device_train_batch_size=1
181
+ per_device_eval_batch_size=2
182
+ gradient_accumulation_steps=4
183
+ gradient_checkpointing=True
184
+ learning_rate=2e-5
185
+ lr_scheduler_type="cosine"
186
+ warmup_ratio=0.1
187
+ weight_decay=0.1
188
+
189
+ ################################################################################
190
+ # SFT parameters
191
+ ################################################################################
192
+ max_seq_length=1024
193
+ packing=True
194
+ ```
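As a sketch of what these arguments imply: with `per_device_train_batch_size=1` and `gradient_accumulation_steps=4`, the effective batch size is 4, and `lr_scheduler_type="cosine"` with `warmup_ratio=0.1` gives linear warmup over the first 10% of steps followed by cosine decay. A simplified stand-alone version of that schedule (not the exact transformers implementation):

```python
import math

# Effective batch size = per-device batch * accumulation steps (single GPU).
effective_batch = 1 * 4

def lr_at_step(step, total_steps, peak_lr=2e-5, warmup_ratio=0.1):
    """Linear warmup to peak_lr, then cosine decay to 0 (simplified sketch)."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    # Progress through the decay phase, in [0, 1].
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

The learning rate peaks at 2e-5 exactly when warmup ends and decays smoothly to zero by the final step.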
195
 
 
196
197
 
198
  ## Evaluation
199
 
 
227
 
228
 
229
 
230
+ ## Technical Specifications
231
 
232
  ### Compute Infrastructure
233
 
 
 
234
  #### Hardware
235
 
236
+ - **GPU:** NVIDIA GeForce RTX 4080 SUPER
237
 
238
  #### Software
239
 
240
+ - **Operating System:** Linux
241
+ - **Deep Learning Framework:** Hugging Face Transformers, PyTorch
242
 
243
+ ### Training Resources
244
 
245
+ - **Training time:** 32 hours
246
+ - **VRAM usage:** 12.8 GB
247
+ - **GPU power usage:** 300 W
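A rough GPU-only energy estimate follows from the figures above (this ignores CPU, RAM, and cooling overhead):

```python
# GPU-only energy estimate for the training run described above.
power_w = 300    # GPU power usage
hours = 32       # training time
energy_kwh = power_w * hours / 1000
print(f"{energy_kwh} kWh")  # → 9.6 kWh
```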
248
 
 
249
 
 
250
 
251
+ ## Citation
252
 
253
+ **Waktaverse-Llama-3**
254
 
255
+ ```
256
+ TBD
257
+ ```
258
 
259
+ **Llama-3**
260
 
261
+ ```
262
+ @article{llama3modelcard,
263
+ title={Llama 3 Model Card},
264
+ author={AI@Meta},
265
+ year={2024},
266
+ url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
267
+ }
268
+ ```
269
 
 
270
 
 
271
 
272
  ## Model Card Authors [optional]
273
 
274
  [More Information Needed]
275
 
276
+
277
+
278
  ## Model Card Contact
279
 
280
  [More Information Needed]