---
language:
- ko
- en
license: llama3
library_name: transformers
tags:
- llama
- llama-3
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- MarkrAI/KoCommercial-Dataset
---
# Waktaverse-Llama-3-KO-8B-Instruct Model Card
## Model Details
![image/webp](https://cdn-uploads.huggingface.co/production/uploads/65d6e0640ff5bc0c9b69ddab/Va78DaYtPJU6xr4F6Ca4M.webp)
Waktaverse-Llama-3-KO-8B-Instruct is a Korean language model developed by the Waktaverse AI team.
This large language model is a specialized version of Meta-Llama-3-8B-Instruct, tailored for Korean natural language processing tasks.
It is designed to handle a variety of complex instructions and to generate coherent, contextually appropriate responses.
- **Developed by:** Waktaverse AI
- **Model type:** Large Language Model
- **Language(s) (NLP):** Korean, English
- **License:** [Llama3](https://llama.meta.com/llama3/license)
- **Finetuned from model:** [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
- **Tokenizer Source:** [saltlux/Ko-Llama3-Luxia-8B](https://huggingface.co/saltlux/Ko-Llama3-Luxia-8B)
## Model Sources
- **Repository:** [GitHub](https://github.com/PathFinderKR/Waktaverse-LLM/tree/main)
- **Paper:** [More Information Needed]
## Uses
### Direct Use
The model can be utilized directly for tasks such as text completion, summarization, and question answering without any fine-tuning.
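As a quick illustration (a sketch added here, not an official recipe from the authors), the model can be loaded through the standard `transformers` text-generation pipeline; for instruction-style use, the chat format shown in the "How to Get Started" section below is recommended:
```python
# Minimal sketch: direct use via the transformers text-generation pipeline.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Plain text completion; see "How to Get Started" below for the full chat-formatted prompt.
# The prompt is the card's own example: "Please explain the Fibonacci sequence."
output = pipe("피보나치 수열에 대해 설명해주세요.", max_new_tokens=256, do_sample=True, temperature=0.9, top_p=0.9)
print(output[0]["generated_text"])
```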
### Out-of-Scope Use
This model is not intended for high-stakes decision-making, including medical, legal, or safety-critical applications, due to the risks of relying on automated decisions.
Moreover, any attempt to deploy the model in a manner that infringes on privacy rights or facilitates biased decision-making is strongly discouraged.
## Bias, Risks, and Limitations
While Waktaverse Llama 3 is a robust model, it shares the common limitations of machine learning models, including potential biases in the training data, vulnerability to adversarial attacks, and unpredictable behavior in edge cases.
There is also a risk of cultural and contextual misunderstanding, particularly when the model is applied to languages and contexts it was not specifically trained on.
## How to Get Started with the Model
You can run conversational inference using the Transformers Auto classes.
We highly recommend adding a Korean system prompt for better output.
Adjust the generation hyperparameters as needed.
### Example Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Pick the best available device
device = (
    "cuda:0" if torch.cuda.is_available() else       # Nvidia GPU
    "mps" if torch.backends.mps.is_available() else  # Apple Silicon GPU
    "cpu"
)

model_id = "PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map=device,
)

################################################################################
# Generation parameters
################################################################################
num_return_sequences=1
max_new_tokens=1024
temperature=0.9
top_k=0  # not recommended
top_p=0.9
repetition_penalty=1.1

def prompt_template(system, user):
    # Llama 3 Instruct chat format with system and user turns
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

def generate_response(system, user):
    prompt = prompt_template(system, user)
    input_ids = tokenizer.encode(
        prompt,
        add_special_tokens=False,
        return_tensors="pt"
    ).to(device)
    outputs = model.generate(
        input_ids=input_ids,
        pad_token_id=tokenizer.eos_token_id,
        num_return_sequences=num_return_sequences,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=temperature,
        top_k=top_k,
        top_p=top_p,
        repetition_penalty=repetition_penalty
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=False)

system_prompt = "다음 지시사항에 대한 응답을 작성해주세요."  # "Please write a response to the following instruction."
user_prompt = "피보나치 수열에 대해 설명해주세요."           # "Please explain the Fibonacci sequence."

response = generate_response(system_prompt, user_prompt)
print(response)
```
### Example Output
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

다음 지시사항에 대한 응답을 작성해주세요.<|eot_id|><|start_header_id|>user<|end_header_id|>

피보나치 수열에 대해 설명해주세요.<|eot_id|><|start_header_id|>assistant<|end_header_id|>

피보나치 수열은 0과 1로 시작하며, 각 항이 이전 두 항의 합으로 계산되는 수열입니다. 이 수열에는 무한히 많은 숫자가 포함되어 있으며, 첫 번째 몇 개의 항은 다음과 같습니다:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 985, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418,...

피보나치 수열은 수학적 구조와 재귀 관계를 가지고 있습니다. 예를 들어, 피보나치 수열의 n번째 항은 (n-1)번째와 (n-2)번째 항의 합입니다.

피보나치 수열은 여러 분야에서 사용됩니다. 예를 들어, 형태의 배치 문제에서는 피보나치 수열이 사용될 수 있습니다. 또한, 기하학에서 피보나치 수열은 점의 배열에 사용될 수 있습니다.

피보나치 수열은 수학자 레오나르도 피보나치의 이름을 따서 명명되었습니다. 그는 이 수열을 처음 발견하고 기록했습니다. 피보나치 수열은 유럽에서 인기를 끌었으며, 다른 문화에서도 독특한 형태로 나타납니다.

피보나치 수열은 컴퓨터 프로그램과 알고리즘에도 적용될 수 있습니다. 예를 들어, 피보나치 수열을 계산하는 알고리즘이 있습니다. 이러한 알고리즘은 현재까지 매우 효율적이며, 대규모 계산에 사용됩니다. 피보나치 수열은 수학적 구조와 재귀 관계를 가지고 있기 때문에 프로그래밍 언어에서도 자주 사용됩니다.

요약하면, 피보나치 수열은 수학적 구조와 재귀 관계를 가진 수열로, 다양한 분야에서 사용되고 있습니다. 이 수열은 컴퓨터 프로그램과 알고리즘에도 적용될 수 있으며, 대규모 계산에 사용됩니다. 피보나치 수열은 수학자 레오나르도 피보나치의 이름을 따서 명명되었으며, 그의 업적으로 유명합니다.<|eot_id|>
```
(English gist of the response: the model defines the Fibonacci sequence as starting from 0 and 1 with each term being the sum of the previous two, lists its first terms, states the recurrence for the n-th term, mentions applications in arrangement problems, geometry, and computer algorithms, and notes that the sequence is named after the mathematician Leonardo Fibonacci.)
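For reference, the manual `prompt_template` above reproduces the Llama 3 Instruct chat format. Assuming the bundled tokenizer keeps that chat template, the same prompt can be built with `tokenizer.apply_chat_template`; this sketch reuses the objects and generation parameters from the Example Usage snippet:
```python
# Sketch: building the same prompt with the tokenizer's chat template
# (assumes the tokenizer retains the Llama 3 Instruct chat template).
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # appends the assistant header, as prompt_template() does
    return_tensors="pt",
).to(device)
outputs = model.generate(
    input_ids=input_ids,
    pad_token_id=tokenizer.eos_token_id,
    max_new_tokens=max_new_tokens,
    do_sample=True,
    temperature=temperature,
    top_p=top_p,
    repetition_penalty=repetition_penalty,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```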
## Training Details
### Training Data
The model is trained on the [MarkrAI/KoCommercial-Dataset](https://huggingface.co/datasets/MarkrAI/KoCommercial-Dataset), which consists of various commercial texts in Korean.
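For illustration only (the exact preprocessing and prompt formatting are not described here), the dataset can be pulled with the `datasets` library; the split name is an assumption to check against the dataset card:
```python
# Sketch: loading the SFT dataset from the Hugging Face Hub.
from datasets import load_dataset

dataset = load_dataset("MarkrAI/KoCommercial-Dataset", split="train")  # split name assumed
print(dataset)      # inspect the available columns
print(dataset[0])   # first training example
```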
### Training Procedure
The model was trained with LoRA for computational efficiency; about 0.04 billion parameters (0.51% of the total) were trained.
#### Training Hyperparameters
```python
################################################################################
# bitsandbytes parameters
################################################################################
load_in_4bit=True
bnb_4bit_compute_dtype=torch_dtype
bnb_4bit_quant_type="nf4"
bnb_4bit_use_double_quant=False
################################################################################
# LoRA parameters
################################################################################
task_type="CAUSAL_LM"
target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
r=16
lora_alpha=32
lora_dropout=0.1
bias="none"
################################################################################
# TrainingArguments parameters
################################################################################
num_train_epochs=2
per_device_train_batch_size=1
gradient_accumulation_steps=1
gradient_checkpointing=True
learning_rate=2e-5
lr_scheduler_type="cosine"
warmup_ratio=0.1
optim="adamw_torch"
weight_decay=0.01
################################################################################
# SFT parameters
################################################################################
max_seq_length=1024
packing=True
```
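The listing above suggests a QLoRA-style setup: the base model held in 4-bit NF4 weights while LoRA adapters on the attention and MLP projections are trained. The sketch below shows how these values would typically be wired together with `bitsandbytes`, `peft`, and `transformers`; it is a reconstruction under those assumptions (including bfloat16 compute and a placeholder output path), not the exact training script.
```python
# Sketch: assembling the listed hyperparameters into a QLoRA fine-tuning setup.
# The compute dtype (bfloat16) is assumed to match the inference dtype used above.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    bias="none",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # trainable (LoRA) params: roughly 0.04B, matching the figure above

training_args = TrainingArguments(
    output_dir="waktaverse-llama-3-ko-8b-instruct",  # placeholder path
    num_train_epochs=2,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=1,
    gradient_checkpointing=True,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",
    weight_decay=0.01,
    bf16=True,  # assumption, consistent with the bfloat16 compute dtype
)
# Supervised fine-tuning itself (max_seq_length=1024, packing=True) would then be run
# with a trainer such as trl.SFTTrainer on the KoCommercial dataset.
```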
## Evaluation
### Metrics
- **Ko-HellaSwag:**
- **Ko-MMLU:**
- **Ko-Arc:**
- **Ko-Truthful QA:**
- **Ko-CommonGen V2:**
### Results
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Waktaverse Llama 3 8B</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
</tr>
<tr>
<td>Ko-HellaSwag:
</td>
<td>0
</td>
<td>0
</td>
</tr>
<tr>
<td>Ko-MMLU:
</td>
<td>0
</td>
<td>0
</td>
</tr>
<tr>
<td>Ko-Arc:
</td>
<td>0
</td>
<td>0
</td>
</tr>
<tr>
<td>Ko-Truthful QA:
</td>
<td>0
</td>
<td>0
</td>
</tr>
<tr>
<td>Ko-CommonGen V2:
</td>
<td>0
</td>
<td>0
</td>
</tr>
</table>
## Technical Specifications
### Compute Infrastructure
#### Hardware
- **GPU:** NVIDIA GeForce RTX 4080 SUPER
#### Software
- **Operating System:** Linux
- **Deep Learning Framework:** Hugging Face Transformers, PyTorch
### Training Details
- **Training time:** 32 hours
- **VRAM usage:** 12.8 GB
- **GPU power usage:** 300 W
## Citation
**Waktaverse-Llama-3**
```
@article{waktaversellama3modelcard,
title={Waktaverse Llama 3 Model Card},
author={AI@Waktaverse},
year={2024},
  url = {https://huggingface.co/PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct}
}
```
**Llama-3**
```
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
**Ko-Llama3-Luxia-8B**
```
@article{kollama3luxiamodelcard,
title={Ko Llama 3 Luxia Model Card},
author={AILabs@Saltlux},
year={2024},
url={https://huggingface.co/saltlux/Ko-Llama3-Luxia-8B/blob/main/README.md}
}
```
## Model Card Authors
[PathFinderKR](https://github.com/PathFinderKR)