---
language:
- en
- ko
license: llama3
library_name: transformers
datasets:
- legacy-datasets/wikipedia
pipeline_tag: text-generation
---

## Model Details

This model was continually pretrained from [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) using English and Korean datasets.
The goal is to enhance its proficiency in Korean while maintaining the English capabilities of the original model.

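You can load the model with the `transformers` library. The snippet below is a minimal sketch rather than an official example; the repository id is taken from the evaluation table further down.

```python
# Minimal sketch: load the base model and generate a short continuation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tesser-ai/Tesser-Llama-3-Ko-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("대한민국의 수도는", return_tensors="pt").to(model.device)  # "The capital of South Korea is"
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Keep in mind that this is a base model, so it continues text rather than following instructions.
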
### Datasets

We sampled 16B tokens from the following sources for training:

<table>
  <tr>
    <td><strong>Sources</strong></td>
    <td><strong>Tokens (Llama-3-8B tokenizer)</strong></td>
  </tr>
  <tr>
    <td>AI-Hub</td>
    <td>9.2B</td>
  </tr>
  <tr>
    <td>Modu Corpus</td>
    <td>5.8B</td>
  </tr>
  <tr>
    <td>Wikipedia</td>
    <td>5.4B</td>
  </tr>
</table>

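The token counts above are measured with the Llama-3-8B tokenizer. As an illustration only (the actual counting script is not part of this card), counts like these can be computed as follows:

```python
# Illustrative sketch: count corpus tokens with the Llama-3-8B tokenizer.
# The actual counting pipeline behind the table above is not published here.
from transformers import AutoTokenizer

# Requires access to the gated meta-llama repository.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

def count_tokens(texts):
    """Total number of tokens across an iterable of strings."""
    return sum(len(tokenizer(t, add_special_tokens=False)["input_ids"]) for t in texts)

print(count_tokens(["위키백과 문서의 한 단락입니다.", "A paragraph of English text."]))
```
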
### Hyperparameters

<table>
  <tr>
    <td><strong>Learning rate</strong></td>
    <td><strong>Optimizer</strong></td>
    <td><strong>Betas</strong></td>
    <td><strong>Weight decay</strong></td>
    <td><strong>Warm-up ratio</strong></td>
  </tr>
  <tr>
    <td>3e-5</td>
    <td>AdamW</td>
    <td>(0.9, 0.95)</td>
    <td>0.1</td>
    <td>0.05</td>
  </tr>
</table>

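For reference, these settings correspond to an AdamW configuration along the lines of the sketch below. The total step count and the shape of the post-warm-up schedule are assumptions, since they are not specified in this card.

```python
# Sketch of an AdamW + warm-up setup matching the hyperparameters above.
import torch
from torch import nn
from transformers import get_cosine_schedule_with_warmup  # decay shape is an assumption

model = nn.Linear(16, 16)  # stand-in module; in practice, the Llama-3-8B model being pretrained

total_steps = 10_000  # hypothetical; depends on the global batch size and the 16B-token budget
optimizer = torch.optim.AdamW(
    model.parameters(), lr=3e-5, betas=(0.9, 0.95), weight_decay=0.1
)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.05 * total_steps),  # warm-up ratio of 0.05
    num_training_steps=total_steps,
)
```
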
## Intended Use

This model has not been fine-tuned, so you will need to fine-tune it on your own dataset before using it for downstream tasks.

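As a rough starting point, a causal-language-modeling fine-tuning run with the `transformers` Trainer might look like the sketch below. The dataset, sequence length, and training arguments are placeholders, not recommendations from this card.

```python
# Sketch of supervised fine-tuning with the transformers Trainer.
# Dataset, sequence length, and training arguments are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "tesser-ai/Tesser-Llama-3-Ko-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama-3 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Replace with your own dataset; wikitext is used here only to keep the sketch runnable.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./ft-out", per_device_train_batch_size=1,
                           num_train_epochs=1, learning_rate=2e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```
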
## Evaluations

We evaluated this model on both English and Korean benchmarks and compared it with similar models that were also continually pretrained from [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B).
In each column, the best score is shown in bold and the second-best score is underlined.

<table>
  <tr>
    <td></td>
    <td colspan="4"><strong>English</strong></td>
    <td colspan="3"><strong>Korean</strong></td>
  </tr>
  <tr>
    <td><strong>Model</strong></td>
    <td><strong>MMLU (5 shots)</strong></td>
    <td><strong>HellaSwag (10 shots)</strong></td>
    <td><strong>GSM8K (8 shots, CoT)</strong></td>
    <td><strong>BBH (3 shots, CoT)</strong></td>
    <td><strong>KMMLU (5 shots)</strong></td>
    <td><strong>HAE-RAE (5 shots)</strong></td>
    <td><strong>KoBEST (5 shots)</strong></td>
  </tr>
  <tr>
    <td>meta-llama/Meta-Llama-3-8B</td>
    <td><strong>65.1</strong></td>
    <td><strong>82.1</strong></td>
    <td><strong>52.0</strong></td>
    <td><strong>61.9</strong></td>
    <td>40.2</td>
    <td>61.1</td>
    <td>69.2</td>
  </tr>
  <tr>
    <td>saltlux/Ko-Llama3-Luxia-8B</td>
    <td>57.1</td>
    <td>77.1</td>
    <td>32.3</td>
    <td>51.8</td>
    <td>39.4</td>
    <td>69.2</td>
    <td>71.9</td>
  </tr>
  <tr>
    <td>beomi/Llama-3-Open-Ko-8B</td>
    <td>56.2</td>
    <td>77.4</td>
    <td>31.5</td>
    <td>46.8</td>
    <td>40.3</td>
    <td>68.1</td>
    <td><u>72.1</u></td>
  </tr>
  <tr>
    <td>beomi/Llama-3-KoEn-8B</td>
    <td>52.5</td>
    <td>77.7</td>
    <td>21.2</td>
    <td>43.2</td>
    <td><u>40.8</u></td>
    <td><u>71.3</u></td>
    <td><strong>73.8</strong></td>
  </tr>
  <tr>
    <td><strong>tesser-ai/Tesser-Llama-3-Ko-8B</strong></td>
    <td><u>60.5</u></td>
    <td><u>79.8</u></td>
    <td><u>40.3</u></td>
    <td><u>56.3</u></td>
    <td><strong>42.5</strong></td>
    <td><strong>72.1</strong></td>
    <td><strong>73.8</strong></td>
  </tr>
</table>

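The evaluation harness and exact settings behind these numbers are not documented in this card. With EleutherAI's lm-evaluation-harness, a comparable run could be set up roughly as in the sketch below; task names and availability depend on the harness version, and the per-benchmark shot counts follow the table headers.

```python
# Sketch of reproducing part of the evaluation with EleutherAI's lm-evaluation-harness.
# This is an illustration, not the script used to produce the table above.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf",
    model_args="pretrained=tesser-ai/Tesser-Llama-3-Ko-8B,dtype=bfloat16",
    tasks=["mmlu", "hellaswag", "kmmlu"],  # task names depend on the harness version
    num_fewshot=5,                          # the table uses different shot counts per benchmark
    batch_size=8,
)
print(results["results"])
```
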
## Limitations

We trained this model with a context length of 4k tokens due to resource limitations and to maximize training speed.
However, the original model was trained with a context length of 8k, so an 8k context length may still work well on downstream tasks.

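If you prefer to stay within the 4k context used during continued pretraining, one option is to truncate tokenized inputs accordingly; the snippet below is only an illustration.

```python
# Sketch: cap inputs at the 4k-token context used during continued pretraining.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tesser-ai/Tesser-Llama-3-Ko-8B")
inputs = tokenizer(
    "아주 긴 문서 " * 2000,  # placeholder long text
    truncation=True,
    max_length=4096,        # training context length; the base architecture supports 8k
    return_tensors="pt",
)
print(inputs["input_ids"].shape)
```
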
## License

This model follows the original [Llama-3 license](https://llama.meta.com/llama3/license/).