---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---
**Model Developers** HyunseokLee, TaeyoungKim (KAIST alinlab, omnious.ai)
**Input** Models input text only.

**Output** Models generate text only.
**Model Architecture** ko-en-llama2-13b is an auto-regressive language model based on the LLaMA2 transformer architecture.
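Since the metadata declares `library_name: transformers` and `pipeline_tag: text-generation`, the model should load through the standard causal-LM classes. The sketch below is a minimal, hedged example; the hub repository id is an assumption, not something stated in this card.

```python
# Minimal usage sketch, assuming a standard Llama-2-style causal LM
# hosted on the Hugging Face Hub. The repository id is hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "hyunseoki/ko-en-llama2-13b"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
# Korean prompt: "The capital of South Korea is"
print(generator("대한민국의 수도는", max_new_tokens=32)[0]["generated_text"])
```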
**Base Model** Llama-2-13B
**Training Dataset** Open datasets: wiki and AIHub (English + Korean).
**Training Objective** We trained the model to learn a Korean corpus while maintaining Llama's English ability. (Training is still in progress.)
**Open LLM Leaderboard Evaluation Results**

Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 48.61 |
| ARC (25-shot) | 58.19 |
| HellaSwag (10-shot) | 81.89 |
| MMLU (5-shot) | 52.02 |
| TruthfulQA (0-shot) | 39.96 |
| Winogrande (5-shot) | 74.82 |
| GSM8K (5-shot) | 0.76 |
| DROP (3-shot) | 32.61 |
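The Open LLM Leaderboard computes these metrics with EleutherAI's lm-evaluation-harness. The sketch below shows how one row (ARC, 25-shot) could be re-run locally; the hub id and harness version (≥ 0.4, which provides `simple_evaluate`) are assumptions.

```python
# Hedged sketch: reproducing the ARC (25-shot) row with EleutherAI's
# lm-evaluation-harness (pip install lm-eval). The hub id is hypothetical.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=hyunseoki/ko-en-llama2-13b",  # assumed hub id
    tasks=["arc_challenge"],  # the leaderboard scores ARC 25-shot
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```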