Model Developers Hyunseok Lee, Taeyoung Kim (KAIST alinlab, omnious.ai)
Input Models input text only.
Output Models generate text only.
Model Architecture
ko-en-llama2-13b is an auto-regressive language model based on the LLaMA2 transformer architecture.
Base Model
Llama-2-13B
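Because the model follows the standard Llama 2 causal-LM architecture, it can be loaded with the Hugging Face transformers Auto classes. The sketch below is a minimal example; the repository ID is a placeholder and should be replaced with the actual repo name.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo ID -- replace with the actual Hugging Face repository name.
MODEL_ID = "ko-en-llama2-13b"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # 13B parameters; half precision to fit on a single large GPU
    device_map="auto",          # requires the `accelerate` package
)
```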
Training Dataset
Open datasets: Wiki and AI Hub (English + Korean).
Training Objective
We train the model on a Korean corpus so that it acquires Korean while preserving Llama 2's English ability.
(Training is still in progress.)
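Since the objective is to keep both Korean and English capability, a quick bilingual generation check can look like the following sketch, assuming `model` and `tokenizer` were loaded as in the snippet above.

```python
# Assumes `model` and `tokenizer` from the loading example above.
prompts = [
    "대한민국의 수도는 어디인가요?",          # Korean: "What is the capital of South Korea?"
    "What is the capital of South Korea?",    # English prompt for the same question
]

for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```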