---
language:
  - ko
  - en
pipeline_tag: text-generation
inference: false
tags:
  - facebook
  - meta
  - pytorch
  - llama
  - llama-2
  - kollama
  - llama-2-ko
license: mit
library_name: transformers
---

## Update Log

- 2023.12.14: Initial Release of Open-Llama-2-Ko

# Open-Llama-2-Ko 🦙🇰🇷

Open-Llama-2-Ko is an advanced iteration of the Llama 2 model, featuring an expanded vocabulary and additional pretraining on a Korean corpus. Like its predecessor, Llama-2-Ko, it belongs to a family of generative text models with parameter counts ranging from 7 billion to 70 billion. This repository focuses on the 7B pretrained version, designed to integrate seamlessly with the Hugging Face Transformers format.

The primary distinction between the Llama-2-Ko series and Open-Llama-2-Ko lies in the dataset: Open-Llama-2-Ko uses only publicly accessible Korean corpora, such as AI Hub, Modu Corpus (모두의 말뭉치), and Korean Wikipedia.

As training was conducted solely with publicly available corpora, this model is open for unrestricted use by everyone, adhering to the MIT License.

## Model Details

Model Developers: Junbum Lee (Beomi)

Variations: Open-Llama-2-Ko will be available in different parameter sizes (7B and 13B) along with various pretrained options.

Input: The model accepts only text input.

Output: The model produces text output exclusively.

Model Architecture:

Open-Llama-2-Ko is an auto-regressive language model that leverages an optimized transformer architecture derived from Llama-2.
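Since the model targets the Hugging Face Transformers format, it should load with the standard causal-LM API. The sketch below is illustrative: the repo id `beomi/open-llama-2-ko-7b` is inferred from this card's location, and the generation settings are not prescribed by the card.

```python
MODEL_ID = "beomi/open-llama-2-ko-7b"  # inferred repo id; verify on the Hub

def generate(prompt: str, max_new_tokens: int = 32) -> str:
    """Complete `prompt` with the 7B pretrained model (requires transformers)."""
    # Imported lazily so the helper can be defined without the heavy dependency.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Example (downloads the full model weights on first run):
#     print(generate("안녕하세요, 오늘은"))
```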

| Model | Training Data | Parameters | Content Length | GQA | Tokens | Learning Rate |
| --- | --- | --- | --- | --- | --- | --- |
| Llama 2 | A curated mix of publicly accessible Korean corpora | 7B | 2k | ✗ | >15B* | 5e-5 |

## Training Corpus

The model was trained on selected datasets from AI Hub and Modu Corpus. Detailed information about the training datasets is available below:

- AI Hub: corpus/AI_HUB
  - Only the Training segment of the data was used.
  - The Validation and Test segments were deliberately excluded.
- Modu Corpus: corpus/MODU_CORPUS

The final JSONL dataset used to train this model is approximately 61GB in size.
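The card does not publish the JSONL schema, but a corpus in this format is typically streamed one JSON record per line. A minimal sketch, where the `"text"` field name is an assumption:

```python
import json

def iter_jsonl(path):
    """Yield one parsed record per line of a JSONL corpus file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                yield json.loads(line)

# Usage: texts = [record["text"] for record in iter_jsonl("corpus.jsonl")]
```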

Total token count: approximately 15 billion tokens (\*counted with the expanded tokenizer; the same corpus is >60 billion tokens with the original Llama tokenizer).
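A quick way to read these figures, using the card's own (approximate) numbers:

```python
# The same corpus is >60B tokens under the original Llama tokenizer but
# ~15B under the expanded one: roughly a 4x reduction in sequence length
# for Korean text. Both figures are approximate, as stated in the card.
original_tokens = 60e9   # lower bound, original Llama tokenizer
expanded_tokens = 15e9   # approximate, expanded tokenizer

ratio = original_tokens / expanded_tokens
print(ratio)  # 4.0
```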

## Vocab Expansion

| Model Name | Vocabulary Size | Description |
| --- | --- | --- |
| Original Llama-2 | 32000 | Sentencepiece BPE |
| Expanded Llama-2-Ko | 46336 | Sentencepiece BPE. Added Korean vocab and merges |
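Sanity-checking the numbers above: the expansion adds 14,336 entries, and the new size is a multiple of 64, a common padding choice for embedding-matrix efficiency (that motivation is an inference, not something the card states).

```python
# Arithmetic on the vocabulary sizes from the table above.
original_vocab = 32_000   # Original Llama-2, Sentencepiece BPE
expanded_vocab = 46_336   # Llama-2-Ko with added Korean vocab and merges

added_tokens = expanded_vocab - original_vocab
print(added_tokens)              # 14336 newly added tokens/merges
print(expanded_vocab % 64 == 0)  # True: padded to a multiple of 64
```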

Tokenizing "안녕하세요, 오늘은 날씨가 좋네요."

| Model | Tokens |
| --- | --- |
| Llama-2 | `['▁', '안', '<0xEB>', '<0x85>', '<0x95>', '하', '세', '요', ',', '▁', '오', '<0xEB>', '<0x8A>', '<0x98>', '은', '▁', '<0xEB>', '<0x82>', '<0xA0>', '씨', '가', '▁', '<0xEC>', '<0xA2>', '<0x8B>', '<0xEB>', '<0x84>', '<0xA4>', '요']` |
| Llama-2-Ko | `['▁안녕', '하세요', ',', '▁오늘은', '▁날', '씨가', '▁좋네요']` |
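The `<0xEB>`-style pieces in the Llama-2 row come from SentencePiece byte fallback: Hangul syllables missing from the original vocabulary are emitted as their raw UTF-8 bytes. A minimal reproduction of that rendering:

```python
def byte_fallback_tokens(char: str) -> list:
    """Render a character the way SentencePiece byte fallback would."""
    return ["<0x%02X>" % b for b in char.encode("utf-8")]

# '녕' is absent from the original Llama-2 vocab, so it becomes three
# byte tokens, matching the table above.
print(byte_fallback_tokens("녕"))  # ['<0xEB>', '<0x85>', '<0x95>']
```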

Tokenizing "Llama 2: Open Foundation and Fine-Tuned Chat Models"

| Model | Tokens |
| --- | --- |
| Llama-2 | `['▁L', 'l', 'ama', '▁', '2', ':', '▁Open', '▁Foundation', '▁and', '▁Fine', '-', 'T', 'un', 'ed', '▁Ch', 'at', '▁Mod', 'els']` |
| Llama-2-Ko | `['▁L', 'l', 'ama', '▁', '2', ':', '▁Open', '▁Foundation', '▁and', '▁Fine', '-', 'T', 'un', 'ed', '▁Ch', 'at', '▁Mod', 'els']` |

For English input the expanded tokenizer produces exactly the same tokens as the original Llama-2 tokenizer, so English behavior is unchanged.

## Model Benchmark

### LM Eval Harness - Korean (polyglot branch)

TBD

## Citation

TBD

## Acknowledgements