Update README.md
README.md (CHANGED)
@@ -29,6 +29,8 @@ This repository focuses on the 7B pretrained version, which is tailored to fit t
 The main difference between the Llama-2-Ko series and Open-Llama-2-Ko is the dataset: the Open-Llama-2-Ko series used only publicly accessible Korean corpora,
 including [AI Hub](https://www.aihub.or.kr), [Modu Corpus, 모두의 말뭉치](https://corpus.korean.go.kr/), and [Korean Wikipedia](https://dumps.wikimedia.org/kowiki/).
 
+Since training used only publicly available corpora, this model is open to everyone without restriction. (*This model follows the MIT License.)
+
 ## Model Details
 
 **Model Developers** Junbum Lee (Beomi)

@@ -45,7 +47,7 @@ Open-Llama-2-Ko is an auto-regressive language model that uses an optimized tran
 
 ||Training Data|Params|Content Length|GQA|Tokens|LR|
 |---|---|---|---|---|---|---|
-|Llama 2|*A new mix of publicly accessible Korean corpora*|7B|
+|Llama 2|*A new mix of publicly accessible Korean corpora*|7B|2k|✗|>15B*|5e<sup>-5</sup>|
 
 **Train Corpus**