🚧 Note: this repo is under construction 🚧
Llama-2-Ko 🦙🇰🇷
Llama-2-Ko is an advanced iteration of Llama 2, further pretrained with an expanded vocabulary and an additional Korean corpus. Like its predecessor, Llama-2-Ko is a family of generative text models ranging from 7 billion to 70 billion parameters. This repository hosts the 70B pretrained version in the Hugging Face Transformers format; for the other models, consult the index below.
Model Details
Model Developers Junbum Lee (Beomi)
Variations Llama-2-Ko will come in a range of parameter sizes (7B, 13B, and 70B) as well as pretrained and fine-tuned variations.
Input Models input text only.
Output Models generate text only.
Usage
Use with 8-bit inference
- Requires >74GB of VRAM (e.g., 4x RTX 3090/4090, 1x A100/H100 80GB, or 2x RTX 6000 Ada/A6000 48GB)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_8bit = AutoModelForCausalLM.from_pretrained(
    "beomi/llama-2-ko-70b",
    load_in_8bit=True,  # requires the bitsandbytes package
    device_map="auto",
)
tk = AutoTokenizer.from_pretrained('beomi/llama-2-ko-70b')
pipe = pipeline('text-generation', model=model_8bit, tokenizer=tk)

def gen(x):
    # This model is NOT fine-tuned on an instruction dataset, so this prompt is not optimal.
    gended = pipe(
        f"### Title: {x}\n\n### Contents:",
        max_new_tokens=300,
        top_p=0.95,
        do_sample=True,
    )[0]['generated_text']
    print(len(gended))
    print(gended)
```
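Note that recent versions of Transformers deprecate the bare `load_in_8bit=True` argument in favor of an explicit quantization config. A minimal sketch, assuming a recent `transformers` and the `bitsandbytes` package are installed:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Equivalent 8-bit load on recent transformers versions.
model_8bit = AutoModelForCausalLM.from_pretrained(
    "beomi/llama-2-ko-70b",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
```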
Use with bf16 inference
- Requires >150GB of VRAM (e.g., 8x RTX 3090/4090, 2x A100/H100 80GB, or 4x RTX 6000 Ada/A6000 48GB)
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model = AutoModelForCausalLM.from_pretrained(
    "beomi/llama-2-ko-70b",
    torch_dtype=torch.bfloat16,  # load weights in bf16 (the default would be fp32)
    device_map="auto",
)
tk = AutoTokenizer.from_pretrained('beomi/llama-2-ko-70b')
pipe = pipeline('text-generation', model=model, tokenizer=tk)

def gen(x):
    # This model is NOT fine-tuned on an instruction dataset, so this prompt is not optimal.
    gended = pipe(
        f"### Title: {x}\n\n### Contents:",
        max_new_tokens=300,
        top_p=0.95,
        do_sample=True,
    )[0]['generated_text']
    print(len(gended))
    print(gended)
```
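A sample call, with a hypothetical title (sampling is enabled, so output varies between runs):

```python
gen("제주도 여행")  # hypothetical title; the model continues after "### Contents:"
```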
Model Architecture
Llama-2-Ko is an auto-regressive language model that uses an optimized transformer architecture based on Llama-2.
| | Training Data | Params | Content Length | GQA | Tokens | LR |
|---|---|---|---|---|---|---|
| Llama-2-Ko 70B | A new mix of Korean online data | 70B | 4k | ✔ | >20B* | 1e-5 |

*Plan to train up to 300B tokens.
Vocab Expansion
| Model Name | Vocabulary Size | Description |
|---|---|---|
| Original Llama-2 | 32000 | Sentencepiece BPE |
| Expanded Llama-2-Ko | 46592 | Sentencepiece BPE; added Korean vocab and merges |

*Note: Llama-2-Ko 70B uses a vocabulary size of 46592, not the 46336 used by the 7B model; a new 7B model will be updated soon.
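As a quick sanity check, the expanded vocabulary can be inspected directly from the tokenizer; a minimal sketch (the expected count assumes the 70B repo above):

```python
from transformers import AutoTokenizer

tk = AutoTokenizer.from_pretrained('beomi/llama-2-ko-70b')
print(len(tk))  # expected: 46592 (may differ slightly if extra special tokens are added)
```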
Tokenizing "μλ νμΈμ, μ€λμ λ μ¨κ° μ’λ€μ. γ γ "
| Model | Tokens |
|---|---|
| Llama-2 | ['▁', '안', '<0xEB>', '<0x85>', '<0x95>', '하', '세', '요', ',', '▁', '오', '<0xEB>', '<0x8A>', '<0x98>', '은', '▁', '<0xEB>', '<0x82>', '<0xA0>', '씨', '가', '▁', '<0xEC>', '<0xA2>', '<0x8B>', '<0xEB>', '<0x84>', '<0xA4>', '요', '.', '▁', '<0xE3>', '<0x85>', '<0x8E>', '<0xE3>', '<0x85>', '<0x8E>'] |
| Llama-2-Ko 70B | ['▁안녕', '하세요', ',', '▁오늘은', '▁날', '씨가', '▁좋네요', '.', '▁', 'ㅎ', 'ㅎ'] |
Tokenizing "Llama 2: Open Foundation and Fine-Tuned Chat Models"
| Model | Tokens |
|---|---|
| Llama-2 | ['▁L', 'l', 'ama', '▁', '2', ':', '▁Open', '▁Foundation', '▁and', '▁Fine', '-', 'T', 'un', 'ed', '▁Ch', 'at', '▁Mod', 'els'] |
| Llama-2-Ko 70B | ['▁L', 'l', 'ama', '▁', '2', ':', '▁Open', '▁Foundation', '▁and', '▁Fine', '-', 'T', 'un', 'ed', '▁Ch', 'at', '▁Mod', 'els'] |
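Both comparisons above can be reproduced with `tokenizer.tokenize`; a minimal sketch, where `meta-llama/Llama-2-7b-hf` stands in for the original Llama-2 tokenizer (a gated repo, so access must already be granted):

```python
from transformers import AutoTokenizer

ko_tk = AutoTokenizer.from_pretrained('beomi/llama-2-ko-70b')
orig_tk = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')  # gated repo

text = "안녕하세요, 오늘은 날씨가 좋네요. ㅎㅎ"
print(orig_tk.tokenize(text))  # many Korean syllables fall back to raw byte tokens
print(ko_tk.tokenize(text))    # mostly whole-word pieces from the expanded vocab
```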
Model Benchmark
LM Eval Harness - Korean (polyglot branch)
- Uses EleutherAI's lm-evaluation-harness (polyglot branch): https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot
TBD
Note for oobabooga/text-generation-webui
Remove the `ValueError` catch in the `load_tokenizer` function (around line 109) in `modules/models.py`:
```diff
diff --git a/modules/models.py b/modules/models.py
index 232d5fa..de5b7a0 100644
--- a/modules/models.py
+++ b/modules/models.py
@@ -106,7 +106,7 @@ def load_tokenizer(model_name, model):
             trust_remote_code=shared.args.trust_remote_code,
             use_fast=False
         )
-    except ValueError:
+    except:
         tokenizer = AutoTokenizer.from_pretrained(
             path_to_model,
             trust_remote_code=shared.args.trust_remote_code,
```
Since Llama-2-Ko uses a fast tokenizer provided by the HF `tokenizers` library, NOT the `sentencepiece` package, the `use_fast=True` option is required when initializing the tokenizer.
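For example, when loading the tokenizer manually:

```python
from transformers import AutoTokenizer

# The repo ships a HF `tokenizers` fast tokenizer rather than a sentencepiece
# model file, so use_fast=True is required.
tokenizer = AutoTokenizer.from_pretrained('beomi/llama-2-ko-70b', use_fast=True)
```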
Apple Silicon does not support BF16 computing; use CPU instead. (BF16 is supported on NVIDIA GPUs.)
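A minimal dtype-selection sketch along these lines, assuming PyTorch (falls back to fp32 on CPU/Apple Silicon):

```python
import torch

# Use bf16 only where it is supported (NVIDIA Ampere or newer); otherwise fp32.
if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
    dtype = torch.bfloat16
else:
    dtype = torch.float32
```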
LICENSE
- Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, under LLAMA 2 COMMUNITY LICENSE AGREEMENT
- Full License available at: https://huggingface.co/beomi/llama-2-ko-70b/blob/main/LICENSE
- For commercial usage, contact the author.
Citation
```bibtex
@misc{l._junbum_2023,
  author    = { {L. Junbum} },
  title     = { llama-2-ko-70b },
  year      = 2023,
  url       = { https://huggingface.co/beomi/llama-2-ko-70b },
  doi       = { 10.57967/hf/1130 },
  publisher = { Hugging Face }
}
```
Acknowledgement
The training is supported by the TPU Research Cloud program.