---
library_name: transformers
tags:
- pytorch
license: llama3
language:
- ko
pipeline_tag: text-generation
---
# solar-kor-resume
**Update @ 2024.06.05:** First release of Llama3-Ocelot-8B-instruct-v01
This model card corresponds to the 10.8B Instruct version of the Llama-Ko model.
Training was done on an A100-80GB GPU.
**Resources and Technical Documentation:**
- Citation
**Model Developers:** frcp, nebchi, pepperonipizza97
## Model Information
This is an LLM capable of generating Korean text, built by further training a pre-trained base model on a high-quality Korean SFT dataset and a DPO dataset.
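The exact SFT/DPO recipe is not published in this card. As a purely illustrative sketch of what a DPO stage can look like with the TRL library (the base checkpoint, dataset file, and hyperparameters below are hypothetical placeholders, not the actual training setup):

```python
# Hypothetical sketch of a DPO stage using TRL; NOT the actual training code.
# Assumes a recent trl release (>= 0.12), where the tokenizer is passed
# to the trainer as `processing_class`.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "some-korean-sft-checkpoint"  # placeholder, not the real base model
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# DPO trains on preference triples: (prompt, chosen, rejected)
dataset = load_dataset("json", data_files="korean_dpo_pairs.jsonl", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="ocelot-dpo", beta=0.1),  # beta scales the KL penalty
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```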
### Inputs and outputs
- Input: Text string, such as a question, a prompt, or a document to be summarized.
- Output: Generated Korean-language text in response to the input, such as an answer to a question, or a summary of a document.
### Running the model on a single / multi GPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer, pipeline

tokenizer = AutoTokenizer.from_pretrained("cpm-ai/Ocelot-Ko-self-instruction-10.8B-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "cpm-ai/Ocelot-Ko-self-instruction-10.8B-v1.0",
    device_map="auto",  # spreads the weights across all available GPUs
)

# Stream tokens to stdout as they are generated
streamer = TextStreamer(tokenizer, skip_prompt=True)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=4096, streamer=streamer)

text = '대한민국의 수도는 어디인가요?'  # "What is the capital of the Republic of Korea?"

messages = [
    {
        "role": "user",
        "content": "{}".format(text)
    }
]

# Render the chat messages into the model's expected prompt format
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = pipe(
    prompt,
    do_sample=True,  # required for temperature to take effect
    temperature=0.2,
    add_special_tokens=True
)
print(outputs[0]["generated_text"][len(prompt):])
```
#### Results
```
대한민국의 수도는 서울특별시입니다.
서울특별시에는 청와대, 국회의사당, 대법원 등 대한민국의 주요 정부기관이 위치해 있습니다.
또한 서울에는 대한민국의 경제, 문화, 교육, 교통의 중심지로써 대한민국의 수도이자 대표 도시입니다. 제가 도움이 되었길 바랍니다. 더 궁금한 점이 있으시면 언제든지 물어보세요!
```

*(Translation: The capital of the Republic of Korea is Seoul. Seoul is home to major government institutions of the Republic of Korea, such as Cheongwadae, the National Assembly, and the Supreme Court. Seoul is also the country's capital and representative city, serving as the center of its economy, culture, education, and transportation. I hope this was helpful. If you have any more questions, feel free to ask anytime!)*
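Note that `device_map="auto"` in the snippet above shards the weights across every visible GPU, which covers the multi-GPU case. If the model does not fit on a single GPU in half precision, 4-bit quantization is one workaround; below is a minimal sketch assuming the `bitsandbytes` package is installed (a configuration not tested by the authors for this particular model):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization; reduces weight memory to roughly a quarter of fp16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "cpm-ai/Ocelot-Ko-self-instruction-10.8B-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)
```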
```bibtex
@misc{cpm-ai/Ocelot-Ko-self-instruction-10.8B-v1.0,
  author = {frcp, nebchi, pepperonipizza97},
  title = {solar-kor-resume},
  year = 2024,
  url = {https://huggingface.co/cpm-ai/Ocelot-Ko-self-instruction-10.8B-v1.0},
  publisher = {Hugging Face}
}
```
Results on the LogicKor* benchmark are as follows:
| Model | Single turn* | Multi turn* | Overall* |
|---|---|---|---|
| gemini-1.5-pro-preview-0215 | 7.90 | 6.26 | 7.08 |
| xionic-1-72b-20240404 | 7.23 | 6.28 | 6.76 |
| Ocelot-Instruct | 6.79 | 6.71 | 6.75 |
| allganize/Llama-3-Alpha-Ko-8B-Instruct | 7.14 | 6.09 | 6.61 |