---
library_name: transformers
tags:
  - pytorch
license: llama3
language:
  - ko
pipeline_tag: text-generation
---

# solar-kor-resume

**Update @ 2024.06.05: First release of Llama3-Ocelot-8B-instruct-v01**

This model card corresponds to the 10.8B Instruct version of the Llama-Ko model.

Training was done on an A100-80GB GPU.

Resources and Technical Documentation:

- Citation

Model Developers: frcp, nebchi, pepperonipizza97

## Model Information

It is an LLM capable of generating Korean text, built by training a pre-trained base model on a high-quality Korean SFT dataset followed by a DPO dataset.
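
The training code itself is not published in this card. Purely as an illustration of the SFT-then-DPO recipe described above, a minimal DPO sketch with the `trl` library might look like the following; the preference triple, the hyperparameters, and the use of the released checkpoint as the base are all assumptions, not the authors' actual setup.

```python
# Illustrative sketch only: the actual dataset and hyperparameters used for
# this model are not published. DPO trains on (prompt, chosen, rejected)
# preference triples.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "cpm-ai/Ocelot-Ko-self-instruction-10.8B-v1.0"  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# A single made-up preference pair, for illustration only.
train_dataset = Dataset.from_list([{
    "prompt": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μΈκ°€μš”?",            # "What is the capital of South Korea?"
    "chosen": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈνŠΉλ³„μ‹œμž…λ‹ˆλ‹€.",   # preferred answer
    "rejected": "μž˜ λͺ¨λ₯΄κ² μŠ΅λ‹ˆλ‹€.",                      # dispreferred answer
}])

args = DPOConfig(output_dir="ocelot-dpo", per_device_train_batch_size=1, beta=0.1)
trainer = DPOTrainer(
    model=model,                 # a frozen reference copy is created automatically
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # `tokenizer=` in older trl releases
)
trainer.train()
```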

### Inputs and outputs

- **Input:** A text string, such as a question, a prompt, or a document to be summarized.
- **Output:** Generated Korean-language text in response to the input, such as an answer to a question or a summary of a document.

### Running the model on a single / multi GPU

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer, pipeline

tokenizer = AutoTokenizer.from_pretrained("cpm-ai/Ocelot-Ko-self-instruction-10.8B-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "cpm-ai/Ocelot-Ko-self-instruction-10.8B-v1.0",
    device_map="auto",  # spread the weights across all visible GPUs
)

# Stream generated tokens to stdout as they are produced.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=4096, streamer=streamer)

text = 'λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μΈκ°€μš”?'  # "What is the capital of South Korea?"

messages = [
    {"role": "user", "content": text}
]

# Render the message list into the model's chat prompt format.
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = pipe(
    prompt,
    do_sample=True,  # sampling must be enabled for `temperature` to take effect
    temperature=0.2,
    add_special_tokens=True,
)
# Strip the echoed prompt and print only the newly generated text.
print(outputs[0]["generated_text"][len(prompt):])
```
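
The 10.8B checkpoint needs roughly 22 GB of memory in half precision, so it may not fit on a smaller single GPU. One option, not covered by the original card, is 4-bit quantized loading via `bitsandbytes`; a minimal sketch, assuming the `bitsandbytes` package is installed:

```python
# Illustrative only: 4-bit quantized loading to fit the model on a smaller
# single GPU. These quantization settings are common defaults, not the
# authors' published configuration.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "cpm-ai/Ocelot-Ko-self-instruction-10.8B-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)
```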

#### Results

```
λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈνŠΉλ³„μ‹œμž…λ‹ˆλ‹€.
μ„œμšΈνŠΉλ³„μ‹œμ—λŠ” μ²­μ™€λŒ€, κ΅­νšŒμ˜μ‚¬λ‹Ή, λŒ€λ²•μ› λ“± λŒ€ν•œλ―Όκ΅­μ˜ μ£Όμš” 정뢀기관이 μœ„μΉ˜ν•΄ μžˆμŠ΅λ‹ˆλ‹€.
λ˜ν•œ μ„œμšΈμ‹œλŠ” λŒ€ν•œλ―Όκ΅­μ˜ 경제, λ¬Έν™”, ꡐ윑, κ΅ν†΅μ˜ μ€‘μ‹¬μ§€λ‘œμ¨ λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„μ΄μž λŒ€ν‘œ λ„μ‹œμž…λ‹ˆλ‹€.μ œκ°€ 도움이 λ˜μ—ˆκΈΈ λ°”λžλ‹ˆλ‹€. 더 κΆκΈˆν•œ 점이 μžˆμœΌμ‹œλ©΄ μ–Έμ œλ“ μ§€ λ¬Όμ–΄λ³΄μ„Έμš”!
```

(Translation: "The capital of South Korea is Seoul Special City. Seoul is home to the country's major government institutions, including the Blue House, the National Assembly building, and the Supreme Court. It is also the center of the nation's economy, culture, education, and transportation, making it both the capital and the representative city of South Korea. I hope this was helpful. Feel free to ask anytime if you have further questions!")
## Citation

```bibtex
@misc{cpm-ai/Ocelot-Ko-self-instruction-10.8B-v1.0,
    author    = {{frcp, nebchi, pepperonipizza97}},
    title     = {solar-kor-resume},
    year      = 2024,
    url       = {https://huggingface.co/cpm-ai/Ocelot-Ko-self-instruction-10.8B-v1.0},
    publisher = {Hugging Face}
}
```

Results in LogicKor* are as follows:

| Model | Single turn* | Multi turn* | Overall* |
|---|---|---|---|
| gemini-1.5-pro-preview-0215 | 7.90 | 6.26 | 7.08 |
| xionic-1-72b-20240404 | 7.23 | 6.28 | 6.76 |
| Ocelot-Instruct | 6.79 | 6.71 | 6.75 |
| allganize/Llama-3-Alpha-Ko-8B-Instruct | 7.14 | 6.09 | 6.61 |