
Easy-Systems/easy-ko-Llama3-8b-Instruct-v1

(Model card image generated with DALL-E.)

  • (μ£Ό)μ΄μ§€μ‹œμŠ€ν…œμ˜ 첫번째 LLM λͺ¨λΈμΈ easy-ko-Llama3-8b-Instruct-v1은 μ˜μ–΄ 기반 λͺ¨λΈμΈ meta-llama/Meta-Llama-3-8B-Instructλ₯Ό 베이슀둜 ν•˜μ—¬ ν•œκ΅­μ–΄ νŒŒμΈνŠœλ‹ 된 λͺ¨λΈμž…λ‹ˆλ‹€.
  • LLM λͺ¨λΈμ€ μΆ”ν›„ μ§€μ†μ μœΌλ‘œ μ—…λ°μ΄νŠΈ 될 μ˜ˆμ • μž…λ‹ˆλ‹€.

Data

  • AI hub (https://www.aihub.or.kr/) 데이터λ₯Ό λ‹€μ–‘ν•œ Task (QA, Summary, Translate λ“±)둜 κ°€κ³΅ν•˜μ—¬ νŒŒμΈνŠœλ‹μ— μ‚¬μš©.
  • 사내 자체 κ°€κ³΅ν•œ 데이터λ₯Ό ν™œμš©ν•˜μ—¬ νŒŒμΈνŠœλ‹μ— μ‚¬μš©.

How to use

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Easy-Systems/easy-ko-Llama3-8b-Instruct-v1"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    attn_implementation="flash_attention_2",  # requires the flash-attn package; omit to use the default attention
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "λ¦¬λˆ…μŠ€ ν”„λ‘œμ„ΈμŠ€λ₯Ό κ°•μ œλ‘œ μ’…λ£Œν•˜λŠ” 방법은?"  # "How do I force-terminate a Linux process?"
messages = [
    # System prompt: "You are a friendly AI chatbot. Answer requests step by step, concisely, in Korean."
    {"role": "system", "content": "당신은 μΉœμ ˆν•œ AI chatbot μž…λ‹ˆλ‹€. μš”μ²­μ— λŒ€ν•΄μ„œ step-by-step 으둜 κ°„κ²°ν•˜κ²Œ ν•œκ΅­μ–΄(Korean)둜 λ‹΅λ³€ν•΄μ£Όμ„Έμš”."},
    # User turn uses the model's instruction/response markers ("### λͺ…λ Ήμ–΄" / "### 응닡")
    {"role": "user", "content": f"\n\n### λͺ…λ Ήμ–΄: {prompt}\n\n### 응닡:"}
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=1024,
    eos_token_id=terminators,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.2,
    repetition_penalty=1.3,
    top_p=0.9,
    top_k=10,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True).strip())
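For reference, `apply_chat_template` expands the message list into Llama-3's chat markup before tokenization. The sketch below hand-rolls that rendering so you can see the resulting string; it assumes this fine-tune keeps the stock Llama-3 template, and `render_llama3` is an illustrative helper, not part of `transformers`.

```python
# Sketch of the string apply_chat_template produces for Llama-3-style models
# (assumption: this model uses the standard Llama-3 chat template).
def render_llama3(messages, add_generation_prompt=True):
    parts = ["<|begin_of_text|>"]
    for m in messages:
        # Each turn: role header, blank line, content, end-of-turn token.
        parts.append(f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>")
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)
```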

Example Output

λ¦¬λˆ…μŠ€μ˜ 경우, `kill` λ˜λŠ” `pkill` λͺ…령을 μ‚¬μš©ν•˜μ—¬ νŠΉμ • ν”„λ‘œμ„ΈμŠ€λ₯Ό κ°•μ œλ‘œ μ’…λ£Œν•  수 μžˆμŠ΅λ‹ˆλ‹€.

1단계: ps -ef | grep <ν”„λ‘œμ„ΈμŠ€_이름>`으둜 ν˜„μž¬ μ‹€ν–‰ 쀑인 λͺ¨λ“  ν”„λ‘œμ„ΈμŠ€κ°€ ν‘œμ‹œλ©λ‹ˆλ‹€.
2단계: kill <ν”„λ‘œμ„ΈμŠ€_ID>`λ₯Ό μž…λ ₯ν•˜λ©΄ ν•΄λ‹Ή ν”„λ‘œμ„ΈμŠ€κ°€ μ¦‰μ‹œ μ’…λ£Œλ©λ‹ˆλ‹€.

λ˜λŠ” `-9`(SIGKILL μ‹ ν˜Έ)λ₯Ό μ§€μ •ν•˜μ—¬ ν”„λ‘œμ„ΈμŠ€λ₯Ό κ°•μ œλ‘œ μ’…λ£Œν•˜λ„λ‘ ν•  μˆ˜λ„ 있으며, μ΄λŠ” 운영 μ²΄μ œμ—μ„œ μ •μƒμ μœΌλ‘œ μ’…λ£Œν•˜κΈ° 전에 λ§ˆμ§€λ§‰ 기회λ₯Ό 주지 μ•Šκ³  λ°”λ‘œ 죽게 λ©λ‹ˆλ‹€:

3단계: kill -9 <ν”„λ‘œμ„ΈμŠ€_ID>`λ₯Ό μž…λ ₯ν•©λ‹ˆλ‹€.

참고둜, μ‹œμŠ€ν…œμ˜ μ•ˆμ •μ„ μœ„ν•΄ ν•„μš”ν•œ νŒŒμΌμ΄λ‚˜ μ„œλΉ„μŠ€κ°€ μžˆλŠ” κ²½μš°μ—λŠ” 직접 μ‚­μ œν•˜μ§€ 말아야 ν•˜λ©°, μ μ ˆν•œ κΆŒν•œκ³Ό μ§€μ‹œμ— 따라 μ²˜λ¦¬ν•΄μ•Ό ν•©λ‹ˆλ‹€. λ˜ν•œ 일뢀 ν”„λ‘œκ·Έλž¨λ“€μ€ κ°•μ œμ’…λ£Œ μ‹œ 데이터 손싀 λ“±μ˜ λ¬Έμ œκ°€ λ°œμƒν•  κ°€λŠ₯성이 μžˆμœΌλ―€λ‘œ 미리 μ €μž₯된 μž‘μ—… λ‚΄μš© 등을 ν™•μΈν•˜κ³  μ’…λ£Œν•˜μ‹œκΈ° λ°”λžλ‹ˆλ‹€. 
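The shell steps in the answer above map directly onto Python's `os.kill`. A minimal Unix-only sketch, where the `sleep 60` child is just a stand-in for the process to terminate:

```python
import os
import signal
import subprocess
import time

# Start a dummy long-running process to terminate.
proc = subprocess.Popen(["sleep", "60"])

# Polite request first (SIGTERM): the process may run cleanup handlers.
os.kill(proc.pid, signal.SIGTERM)
time.sleep(0.5)

# Still alive? Force it (SIGKILL cannot be caught or ignored).
if proc.poll() is None:
    os.kill(proc.pid, signal.SIGKILL)

proc.wait()  # reap the child; returncode is -<signal number>
```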

License

  • Creative Commons Attribution-NonCommercial-ShareAlike 4.0 (CC-BY-NC-SA-4.0)
  • 상업적 μ‚¬μš© μ‹œ, μ•„λž˜μ˜ μ—°λ½μ²˜λ‘œ λ¬Έμ˜ν•΄μ£Όμ‹œκΈ° λ°”λžλ‹ˆλ‹€.

Contact

  • 상업적 μ‚¬μš© λ˜λŠ” 기타 문의 사항에 λŒ€ν•˜μ—¬ μ—°λ½ν•˜μ‹œλ €λ©΄ λ‹€μŒ μ΄λ©”μΌλ‘œ 연락 μ£Όμ‹­μ‹œμ˜€.
  • κ°•ν˜„κ΅¬: [email protected]
Model size: 8.03B params (Safetensors, BF16)
