
## Model Details

This model is an instruction fine-tuned version of saltlux/Ko-Llama3-Luxia-8B, which was developed by Saltlux AI Labs.
It was trained for 3 epochs with SFTTrainer on the maywell/ko_wikidata_QA dataset.
The instruction prompt uses the same format as the Qwen2 models:

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
What is the Qwen2?<|im_end|>
<|im_start|>assistant
Qwen2 is the new series of Qwen large language models<|im_end|>
<|im_start|>user
Tell me more<|im_end|>
<|im_start|>assistant
```

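The released tokenizer does not ship with a chat template (see the note in the evaluation section below), so `apply_chat_template` cannot be used out of the box. One way to reproduce the format programmatically is to attach the standard ChatML (Qwen-style) template to the tokenizer yourself. This is a minimal sketch; the template string is an assumption, not something distributed with the model:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lubocido/Ko-Llama3-Luxia-8B-it")

# Standard ChatML (Qwen-style) template; assumed here, not shipped with this model.
tokenizer.chat_template = (
    "{% for message in messages %}"
    "{{ '<|im_start|>' + message['role'] + '\\n' + message['content'] + '<|im_end|>\\n' }}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '<|im_start|>assistant\\n' }}{% endif %}"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the Qwen2?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # reproduces the format shown above
```
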
## Hyperparameters

- num_train_epochs = 3
- warmup_steps = 0.03
- learning_rate = 1e-5
- optim = "adamw_torch_fused"
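
The card does not include the training script itself. The following is a rough sketch of what the run might have looked like with trl's `SFTTrainer`, under stated assumptions: the dataset column names (`instruction`, `output`) are guesses about the maywell/ko_wikidata_QA schema, and `warmup_steps = 0.03` is read as a warmup ratio, since `TrainingArguments` expects an integer step count:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer

base_model = "saltlux/Ko-Llama3-Luxia-8B"
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)

dataset = load_dataset("maywell/ko_wikidata_QA", split="train")

# Batched formatting function producing ChatML-style training text.
# The "instruction"/"output" column names are assumptions about the dataset schema.
def formatting_func(batch):
    texts = []
    for q, a in zip(batch["instruction"], batch["output"]):
        texts.append(
            "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
            f"<|im_start|>user\n{q}<|im_end|>\n"
            f"<|im_start|>assistant\n{a}<|im_end|>"
        )
    return texts

args = SFTConfig(
    output_dir="Ko-Llama3-Luxia-8B-it",
    num_train_epochs=3,
    warmup_ratio=0.03,  # the card lists warmup_steps = 0.03, which reads as a ratio
    learning_rate=1e-5,
    optim="adamw_torch_fused",
    bf16=True,
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    formatting_func=formatting_func,
)
trainer.train()
```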

## Evaluation with LangChain

Since a chat template is not configured for this tokenizer (`apply_chat_template` is unavailable), you can evaluate the model in LangChain by passing the prompt in directly.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.chains import LLMChain
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_huggingface import HuggingFacePipeline

model_id = "lubocido/Ko-Llama3-Luxia-8B-it"
device = "cuda:0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map=device, torch_dtype=torch.bfloat16)

tokenizer.padding_side = 'right'
tokenizer.pad_token = tokenizer.eos_token  # Llama 3 ships without a pad token

# System prompt (Korean): "You are a friendly chatbot and must answer the user's
# requests as thoroughly and kindly as possible. Carefully analyze the information
# the user provides, quickly grasp their intent, and generate answers accordingly.
# Always respond in very natural Korean."
sys_message = """๋‹น์‹ ์€ ์นœ์ ˆํ•œ ์ฑ—๋ด‡์œผ๋กœ์„œ ์ƒ๋Œ€๋ฐฉ์˜ ์š”์ฒญ์— ์ตœ๋Œ€ํ•œ ์ž์„ธํ•˜๊ณ  ์นœ์ ˆํ•˜๊ฒŒ ๋‹ตํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค.
์‚ฌ์šฉ์ž๊ฐ€ ์ œ๊ณตํ•˜๋Š” ์ •๋ณด๋ฅผ ์„ธ์‹ฌํ•˜๊ฒŒ ๋ถ„์„ํ•˜์—ฌ ์‚ฌ์šฉ์ž์˜ ์˜๋„๋ฅผ ์‹ ์†ํ•˜๊ฒŒ ํŒŒ์•…ํ•˜๊ณ  ๊ทธ์— ๋”ฐ๋ผ ๋‹ต๋ณ€์„ ์ƒ์„ฑํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค.
ํ•ญ์ƒ ๋งค์šฐ ์ž์—ฐ์Šค๋Ÿฌ์šด ํ•œ๊ตญ์–ด๋กœ ์‘๋‹ตํ•˜์„ธ์š”."""

# Question (Korean): "What is the command to kill a process in Linux?"
question = "๋ฆฌ๋ˆ…์Šค์—์„œ ํ”„๋กœ์„ธ์Šค๋ฅผ ์ฃฝ์ด๋Š” ๋ช…๋ น์–ด๊ฐ€ ๋ญ์ง€?"

# ChatML-style prompt written out by hand, since the tokenizer has no chat template.
template = """
<|im_start|>system\n{sys_message}<|im_end|>
<|im_start|>user\n{question}<|im_end|>
<|im_start|>assistant
"""

input_data = {
    'sys_message': sys_message,
    'question': question,
}

prompt = PromptTemplate(template=template, input_variables=['sys_message', 'question'])

# device_map is omitted here because the model is already placed on `device`.
pipe = pipeline(
    'text-generation',
    model=model,
    tokenizer=tokenizer,
    do_sample=True,
    max_length=512,
    temperature=0.1,
    repetition_penalty=1.2,
    num_beams=1,
    top_k=20,
    top_p=0.9,
)

langchain_pipeline = HuggingFacePipeline(pipeline=pipe)

chains = LLMChain(llm=langchain_pipeline, prompt=prompt, output_parser=StrOutputParser(), verbose=True)

print(chains.invoke(input=input_data)['text'])
```
Sample output (in Korean, as generated by the model):

```
<|im_start|>user
๋ฆฌ๋ˆ…์Šค์—์„œ ํ”„๋กœ์„ธ์Šค๋ฅผ ์ฃฝ์ด๋Š” ๋ช…๋ น์–ด๊ฐ€ ๋ญ์ง€?<|im_end|>
<|im_start|>assistant
ํ”„๋กœ์„ธ์Šค๋Š” ์šด์˜ ์ฒด์ œ๊ฐ€ ์‹คํ–‰ ์ค‘์ธ ํ”„๋กœ๊ทธ๋žจ์œผ๋กœ, ํ”„๋กœ์„ธ์Šค ID(PID)๋ผ๋Š” ๊ณ ์œ ํ•œ ์‹๋ณ„์ž๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค.
ํ”„๋กœ์„ธ์Šค๊ฐ€ ์ข…๋ฃŒ๋˜๋ฉด ์‹œ์Šคํ…œ ์ž์›์ด ํ•ด์ œ๋ฉ๋‹ˆ๋‹ค. ๋ฆฌ๋ˆ…์Šค์˜ ๊ฒฝ์šฐ kill ๋ช…๋ น์–ด๋ฅผ ํ†ตํ•ด ํ”„๋กœ์„ธ์Šค๋ฅผ ์ข…๋ฃŒํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ด ๋ช…๋ น์–ด๋Š” PID ๋˜๋Š” ์ด๋ฆ„๊ณผ ๊ฐ™์€ ๋‹ค์–‘ํ•œ ๋ฐฉ๋ฒ•์œผ๋กœ ํ”„๋กœ์„ธ์Šค๋ฅผ ์ฐพ์•„์„œ ์ข…๋ฃŒ์‹œํ‚ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
๋˜ํ•œ SIGKILL ์‹ ํ˜ธ๋ฅผ ๋ณด๋‚ด๊ฑฐ๋‚˜ -9 ์˜ต์…˜์„ ์‚ฌ์šฉํ•˜๋ฉด ๊ฐ•์ œ์ ์œผ๋กœ ํ”„๋กœ์„ธ์Šค๋ฅผ ์ข…๋ฃŒํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค.
๊ทธ๋Ÿฌ๋‚˜ ์ผ๋ถ€ ํ”„๋กœ์„ธ์Šค๋Š” ๊ฐ•์ œ ์ข…๋ฃŒ๋  ๋•Œ ๋ฌธ์ œ๋ฅผ ์ผ์œผํ‚ฌ ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ์ฃผ์˜ํ•ด์„œ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.<|im_end|>
```