
FUT FUT CHAT BOT

  • μ˜€ν”ˆμ†ŒμŠ€ λͺ¨λΈμ— LLM fine tuning κ³Ό RAG λ₯Ό μ μš©ν•œ μƒμ„±ν˜• AI
  • 풋살에 λŒ€ν•œ 관심이 λ†’μ•„μ§€λ©΄μ„œ μˆ˜μš” λŒ€λΉ„ μž…λ¬Έμžλ₯Ό μœ„ν•œ 정보 제곡 μ„œλΉ„μŠ€κ°€ ν•„μš”ν•˜λ‹€κ³  느껴 μ œμž‘ν•˜κ²Œ 됨
  • ν’‹μ‚΄ ν”Œλž«νΌμ— μ‚¬μš©λ˜λŠ” ν’‹μ‚΄ 정보 λ„μš°λ―Έ 챗봇
  • 'ν•΄μš”μ²΄'둜 λ‹΅ν•˜λ©° λ¬Έμž₯ 끝에 'μ–Όλ§ˆλ“ μ§€ λ¬Όμ–΄λ³΄μ„Έμš”~ ν’‹ν’‹~!' 을 좜λ ₯함
  • train for 7h23m
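
Since RAG is mentioned above, here is a minimal sketch of what RAG-style context injection looks like with this model's prompt format; the documents and the keyword-overlap retriever below are hypothetical stand-ins for the real retrieval pipeline:

  # Hypothetical mini-retriever: rank documents by word overlap with the
  # query and paste the best match into the system prompt as context.
  docs = [
      "풋살은 ν•œ νŒ€ 5λͺ…μœΌλ‘œ κ²½κΈ°ν•œλ‹€.",            # "Futsal is played five a side."
      "ν’‹μ‚΄ κ²½κΈ°λŠ” μ „ν›„λ°˜ 각 20λΆ„μœΌλ‘œ μ§„ν–‰λœλ‹€.",  # "A match has two 20-minute halves."
  ]

  def retrieve(query: str, k: int = 1) -> list[str]:
      overlap = lambda doc: len(set(query.split()) & set(doc.split()))
      return sorted(docs, key=overlap, reverse=True)[:k]

  context = "\n".join(retrieve("ν’‹μ‚΄ ν•œ νŒ€ 인원은 λͺ‡ λͺ…μ΄μ•Ό?"))
  system_prompt = f"λ‹€μŒ contextμ—μ„œλ§Œ λŒ€λ‹΅ν•΄:\n{context}"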

HOW TO USE

  
  #!pip install transformers==4.40.0 accelerate
  import torch
  from transformers import AutoTokenizer, AutoModelForCausalLM
  
  model_id = 'Dongwookss/small_fut_final'
  
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(
      model_id,
      torch_dtype=torch.bfloat16,  # load weights in bfloat16
      device_map="auto",           # place layers on available devices automatically
  )
  model.eval()
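
If GPU memory is limited, the same checkpoint can likely be loaded in 4-bit via bitsandbytes as well; this variant is a suggestion, not part of the original card, and requires `pip install bitsandbytes`:

  from transformers import BitsAndBytesConfig

  # Optional: 4-bit NF4 quantization to cut memory use substantially vs bf16.
  bnb_config = BitsAndBytesConfig(
      load_in_4bit=True,
      bnb_4bit_quant_type="nf4",
      bnb_4bit_compute_dtype=torch.bfloat16,
  )
  model = AutoModelForCausalLM.from_pretrained(
      model_id,
      quantization_config=bnb_config,
      device_map="auto",
  )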

Query

from transformers import TextStreamer

# System prompt (Korean): "Answer only from the provided context, and reply
# that you don't know anything that is not in the context."
PROMPT = '''Below is an instruction that describes a task. Write a response that appropriately completes the request.
μ œμ‹œν•˜λŠ” contextμ—μ„œλ§Œ λŒ€λ‹΅ν•˜κ³  context에 μ—†λŠ” λ‚΄μš©μ€ λͺ¨λ₯΄κ² λ‹€κ³  λŒ€λ‹΅ν•΄'''

instruction = "ν’‹μ‚΄ κ²½κΈ° κ·œμΉ™μ„ μ•Œλ €μ€˜"  # example user query ("Tell me the rules of futsal"); not in the original card

messages = [
    {"role": "system", "content": PROMPT},
    {"role": "user", "content": instruction},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),  # end-of-turn token recognized by the chat template
]

text_streamer = TextStreamer(tokenizer)
_ = model.generate(
    input_ids,
    max_new_tokens=4096,
    eos_token_id=terminators,
    do_sample=True,
    streamer=text_streamer,
    temperature=0.6,
    top_p=0.9,
    repetition_penalty=1.1,
)
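
To capture the reply as a string instead of streaming it to stdout, decode only the newly generated tokens (a small variation on the snippet above):

outputs = model.generate(
    input_ids,
    max_new_tokens=4096,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    repetition_penalty=1.1,
)
# Skip the prompt tokens at the front and decode only the model's answer.
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)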

Model Details

Model Description

This is the model card of a πŸ€— transformers model that has been pushed to the Hub. This model card has been automatically generated.

  • Developed by: Dongwookss
  • Model type: text generation
  • Language(s) (NLP): Korean
  • Finetuned from model: HuggingFaceH4/zephyr-7b-beta
  • Model size: 7.24B params (BF16, safetensors)

Data

https://huggingface.co/datasets/mintaeng/llm_futsaldata_yo

ν•™μŠ΅ 데이터셋은 beomi/KoAlpaca-v1.1a λ₯Ό 베이슀둜 μΆ”κ°€, ꡬ좕, μ „μ²˜λ¦¬ μ§„ν–‰ν•œ 23.5k λ°μ΄ν„°λ‘œ νŠœλ‹ν•˜μ˜€μŠ΅λ‹ˆλ‹€. 데이터셋은 instruction, input, output 으둜 κ΅¬μ„±λ˜μ–΄ 있으며 tuning λͺ©ν‘œμ— 맞게 말투 μˆ˜μ •ν•˜μ˜€μŠ΅λ‹ˆλ‹€. 도메인 정보에 λŒ€ν•œ 데이터 μΆ”κ°€ν•˜μ˜€μŠ΅λ‹ˆλ‹€.

Training & Result

Training Procedure

Training used LoRA together with the SFTTrainer approach.

Training Hyperparameters

  • Training regime: bf16 mixed precision
  • LoRA configuration (see the training sketch below):

  r=32,
  lora_alpha=64,     # rule of thumb used: QLoRA alpha = r/2, plain LoRA alpha = r*2
  lora_dropout=0.05,
  target_modules=[
      "q_proj", "k_proj", "v_proj", "o_proj",
      "gate_proj", "up_proj", "down_proj",
  ],  # target modules: all attention and MLP projection layers
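
A minimal sketch of how these settings could be wired together with peft and trl's SFTTrainer follows; the prompt template, sequence length, and trainer arguments are assumptions rather than the exact training script:

import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_id = "HuggingFaceH4/zephyr-7b-beta"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

peft_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

dataset = load_dataset("mintaeng/llm_futsaldata_yo", split="train")

def to_text(row):
    # Hypothetical prompt template; the actual one was not published.
    return {"text": f"### Instruction:\n{row['instruction']}\n\n"
                    f"### Input:\n{row['input']}\n\n"
                    f"### Response:\n{row['output']}"}

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset.map(to_text),
    peft_config=peft_config,
    dataset_text_field="text",  # trl<=0.8 API; newer versions take an SFTConfig
    max_seq_length=1024,        # assumed value
    args=TrainingArguments(output_dir="futfut-lora", bf16=True,
                           per_device_train_batch_size=2),
)
trainer.train()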

Result

https://github.com/lucide99/Chatbot_FutFut

Environment

Trained on an NVIDIA L4 GPU.

