
Update @ 2024.03.18

T3Q-ko-solar-sft-v2.0

This model is an SFT fine-tuned version of davidkim205/nox-solar-10.7b-v4.

Model Developers: Chihoon Lee (chlee10), T3Q
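
For reference, the snippet below is a minimal sketch of loading the model for text generation with transformers. The prompt shown is a placeholder, since the card does not document an expected prompt template.

  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_id = "chlee10/T3Q-ko-solar-sft-v2.0"
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(
      model_id,
      torch_dtype=torch.float16,  # half precision to fit the 10.7B model on GPU
      device_map="auto",
  )

  # Placeholder prompt; the expected prompt format is not documented here.
  prompt = "한국어로 자기소개를 해 주세요."
  inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
  outputs = model.generate(**inputs, max_new_tokens=256)
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))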

Training hyperparameters

The following hyperparameters were used during training:

  # ๋ฐ์ดํ„ฐ์…‹๊ณผ ํ›ˆ๋ จ ํšŸ์ˆ˜์™€ ๊ด€๋ จ๋œ ํ•˜์ดํผ ํŒŒ๋ผ๋ฏธํ„ฐ
  batch_size = 16
  num_epochs = 1
  micro_batch = 1
  gradient_accumulation_steps = batch_size // micro_batch
  
  # Hyperparameters for the training method
  cutoff_len = 4096
  lr_scheduler = 'cosine'
  warmup_ratio = 0.06 # warmup_steps = 100
  learning_rate = 4e-4
  optimizer = 'adamw_torch'
  weight_decay = 0.01
  max_grad_norm = 1.0
  
  # LoRA config(QLoRA)
  lora_r = 16
  lora_alpha = 16
  lora_dropout = 0.05
  lora_target_modules = ["gate_proj", "down_proj", "up_proj"]
  
  # Options for the input values produced by the tokenizer
  train_on_inputs = False
  add_eos_token = False
  
  # NEFTune params
  noise_alpha = 5
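
The card does not include the training script itself, and some parameter names (train_on_inputs, add_eos_token) suggest an alpaca-lora-style trainer without a direct equivalent below. As one possible reading, the following is a minimal sketch of wiring these values together with peft and TRL's SFTTrainer, assuming 4-bit QLoRA loading; the dataset path, text field, quantization settings, and output directory are placeholders.

  import torch
  from datasets import load_dataset
  from peft import LoraConfig
  from transformers import (AutoModelForCausalLM, AutoTokenizer,
                            BitsAndBytesConfig, TrainingArguments)
  from trl import SFTTrainer

  base_model = "davidkim205/nox-solar-10.7b-v4"
  tokenizer = AutoTokenizer.from_pretrained(base_model)

  # 4-bit NF4 quantization for QLoRA (assumed; the exact quantization
  # settings are not given in the card).
  bnb_config = BitsAndBytesConfig(
      load_in_4bit=True,
      bnb_4bit_quant_type="nf4",
      bnb_4bit_compute_dtype=torch.bfloat16,
  )
  model = AutoModelForCausalLM.from_pretrained(
      base_model, quantization_config=bnb_config, device_map="auto"
  )

  # LoRA adapter settings, taken directly from the values above.
  peft_config = LoraConfig(
      r=16,
      lora_alpha=16,
      lora_dropout=0.05,
      target_modules=["gate_proj", "down_proj", "up_proj"],
      bias="none",
      task_type="CAUSAL_LM",
  )

  args = TrainingArguments(
      output_dir="t3q-ko-solar-sft-v2.0",  # placeholder
      num_train_epochs=1,
      per_device_train_batch_size=1,       # micro_batch
      gradient_accumulation_steps=16,      # batch_size // micro_batch
      learning_rate=4e-4,
      lr_scheduler_type="cosine",
      warmup_ratio=0.06,
      optim="adamw_torch",
      weight_decay=0.01,
      max_grad_norm=1.0,
  )

  # Placeholder dataset; the actual training data is not named in the card.
  dataset = load_dataset("json", data_files="train.jsonl")["train"]

  trainer = SFTTrainer(
      model=model,
      args=args,
      train_dataset=dataset,
      tokenizer=tokenizer,
      peft_config=peft_config,
      dataset_text_field="text",  # placeholder field name
      max_seq_length=4096,        # cutoff_len
      neftune_noise_alpha=5,      # NEFTune noise injection
  )
  trainer.train()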

Framework versions

  • Transformers 4.34.1
  • PyTorch 2.1.0+cu121
  • Datasets 2.13.0
  • Tokenizers 0.14.1
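
To reproduce this environment, the pinned versions above can be installed with pip; the CUDA 12.1 PyTorch wheel comes from the official PyTorch index.

  pip install transformers==4.34.1 datasets==2.13.0 tokenizers==0.14.1
  pip install torch==2.1.0 --index-url https://download.pytorch.org/whl/cu121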