
LoraConfig arguments

```python
from peft import LoraConfig

config = LoraConfig(
    r=32,
    lora_alpha=64,
    # target_modules=".decoder.(self_attn|encoder_attn).*(q_proj|v_proj)$",
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
)
```
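
As a sketch of how this config is typically applied (not taken verbatim from this repository): the base checkpoint name below is a placeholder, and the 8-bit loading mirrors the bitsandbytes settings listed under the training procedure.

```python
# Sketch only: wrapping an 8-bit base model with the LoraConfig above.
from transformers import AutoModelForSeq2SeqLM
from peft import get_peft_model, prepare_model_for_kbit_training

base_model = AutoModelForSeq2SeqLM.from_pretrained(
    "base-model-checkpoint",  # placeholder: substitute the actual base checkpoint
    load_in_8bit=True,        # matches the bitsandbytes config below
    device_map="auto",
)
base_model = prepare_model_for_kbit_training(base_model)  # stabilizes int8 training

model = get_peft_model(base_model, config)
model.print_trainable_parameters()  # only the q_proj/v_proj LoRA weights are trainable
```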

Training arguments

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="temp",  # change to a repo name of your choice
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,  # increase by 2x for every 2x decrease in batch size
    learning_rate=1e-3,
    warmup_steps=10,
    max_steps=400,  # 1500
    # evaluation_strategy="steps",
    fp16=True,
    per_device_eval_batch_size=8,
    # generation_max_length=128,
    eval_steps=100,
    logging_steps=25,
    remove_unused_columns=False,  # required as the PeftModel forward doesn't have the signature of the wrapped model's forward
    label_names=["label"],  # same reason as above
)
```
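
A minimal sketch of how these arguments could be wired into a training loop; the dataset and collator names are placeholders and are not part of this card.

```python
# Sketch only: hooking the PEFT-wrapped model and the arguments above into Trainer.
from transformers import Trainer

trainer = Trainer(
    model=model,                  # PEFT-wrapped model from the LoRA section above
    args=training_args,
    train_dataset=train_dataset,  # placeholder
    eval_dataset=eval_dataset,    # placeholder
    data_collator=data_collator,  # placeholder
)
trainer.train()
model.save_pretrained(training_args.output_dir)  # saves only the adapter weights
```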

Training procedure

The following bitsandbytes quantization config was used during training (an equivalent BitsAndBytesConfig is sketched after the list):

  • quant_method: bitsandbytes
  • load_in_8bit: True
  • load_in_4bit: False
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: fp4
  • bnb_4bit_use_double_quant: False
  • bnb_4bit_compute_dtype: float32
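
For reference, the same settings can be expressed as a BitsAndBytesConfig; this is a sketch, and the original script may simply have passed load_in_8bit=True when loading the base model.

```python
# Sketch only: the quantization settings listed above as a BitsAndBytesConfig.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```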

Framework versions

  • PEFT 0.5.0