Fine-tuning using Hugging Face
Is there any method for fine-tuning using the Hugging Face Trainer?
#https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments
Hello, have a look at a snippet that should work:
from transformers import TrainingArguments, Trainer

training_directory = "nli-few-shot/mnli-v2xl/"

train_args = TrainingArguments(
    output_dir=f'./results/{training_directory}',
    overwrite_output_dir=True,
    save_steps=10_000,
    save_total_limit=2,
    learning_rate=3e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    #warmup_steps=0,  # 1000,
    warmup_ratio=0.06,  # 0.1, 0.06
    weight_decay=0.1,  # 0.1,
    fp16=True,
    fp16_full_eval=True,
    seed=42,
    prediction_loss_only=True,
)
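These arguments are then passed to a Trainer. A minimal sketch of that step is below; the model, train_dataset, and eval_dataset are placeholders you would supply yourself, not part of the snippet above:

# Assumes `model`, `train_dataset`, and `eval_dataset` are already prepared.
trainer = Trainer(
    model=model,
    args=train_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
trainer.save_model(f'./results/{training_directory}')  # persist the final checkpoint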
Any particular issues you are facing?
I wanted to fine-tune this with LoRA, but the training failed with unknown issues.
Please at least attach the code and the error.
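For reference, LoRA on top of a Trainer setup like the one above is usually applied with the peft library. The following is only a minimal sketch, not your code: the base checkpoint, label count, and LoRA hyperparameters (r, lora_alpha, dropout, target_modules) are assumptions you would need to adapt to your model.

from transformers import AutoModelForSequenceClassification, Trainer
from peft import LoraConfig, TaskType, get_peft_model

# Hypothetical base model; replace with the checkpoint you are actually fine-tuning.
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v2-xlarge-mnli", num_labels=3
)

# LoRA adapter configuration; r/alpha/dropout are illustrative defaults,
# and target_modules names differ per architecture.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query_proj", "value_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity check: only adapter weights should be trainable

# Reuse the TrainingArguments and dataset from the earlier snippet.
trainer = Trainer(model=model, args=train_args, train_dataset=train_dataset)
trainer.train()

If training still fails, posting the full traceback along with your transformers and peft versions will make it much easier to diagnose.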