Training batch size to use
Gradient accumulation steps
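For illustration: with a per-device batch size of 4 and 8 gradient accumulation steps, gradients are accumulated over 8 micro-batches before each optimizer step, so the effective batch size is 4 × 8 = 32 per device.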
Disable gradient checkpointing
Use experiment tracking
Reference model to use for DPO when not using PEFT
Set the proportion of training allocated to warming up the learning rate at the start of training, which can improve model stability and performance. Default is 0.1.
Choose the optimizer algorithm for training the model. Different optimizers can affect the training speed and model
performance. 'adamw_torch' is used by default.
Select the learning rate scheduler to adjust the learning rate over the course of training. 'linear' decreases the learning rate linearly from its initial value; try 'cosine' for a cosine annealing schedule. Default is 'linear'.
Define the weight decay rate for regularization, which helps prevent overfitting by penalizing large weights. Default is 0.0.
Set the maximum norm for gradient clipping, which is critical for preventing gradients from exploding during
backpropagation. Default is 1.0.
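As a rough sketch, these optimization settings correspond to the following `transformers.TrainingArguments` fields; the values shown mirror the defaults above, and the mapping onto this tool's own flags is an assumption:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",           # placeholder output directory
    optim="adamw_torch",        # optimizer algorithm
    lr_scheduler_type="linear", # try "cosine" for cosine annealing
    warmup_ratio=0.1,           # proportion of training spent warming up
    weight_decay=0.0,           # penalize large weights to reduce overfitting
    max_grad_norm=1.0,          # clip gradients to this norm
)
```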
Toggle whether to automatically append an end-of-sequence (EOS) token to each text, which signals where one training example ends. Only used for the `default` trainer.
Specify the block size for processing sequences. This is the maximum sequence length, i.e. the length of one block of text. Setting it to -1 determines the block size automatically. Default is -1.
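A minimal sketch of the usual packing approach when the block size is fixed. The `group_texts` helper is illustrative, modeled on the Hugging Face language-modeling examples; `token_ids` is assumed to be one long stream of token ids, typically built by joining the original texts with the EOS token:

```python
def group_texts(token_ids, block_size=1024):
    # Split a long stream of token ids into fixed-size blocks;
    # a trailing remainder shorter than block_size is dropped.
    total = (len(token_ids) // block_size) * block_size
    return [token_ids[i : i + block_size] for i in range(0, total, block_size)]

blocks = group_texts(list(range(10_000)), block_size=1024)  # 9 blocks of 1024
```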
Set the 'r' parameter for Low-Rank Adaptation (LoRA). Default is 16.
Specify the 'alpha' parameter for LoRA. Default is 32.
Set the dropout rate within the LoRA layers to help prevent overfitting during adaptation. Default is 0.05.
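Together, these map naturally onto a `peft.LoraConfig`; a minimal sketch using the defaults described above:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                        # rank of the low-rank update matrices
    lora_alpha=32,               # scaling factor applied to the update
    lora_dropout=0.05,           # dropout inside the LoRA layers
    target_modules="all-linear", # see the target-modules entry below
)
```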
Determine how often to log training progress in terms of steps. Setting it to -1 determines the logging steps automatically.
Choose how frequently to evaluate the model's performance. Default is 'epoch', meaning at the end of each training epoch.
Limit the total number of saved model checkpoints to manage disk usage effectively. Default is to keep only the latest checkpoint.
Automatically determine the optimal batch size based on system capabilities to maximize efficiency.
Choose the precision mode for training to optimize performance and memory usage. Options are 'fp16', 'bf16', or None for full precision. Default is None.
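These evaluation, checkpointing, batch-size, and precision settings likewise have `TrainingArguments` counterparts; a sketch with illustrative values (`eval_strategy` is named `evaluation_strategy` in older transformers releases):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    logging_steps=10,           # or leave unset for automatic logging
    eval_strategy="epoch",      # evaluate at the end of each epoch
    save_total_limit=1,         # keep only the latest checkpoint
    auto_find_batch_size=True,  # probe for the largest batch size that fits
    bf16=True,                  # or fp16=True; omit both for full precision
)
```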
Choose the quantization level to reduce model size and potentially increase inference speed. Options include 'int4', 'int8', or None. Enabling quantization requires PEFT, since quantized base weights cannot be updated directly.
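A sketch of 4-bit loading via `transformers.BitsAndBytesConfig`; "model-id" is a placeholder, and 'int8' would use `load_in_8bit=True` instead:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 'int4' quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype for 4-bit layers
)
model = AutoModelForCausalLM.from_pretrained(
    "model-id", quantization_config=bnb_config  # "model-id" is a placeholder
)
```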
Set the maximum length for the model to process in a single batch, which can affect both performance and memory usage. Default is 1024.
Specify the maximum length for prompts used in training, particularly relevant for tasks requiring initial contextual input.
Used only for the `orpo` trainer.
Maximum completion length to use. For the `orpo` trainer, this applies to encoder-decoder models only.
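In trl, these length limits appear as `ORPOConfig` fields; a sketch with assumed values (the split between prompt and completion lengths is illustrative, not this tool's defaults):

```python
from trl import ORPOConfig

orpo_args = ORPOConfig(
    output_dir="out",
    max_length=1024,            # overall sequence cap
    max_prompt_length=512,      # prompt portion
    max_completion_length=512,  # completion portion (encoder-decoder models)
)
```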
Trainer type to use
Identify specific modules within the model architecture to target with adaptations or optimizations, such as LoRA. Comma-separated list of module names. Default is 'all-linear'.
Use this flag to merge the PEFT adapter into the base model
Use Flash Attention 2
Beta for DPO trainer
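In trl's DPO implementation, beta is a `DPOConfig` field; a sketch with a commonly used value (0.1 is an assumption, not necessarily this tool's default):

```python
from trl import DPOConfig

# beta controls how far the trained policy may drift from the reference
# model: smaller values keep it closer to the reference.
dpo_args = DPOConfig(output_dir="out", beta=0.1)
```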
Apply a specific template for chat-based interactions, with options including 'tokenizer', 'chatml', 'zephyr', or None. This
setting can shape the model's conversational behavior.
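A minimal sketch of the 'tokenizer' option using `apply_chat_template` from transformers; "model-id" is a placeholder, and named templates such as 'chatml' or 'zephyr' would instead impose a fixed message format:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("model-id")  # placeholder model id
messages = [{"role": "user", "content": "Hello!"}]
text = tok.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
```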
Specify the padding direction for sequences, critical for models sensitive to input alignment. Options include 'left', 'right', or None.