class accelerate.DeepSpeedPlugin

( hf_ds_config: typing.Any = None, gradient_accumulation_steps: int = None, gradient_clipping: float = None, zero_stage: int = None, is_train_batch_min: bool = True, offload_optimizer_device: str = None, offload_param_device: str = None, offload_optimizer_nvme_path: str = None, offload_param_nvme_path: str = None, zero3_init_flag: bool = None, zero3_save_16bit_model: bool = None )
This plugin is used to integrate DeepSpeed.
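As a short usage sketch (the concrete values are illustrative, not recommendations), a plugin can be built in code and handed to `Accelerator` instead of writing a `ds_config.json` file:

```python
from accelerate import Accelerator, DeepSpeedPlugin

# Configure DeepSpeed ZeRO stage 2 directly from Python.
deepspeed_plugin = DeepSpeedPlugin(
    zero_stage=2,
    gradient_accumulation_steps=2,
    gradient_clipping=1.0,
    offload_optimizer_device="none",
)
accelerator = Accelerator(deepspeed_plugin=deepspeed_plugin)
```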
deepspeed_config_process

( prefix = '', mismatches = None, config = None, must_match = True, **kwargs )

Process the DeepSpeed config with the values from the kwargs.
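A sketch of typical use, inferred from the signature above: entries in the DeepSpeed config that are set to `"auto"` are filled in from matching kwargs, with nested keys addressed by dotted names following the standard DeepSpeed config layout. The file name and values below are placeholders.

```python
from accelerate import DeepSpeedPlugin

# Assumes ds_config.json exists and contains "auto" placeholders.
plugin = DeepSpeedPlugin(hf_ds_config="ds_config.json")

# Fill "auto" entries with concrete values; nested keys use dotted
# names such as "zero_optimization.reduce_bucket_size". With
# must_match=True (the default), a kwarg that conflicts with an
# explicit config value raises an error instead of silently diverging.
plugin.deepspeed_config_process(
    train_micro_batch_size_per_gpu=16,
    gradient_accumulation_steps=2,
)
```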
class accelerate.utils.DummyOptim

( params, lr = 0.001, weight_decay = 0, **kwargs )

Parameters

params (iterable) — Iterable of parameters to optimize or dicts defining parameter groups.
lr (float, defaults to 0.001) — Learning rate.
weight_decay (float, defaults to 0) — Weight decay.
**kwargs — Other arguments.

Dummy optimizer that holds model parameters or param groups; it is primarily used to follow the conventional training loop when the optimizer config is specified in the DeepSpeed config file.
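A minimal sketch, assuming accelerate has been launched with a DeepSpeed config whose optimizer section is set (e.g. an AdamW block with `"auto"` values) and whose batch size is fixed: the dummy optimizer stands in until `accelerator.prepare` swaps in the optimizer DeepSpeed builds. The toy model is a placeholder.

```python
import torch
from accelerate import Accelerator
from accelerate.utils import DummyOptim

accelerator = Accelerator()  # assumes DeepSpeed enabled via `accelerate config`
model = torch.nn.Linear(8, 2)  # toy model for illustration

# Stand-in optimizer; the real one is created by DeepSpeed from the
# "optimizer" section of the DeepSpeed config file during prepare().
optimizer = DummyOptim(params=model.parameters(), lr=3e-4)
model, optimizer = accelerator.prepare(model, optimizer)
```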
class accelerate.utils.DummyScheduler

( optimizer, total_num_steps = None, warmup_num_steps = 0, **kwargs )

Parameters

optimizer (torch.optim.Optimizer) — The optimizer to wrap.
total_num_steps (int, optional) — Total number of training steps.
warmup_num_steps (int, optional, defaults to 0) — Number of warmup steps.
**kwargs — Other arguments.

Dummy scheduler that wraps an optimizer; it is primarily used to follow the conventional training loop when the scheduler config is specified in the DeepSpeed config file.
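Continuing the sketch above (step counts are illustrative), the dummy scheduler is paired with the dummy optimizer and replaced by the DeepSpeed-built scheduler in `prepare()`:

```python
from accelerate.utils import DummyScheduler

# Stand-in scheduler; the real one comes from the "scheduler" section
# of the DeepSpeed config file.
scheduler = DummyScheduler(optimizer, total_num_steps=1000, warmup_num_steps=100)
scheduler = accelerator.prepare(scheduler)
```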
class accelerate.utils.DeepSpeedEngineWrapper

( engine )

Parameters

engine (deepspeed.runtime.engine.DeepSpeedEngine) — The DeepSpeed engine to wrap.

Internal wrapper for deepspeed.runtime.engine.DeepSpeedEngine. This is used to follow the conventional training loop.
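A rough sketch of the behaviour such a wrapper provides (an illustrative stand-in, not the actual accelerate implementation): `accelerator.backward(loss)` delegates to the engine, which handles mixed precision, gradient accumulation, clipping, the optimizer step, and zeroing gradients in one place.

```python
class EngineWrapperSketch:
    """Illustrative stand-in, not the actual accelerate implementation."""

    def __init__(self, engine):
        self.engine = engine  # a deepspeed.runtime.engine.DeepSpeedEngine

    def backward(self, loss, **kwargs):
        # DeepSpeed scales the loss for mixed precision and runs backprop.
        self.engine.backward(loss, **kwargs)
        # engine.step() handles gradient accumulation boundaries, clipping,
        # the optimizer step, zeroing gradients, and the LR scheduler step.
        self.engine.step()
```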
class accelerate.utils.DeepSpeedOptimizerWrapper

( optimizer )

Parameters

optimizer (torch.optim.Optimizer) — The optimizer to wrap.

Internal wrapper around a deepspeed optimizer.
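Because the DeepSpeed engine performs the optimizer step and gradient zeroing itself (see the engine wrapper above), a wrapper like this can turn those calls into no-ops so an unmodified training loop still runs. A minimal sketch of that idea, not the actual implementation:

```python
class OptimizerWrapperSketch:
    """Illustrative stand-in, not the actual accelerate implementation."""

    def __init__(self, optimizer):
        self.optimizer = optimizer

    def zero_grad(self, set_to_none=None):
        pass  # handled by the DeepSpeed engine inside engine.step()

    def step(self):
        pass  # handled by the DeepSpeed engine inside engine.step()
```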
class accelerate.utils.DeepSpeedSchedulerWrapper

( scheduler, optimizers )

Parameters

scheduler (torch.optim.lr_scheduler.LambdaLR) — The scheduler to wrap.
optimizers (one or a list of torch.optim.Optimizer) — The optimizer(s) associated with the wrapped scheduler.

Internal wrapper around a deepspeed scheduler.
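Putting the pieces together, a hedged end-to-end sketch with a toy model and data, assuming accelerate has been configured with a DeepSpeed config whose optimizer and scheduler sections are set: after `prepare()`, the dummy objects are replaced by the wrappers above, and the loop reads like ordinary PyTorch.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator
from accelerate.utils import DummyOptim, DummyScheduler

accelerator = Accelerator()  # assumes DeepSpeed enabled via `accelerate config`
model = torch.nn.Linear(8, 2)
dataset = TensorDataset(torch.randn(64, 8), torch.randint(0, 2, (64,)))
dataloader = DataLoader(dataset, batch_size=4)

optimizer = DummyOptim(model.parameters(), lr=3e-4)
scheduler = DummyScheduler(optimizer, total_num_steps=16, warmup_num_steps=2)
model, optimizer, dataloader, scheduler = accelerator.prepare(
    model, optimizer, dataloader, scheduler
)

for inputs, targets in dataloader:
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    accelerator.backward(loss)  # drives the wrapped DeepSpeed engine
    optimizer.step()            # no-op under DeepSpeed, kept for loop parity
    scheduler.step()
    optimizer.zero_grad()
```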