The following objects can be passed to the main Accelerator to customize how some PyTorch objects related to distributed training or mixed precision are created.
class accelerate.AutocastKwargs

( enabled: bool = True cache_enabled: bool = None )

Use this object in your Accelerator to customize how torch.autocast behaves. Please refer to the documentation of this context manager for more information on each argument.
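As a minimal sketch, disabling the autocast op cache through this handler (the `cache_enabled` value is forwarded straight to torch.autocast; the value used here is illustrative):

```python
from accelerate import Accelerator
from accelerate.utils import AutocastKwargs

# Forwarded to torch.autocast; here we turn off its op-level cache.
autocast_kwargs = AutocastKwargs(cache_enabled=False)
accelerator = Accelerator(mixed_precision="fp16", kwargs_handlers=[autocast_kwargs])
```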
class accelerate.DistributedDataParallelKwargs

( dim: int = 0 broadcast_buffers: bool = True bucket_cap_mb: int = 25 find_unused_parameters: bool = False check_reduction: bool = False gradient_as_bucket_view: bool = False static_graph: bool = False )
Use this object in your Accelerator to customize how your model is wrapped in a torch.nn.parallel.DistributedDataParallel. Please refer to the documentation of this wrapper for more information on each argument.

gradient_as_bucket_view is only available in PyTorch 1.7.0 and later versions.

static_graph is only available in PyTorch 1.11.0 and later versions.
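As a sketch, enabling `find_unused_parameters` for a model whose forward pass does not touch every registered parameter:

```python
from accelerate import Accelerator
from accelerate.utils import DistributedDataParallelKwargs

# Forwarded to torch.nn.parallel.DistributedDataParallel when the model is prepared.
ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])
```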
class accelerate.FP8RecipeKwargs

( backend: Literal = 'MSAMP' opt_level: Literal = 'O2' margin: int = 0 interval: int = 1 fp8_format: Literal = 'E4M3' amax_history_len: int = 1 amax_compute_algo: Literal = 'most_recent' override_linear_precision: Tuple = (False, False, False) )
Parameters

- backend (str, optional, defaults to "msamp"): Which FP8 engine to use. Must be one of "msamp" (MS-AMP) or "te" (TransformerEngine).
- margin (int, optional, defaults to 0): The margin to use for the gradient scaling.
- interval (int, optional, defaults to 1): The interval to use for how often the scaling factor is recomputed.
- fp8_format (str, optional, defaults to "E4M3"): The format to use for the FP8 recipe. Must be one of E4M3 or HYBRID.
- amax_history_len (int, optional, defaults to 1): The length of the history to use for the scaling factor computation.
- amax_compute_algo (str, optional, defaults to "most_recent"): The algorithm to use for the scaling factor computation. Must be one of max or most_recent.
- override_linear_precision (tuple of three bool, optional, defaults to (False, False, False)): Whether or not to execute fprop, dgrad, and wgrad GEMMs in higher precision.
- opt_level (str, one of O1 or O2, defaults to O2): What level of 8-bit collective communication should be used with MS-AMP. In general:
  - O1: weight gradients and all_reduce communications are done in fp8, reducing GPU memory usage and communication bandwidth.
  - O2: first-order optimizer states are in 8-bit, and second-order states are in FP16. Only available when using Adam or AdamW. This maintains accuracy and can potentially save the highest memory.
  - O3: specifically for DeepSpeed, stores the weights and master weights of models in FP8. If fp8 is selected and DeepSpeed is enabled, this level will be used by default. (Not available currently.)

Use this object in your Accelerator to customize the initialization of the recipe for FP8 mixed precision training with transformer-engine or ms-amp.

For more information on the transformer-engine args, please refer to the API documentation.

For more information on the ms-amp args, please refer to the Optimization Level documentation.
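As a sketch, selecting the TransformerEngine backend with a hybrid format and a longer amax history (the values here are illustrative, not recommendations):

```python
from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs

# Forwarded to TransformerEngine's delayed-scaling recipe when backend="te".
fp8_kwargs = FP8RecipeKwargs(backend="te", fp8_format="HYBRID", amax_history_len=16)
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[fp8_kwargs])
```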
class accelerate.GradScalerKwargs

( init_scale: float = 65536.0 growth_factor: float = 2.0 backoff_factor: float = 0.5 growth_interval: int = 2000 enabled: bool = True )
Use this object in your Accelerator to customize the behavior of mixed precision, specifically how the torch.cuda.amp.GradScaler used is created. Please refer to the documentation of this scaler for more information on each argument.

GradScaler is only available in PyTorch 1.5.0 and later versions.
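As a sketch, making loss-scale growth more conservative than the defaults shown in the signature above:

```python
from accelerate import Accelerator
from accelerate.utils import GradScalerKwargs

# Forwarded to torch.cuda.amp.GradScaler when fp16 mixed precision is enabled.
scaler_kwargs = GradScalerKwargs(growth_factor=1.5, growth_interval=4000)
accelerator = Accelerator(mixed_precision="fp16", kwargs_handlers=[scaler_kwargs])
```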
class accelerate.InitProcessGroupKwargs

( backend: Optional = 'nccl' init_method: Optional = None timeout: timedelta = datetime.timedelta(seconds=1800) )

Use this object in your Accelerator to customize the initialization of the distributed processes. Please refer to the documentation of torch.distributed.init_process_group for more information on each argument.
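As a sketch, raising the collective-communication timeout, which can help when ranks spend a long time in uneven work before their first synchronization:

```python
from datetime import timedelta

from accelerate import Accelerator
from accelerate.utils import InitProcessGroupKwargs

# Forwarded to torch.distributed.init_process_group; the default timeout is 1800 seconds.
init_kwargs = InitProcessGroupKwargs(timeout=timedelta(hours=1))
accelerator = Accelerator(kwargs_handlers=[init_kwargs])
```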
class accelerate.utils.KwargsHandler

Internal mixin that implements a to_kwargs() method for a dataclass.

to_kwargs()

Returns a dictionary containing the attributes with values different from the defaults of this class.
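As a sketch of that behavior, using one of the handlers above:

```python
from accelerate.utils import DistributedDataParallelKwargs

handler = DistributedDataParallelKwargs(find_unused_parameters=True)
# Only attributes that differ from the dataclass defaults are included.
print(handler.to_kwargs())  # {'find_unused_parameters': True}
```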