The IPUTrainer class provides an API similar to the 🤗 Transformers Trainer class to perform training, evaluation and prediction on Graphcore's IPUs. It is the class used in all the example scripts.
Compared to the 🤗 Transformers Trainer class, to instantiate IPUTrainer you need to create:
- IPUTrainingArguments, which allows you to customize the behaviour of the trainer
- IPUConfig, which defines IPU-specific parameters

There is an equivalent IPUSeq2SeqTrainer class for seq2seq models, which requires you to define:
- IPUSeq2SeqTrainingArguments
- IPUConfig
Most example scripts in /examples and Jupyter notebooks in /notebooks use IPUTrainer and IPUTrainingArguments.
To see how to use IPUSeq2SeqTrainer and IPUSeq2SeqTrainingArguments, look at:
- Jupyter notebooks
- Example scripts
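For orientation, here is a minimal fine-tuning sketch. The checkpoint names, the toy dataset and the Graphcore/bert-base-ipu IPU config are illustrative placeholders rather than the only supported values:

```python
# A minimal sketch, assuming the optimum-graphcore package and a toy dataset.
from datasets import Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# IPU-specific parameters come from an IPUConfig; "Graphcore/bert-base-ipu"
# is assumed to be one of Graphcore's published configs on the Hub.
ipu_config = IPUConfig.from_pretrained("Graphcore/bert-base-ipu")

# Tiny illustrative dataset so the sketch is self-contained.
raw = Dataset.from_dict({"text": ["great film", "terrible film"], "label": [1, 0]})
train_dataset = raw.map(
    lambda ex: tokenizer(ex["text"], padding="max_length", max_length=32, truncation=True)
)

training_args = IPUTrainingArguments(
    output_dir="./outputs",
    per_device_train_batch_size=1,
    num_train_epochs=3,
)

trainer = IPUTrainer(
    model=model,
    ipu_config=ipu_config,
    args=training_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```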
class IPUTrainer
( model: typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module] = None ipu_config: IPUConfig = None args: IPUTrainingArguments = None data_collator: typing.Optional[DataCollator] = None eval_data_collator: typing.Optional[DataCollator] = None train_dataset: typing.Optional[torch.utils.data.dataset.Dataset] = None eval_dataset: typing.Optional[torch.utils.data.dataset.Dataset] = None tokenizer: typing.Optional[transformers.tokenization_utils_base.PreTrainedTokenizerBase] = None model_init: typing.Callable[[], transformers.modeling_utils.PreTrainedModel] = None compute_metrics: typing.Union[typing.Callable[[transformers.trainer_utils.EvalPrediction], typing.Dict], NoneType] = None callbacks: typing.Optional[typing.List[transformers.trainer_callback.TrainerCallback]] = None optimizers: typing.Tuple[torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None) preprocess_logits_for_metrics: typing.Union[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor], NoneType] = None force_to_pipelined: bool = False )
Parameters
model (transformers.PreTrainedModel
or torch.nn.Module
, optional) —
The model to train, evaluate or use for predictions. If not provided, a model_init
function must be passed.
IPUTrainer is optimized to work with the transformers.PreTrainedModel
class provided by the 🤗 Transformers
library. You can still use your own models defined as torch.nn.Module
as long as they work in the same way as
the 🤗 Transformers models.
ipu_config (IPUConfig) —
The IPUConfig object defining the IPU-specific parameters.
args (IPUTrainingArguments, optional) —
The arguments to tweak for training. Will default to a basic instance of IPUTrainingArguments with output_dir set to a directory named tmp_trainer in the current directory if not provided.
data_collator (transformers.data.data_collator.DataCollator
, optional) —
The function to use to form a batch from a list of elements of
train_dataset
or eval_dataset
. Will default to
transformers.data.default_data_collator
if no tokenizer
is
provided, or an instance of
DataCollatorWithPadding
otherwise.
train_dataset (torch.utils.data.Dataset
or torch.utils.data.IterableDataset
, optional) —
The dataset to use for training. If it is a Dataset
dataset, the columns not accepted by the
model.forward()
method are automatically removed.
Note that if it’s a torch.utils.data.IterableDataset
dataset with
some randomization and you are training in a distributed fashion,
your iterable dataset should either use an internal attribute
generator
that is a torch.Generator
object for the randomization that
must be identical on all processes (and the trainer will manually
set the seed of this generator
at each epoch) or have a
set_epoch()
method that internally sets the seed of the RNGs used.
eval_dataset (torch.utils.data.Dataset or Dict[str, torch.utils.data.Dataset], optional) —
The dataset to use for evaluation. If it is a Dataset dataset, the columns not accepted by the
model.forward()
method are automatically removed. If it is a dictionary, it will evaluate on each
dataset prepending the dictionary key to the metric name.
tokenizer (transformers.PreTrainedTokenizerBase
, optional) —
The tokenizer used to preprocess the data. If provided, it will be
used to automatically pad the inputs to the maximum length when
batching inputs, and it will be saved along the model to make it
easier to rerun an interrupted training or reuse the fine-tuned
model.
model_init (Callable[[], transformers.PreTrainedModel]
, optional) —
A function that instantiates the model to be used. If provided, each call to IPUTrainer.train() will start
from a new instance of the model as given by this function.
The function may have no arguments, or a single argument containing the optuna/Ray Tune/SigOpt trial object, to be able to choose different architectures according to hyperparameters (such as layer count, sizes of inner layers and dropout probabilities). Note: this feature is not supported for now.
compute_metrics (Callable[[~transformers.trainer_utils.EvalPrediction], Dict]
, optional) —
The function that will be used to compute metrics at evaluation. Must take a
EvalPrediction
and return a dictionary of strings to metric values.
callbacks (List[transformers.trainer_callback.TrainerCallback]
, optional) —
A list of callbacks to customize the training loop. Will add those to the list of default callbacks
detailed in here.
If you want to remove one of the default callbacks used, use the Trainer.remove_callback
method.
optimizers (Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR]
, optional) — A tuple
containing the optimizer and the scheduler to use. Will default to an instance of poptorch.AdamW
on your model
and a scheduler given by get_linear_schedule_with_warmup
controlled by args
.
preprocess_logits_for_metrics (Callable[[torch.Tensor, torch.Tensor], torch.Tensor]
, optional) —
A function that preprocesses the logits right before caching them at each evaluation step. Must take two
tensors, the logits and the labels, and return the logits once processed as desired. The modifications made
by this function will be reflected in the predictions received by compute_metrics
.
Note that the labels (second parameter) will be None
if the dataset does not have them.
IPUTrainer
is a simple but feature-complete training and evaluation
loop on Graphcore IPUs for PyTorch, optimized for 🤗 Transformers.
add_callback
( callback )
Adds a callback to the current list of ~transformer.TrainerCallback
.
compile_model
( model: PoplarExecutor sample_batch: typing.Union[typing.Dict[str, torch.Tensor], typing.Tuple[torch.Tensor]] log: bool = False )
Parameters
model (poptorch.PoplarExecutor
) —
The model to compile (already wrapped).
sample_batch (Dict[str, torch.Tensor]
or Tuple[torch.Tensor]
) —
The inputs to use for the compilation. This will set the input shapes that the compiled model can accept.
log (bool
, optional, defaults to False
) —
If True
, logs that the compilation is in progress.
Compiles the model with PopTorch.
compute_loss
( model inputs return_outputs = False )
Computes the loss on a batch of training inputs.
By default, all models return the loss in the first element.
Subclass and override for custom behavior.
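As a sketch of the override mechanism (the subclass below simply mirrors the default behaviour described above; any custom weighting or regularization would go in its place):

```python
from optimum.graphcore import IPUTrainer

class MyIPUTrainer(IPUTrainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        # Forward pass; as described above, models return the loss as the
        # first element of their outputs.
        outputs = model(**inputs)
        loss = outputs["loss"] if isinstance(outputs, dict) else outputs[0]
        return (loss, outputs) if return_outputs else loss
```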
create_model_card
( language: typing.Optional[str] = None license: typing.Optional[str] = None tags: typing.Union[str, typing.List[str], NoneType] = None model_name: typing.Optional[str] = None finetuned_from: typing.Optional[str] = None tasks: typing.Union[str, typing.List[str], NoneType] = None dataset_tags: typing.Union[str, typing.List[str], NoneType] = None dataset: typing.Union[str, typing.List[str], NoneType] = None dataset_args: typing.Union[str, typing.List[str], NoneType] = None )
Parameters
language (str
, optional) —
The language of the model (if applicable)
license (str
, optional) —
The license of the model. Will default to the license of the pretrained model used, if the original
model given to IPUTrainer comes from a repo on the Hub.
tags (str
or List[str]
, optional) —
Some tags to be included in the metadata of the model card.
model_name (str
, optional) —
The name of the model.
finetuned_from (str
, optional) —
The name of the model used to fine-tune this one (if applicable). Will default to the name of the repo
of the original model given to IPUTrainer (if it comes from the Hub).
tasks (str
or List[str]
, optional) —
One or several task identifiers, to be included in the metadata of the model card.
dataset_tags (str
or List[str]
, optional) —
One or several dataset tags, to be included in the metadata of the model card.
dataset (str
or List[str]
, optional) —
One or several dataset identifiers, to be included in the metadata of the model card.
dataset_args (str
or List[str]
, optional) —
One or several dataset arguments, to be included in the metadata of the model card.
Creates a draft of a model card using the information available to IPUTrainer.
create_optimizer
Sets up the optimizer.
We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the
trainer’s init through optimizers
, or subclass and override this method in a subclass.
create_optimizer_and_scheduler
Sets up the optimizer and the learning rate scheduler.
We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the
trainer’s init through optimizers
, or subclass and override this method (or create_optimizer
and/or
create_scheduler
) in a subclass.
create_scheduler
( num_training_steps: int optimizer: Optimizer = None )
Sets up the scheduler. The optimizer of the trainer must have been set up either before this method is called or is passed as an argument.
evaluate
( eval_dataset: typing.Optional[torch.utils.data.dataset.Dataset] = None ignore_keys: typing.Optional[typing.List[str]] = None metric_key_prefix: str = 'eval' )
Parameters
eval_dataset (Dataset
, optional) —
Pass a dataset if you wish to override self.eval_dataset
. If it is a Dataset dataset, the columns
not accepted by the model.forward()
method are automatically removed. It must implement the __len__
method.
ignore_keys (List[str]
, optional) —
A list of keys in the output of your model (if it is a dictionary) that should be ignored when
gathering predictions.
metric_key_prefix (str
, optional, defaults to "eval"
) —
An optional prefix to be used as the metrics key prefix. For example the metric “bleu” will be named
“eval_bleu” if the prefix is “eval” (default)
Runs an evaluation and returns metrics.
The calling script will be responsible for providing a method to compute the metrics, as they are task-dependent
(pass it to the init compute_metrics
argument).
You can also subclass and override this method to inject custom behavior.
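For example, a task-dependent accuracy metric passed through the compute_metrics init argument (continuing the earlier sketch; eval_dataset is a placeholder for your tokenized evaluation data):

```python
import numpy as np

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {"accuracy": float((predictions == labels).mean())}

trainer = IPUTrainer(
    model=model,
    ipu_config=ipu_config,
    args=training_args,
    eval_dataset=eval_dataset,
    compute_metrics=compute_metrics,
)
metrics = trainer.evaluate()
# Keys carry the metric_key_prefix, e.g. "eval_accuracy".
print(metrics["eval_accuracy"])
```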
evaluation_loop
( dataloader: DataLoader description: str prediction_loss_only: typing.Optional[bool] = None ignore_keys: typing.Optional[typing.List[str]] = None metric_key_prefix: str = 'eval' )
Parameters
dataloader (poptorch.DataLoader
) —
The dataset to be used.
description (str
) —
The description of what is being run.
prediction_loss_only (bool
) —
If True
, only returns the loss. If False
, returns loss,
logits and labels (if present).
ignore_keys (List[str]
, optional) —
A list of keys in the output of your model (if it is a
dictionary) that should be ignored when gathering predictions.
metric_key_prefix (str
, optional, defaults to "eval"
) —
An optional prefix to be used as the metrics key prefix. For
example the metric “bleu” will be named “eval_bleu” if the
prefix is “eval” (default).
Prediction/evaluation loop, shared by IPUTrainer.evaluate() and IPUTrainer.predict().
Works both with or without labels.
floating_point_ops
(
inputs: typing.Dict[str, typing.Union[torch.Tensor, typing.Any]]
)
→
int
For models that inherit from transformers.PreTrainedModel
, uses that class’s floating_point_ops
method to compute the number of
floating point operations for every backward and every forward pass.
If using another model, either implement a floating_point_ops
method in the model or subclass and override this method.
get_eval_dataloader
( eval_dataset: typing.Optional[torch.utils.data.dataset.Dataset] = None )
Parameters
eval_dataset (torch.utils.data.Dataset
, optional) —
If provided, will override self.eval_dataset
. If it is a Dataset dataset, the columns not accepted
by the model.forward()
method are automatically removed. It must implement __len__
.
Returns the evaluation poptorch.DataLoader
.
Subclass and override this method if you want to inject some custom behavior.
get_test_dataloader
( test_dataset: Dataset )
Parameters
test_dataset (torch.utils.data.Dataset
, optional) —
The test dataset to use. If it is a Dataset dataset, the columns not accepted by the
model.forward()
method are automatically removed. It must implement __len__
.
Returns the test poptorch.DataLoader
.
Subclass and override this method if you want to inject some custom behavior.
get_train_dataloader
Returns the training poptorch.DataLoader
.
Will not use a sampler if train_dataset
does not implement __len__
and will use a random sampler (adapted to distributed
training if necessary) otherwise.
Subclass and override this method if you want to inject some custom behavior.
init_git_repo
( at_init: bool = False )
Initializes a Git repo in self.args.hub_model_id
.
log
( logs: typing.Dict[str, float] )
Log logs
on the various objects watching the training.
Subclass and override this method to inject custom behavior.
log_metrics
( split metrics )
Log metrics in a specially formatted way
Under distributed environment this is done only for a process with rank 0.
Notes on memory reports:
In order to get memory usage report you need to install psutil
. You can do that with pip install psutil
.
Now when this method is run, you will see a report that will include:
init_mem_cpu_alloc_delta = 1301MB
init_mem_cpu_peaked_delta = 154MB
init_mem_gpu_alloc_delta = 230MB
init_mem_gpu_peaked_delta = 0MB
train_mem_cpu_alloc_delta = 1345MB
train_mem_cpu_peaked_delta = 0MB
train_mem_gpu_alloc_delta = 693MB
train_mem_gpu_peaked_delta = 7MB
Understanding the reports:
- The first segment, e.g. train__, tells you which stage the metrics are for. Reports starting with init_ will be added to the first stage that gets run, so that if only evaluation is run, the memory usage for __init__ will be reported along with the eval_ metrics.
- The third segment, either cpu or gpu, tells you whether it's the general RAM or the gpu0 memory metric.
- *_alloc_delta is the difference in the used/allocated memory counter between the end and the start of the stage; it can be negative if a function released more memory than it allocated.
- *_peaked_delta is any extra memory that was consumed and then freed, relative to the current allocated memory counter; it is never negative. When you look at the metrics of any stage, you add up alloc_delta + peaked_delta to know how much memory was needed to complete that stage.

The reporting happens only for the process of rank 0 and gpu 0 (if there is a gpu). Typically this is enough since the main process does the bulk of the work, but it may not be quite so if model parallel is used, as other GPUs may then use a different amount of gpu memory. This is also not the same under DataParallel, where gpu0 may require much more memory than the rest since it stores the gradient and optimizer states for all participating GPUs. Perhaps in the future these reports will evolve to measure those too.
The CPU RAM metric measures RSS (Resident Set Size), which includes both the memory that is unique to the process and the memory shared with other processes. It is important to note that it does not include swapped-out memory, so the reports could be imprecise.
The CPU peak memory is measured using a sampling thread. Due to python’s GIL it may miss some of the peak memory if
that thread didn’t get a chance to run when the highest memory was used. Therefore this report can be less than
reality. Using tracemalloc
would have reported the exact peak memory, but it doesn’t report memory allocations
outside of python. So if some C++ CUDA extension allocated its own memory it won’t be reported. And therefore it
was dropped in favor of the memory sampling approach, which reads the current process memory usage.
The GPU allocated and peak memory reporting is done with torch.cuda.memory_allocated()
and
torch.cuda.max_memory_allocated()
. This metric reports only “deltas” for pytorch-specific allocations, as
torch.cuda
memory management system doesn’t track any memory allocated outside of pytorch. For example, the very
first cuda call typically loads CUDA kernels, which may take from 0.5 to 2GB of GPU memory.
Note that this tracker doesn’t account for memory allocations outside of Trainer
’s __init__
, train
,
evaluate
and predict
calls.
Because evaluation
calls may happen during train
, we can’t handle nested invocations because
torch.cuda.max_memory_allocated
is a single counter, so if it gets reset by a nested eval call, train
’s tracker
will report incorrect info. If this pytorch issue gets resolved
it will be possible to change this class to be re-entrant. Until then we will only track the outer level of
train
, evaluate
and predict
methods. Which means that if eval
is called during train
, it’s the latter
that will account for its memory usage and that of the former.
This also means that if any other tool that is used along the Trainer
calls
torch.cuda.reset_peak_memory_stats
, the gpu peak memory stats could be invalid. And the Trainer
will disrupt
the normal behavior of any such tools that rely on calling torch.cuda.reset_peak_memory_stats
themselves.
For best performance you may want to consider turning the memory profiling off for production runs.
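The usual pattern from the example scripts combines log_metrics with save_metrics and save_state after a run:

```python
train_result = trainer.train()
metrics = train_result.metrics

trainer.log_metrics("train", metrics)   # human-readable, formatted output
trainer.save_metrics("train", metrics)  # raw values to train_results.json
trainer.save_state()
```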
metrics_format
(
metrics: typing.Dict[str, float]
)
→
metrics (Dict[str, float]
)
Reformat Trainer metrics values to a human-readable format
num_examples
Returns the number of samples in a poptorch.DataLoader object by accessing its dataset. When poptorch.DataLoader.dataset does not exist or has no length, estimates as best it can.
pop_callback
(
callback
)
→
~transformer.TrainerCallback
Parameters
callback (type
or ~transformer.TrainerCallback
) —
A ~transformer.TrainerCallback
class or an instance of ~transformer.TrainerCallback
. In the
first case, will pop the first member of that class found in the list of callbacks.
Returns
~transformer.TrainerCallback
The callback removed, if found.
Removes a callback from the current list of ~transformer.TrainerCallback
and returns it.
If the callback is not found, returns None
(and no error is raised).
predict
( test_dataset: Dataset ignore_keys: typing.Optional[typing.List[str]] = None metric_key_prefix: str = 'test' )
Parameters
test_dataset (Dataset
) —
Dataset to run the predictions on. If it is a datasets.Dataset
dataset, the columns not accepted by the
model.forward()
method are automatically removed. Has to implement the method __len__
ignore_keys (List[str]
, optional) —
A list of keys in the output of your model (if it is a dictionary) that should be ignored when
gathering predictions.
metric_key_prefix (str
, optional, defaults to "test"
) —
An optional prefix to be used as the metrics key prefix. For example the metric “bleu” will be named
“test_bleu” if the prefix is “test” (default)
Returns predictions and potential metrics.
Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method
will also return metrics, like in evaluate()
.
If your predictions or labels have different sequence lengths (for instance because you’re doing dynamic padding in a token classification task) the predictions will be padded (on the right) to allow for concatenation into one array. The padding index is -100.
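Typical usage on a labelled test set (test_dataset is assumed to be a tokenized dataset prepared like the earlier sketches; the field names follow the namedtuple described below):

```python
output = trainer.predict(test_dataset)

print(output.predictions.shape)  # model predictions as numpy arrays
print(output.label_ids)          # labels, when the dataset provides them
print(output.metrics)            # e.g. {"test_loss": ...}, with prefix "test"
```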
Returns: A NamedTuple with the following keys:
- predictions (np.ndarray): The predictions on test_dataset.
- label_ids (np.ndarray, optional): The labels (if the dataset contained some).
- metrics (Dict[str, float], optional): The dictionary of potential metrics (if the dataset contained labels).

prediction_step
( model: PoplarExecutor inputs: typing.Dict[str, typing.Union[torch.Tensor, typing.Any]] prediction_loss_only: bool ignore_keys: typing.Optional[typing.List[str]] = None is_last_batch: bool = False )
Parameters
model (poptorch.PoplarExecutor
) —
The model to evaluate.
inputs (Dict[str, Union[torch.Tensor, Any]]
) —
The inputs and targets of the model.
The dictionary will be unpacked before being fed to the model.
Most models expect the targets under the argument labels
.
Check your model’s documentation for all accepted arguments.
prediction_loss_only (bool
) —
If True
, only returns the loss. If False
, returns loss,
logits and labels (if present).
ignore_keys (List[str]
, optional) —
A list of keys in the output of your model (if it is a
dictionary) that should be ignored when gathering predictions.
Performs an evaluation step.
Subclass and override to inject custom behavior.
push_to_hub
( commit_message: typing.Optional[str] = 'End of training' blocking: bool = True **kwargs )
Parameters
commit_message (str
, optional, defaults to "End of training"
) —
Message for the commit.
blocking (bool
, optional, defaults to True
) —
If True
(default), the function only returns when the git push
command has completed. If False
, returns immediately.
kwargs —
Additional keyword arguments passed along to ~Trainer.create_model_card
.
Uploads self.model and self.tokenizer to the 🤗 Models Hub on the repo self.args.hub_model_id.
pytorch_optimizer_to_poptorch
(
optimizer: Optimizer
model: typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module]
pipelined_model: typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module]
)
→
poptorch.optim.Optimizer
Parameters
optimizer (torch.optim.Optimizer
) —
The PyTorch optimizer to convert.
model (transformers.PreTrainedModel
or torch.nn.Module
) —
The original model the optimizer has parameter references to.
pipelined_model (transformers.PreTrainedModel or torch.nn.Module) —
The pipelined version of the model. Its parameters will be used by the PopTorch optimizer.
Returns
poptorch.optim.Optimizer
The converted PopTorch optimizer.
Converts a PyTorch optimizer to a PopTorch optimizer.
remove_callback
( callback )
Removes a callback from the current list of ~transformer.TrainerCallback
.
save_metrics
( split metrics combined = True )
Save metrics into a json file for that split, e.g. train_results.json
.
Under distributed environment this is done only for a process with rank 0.
To understand the metrics please read the docstring of ~Trainer.log_metrics
. The only difference is that raw
unformatted numbers are saved in the current method.
save_model
Saves the model, so you can reload it using from_pretrained()
.
Will only save the model from the main process.
save_state
Saves the Trainer state, since Trainer.save_model saves only the tokenizer with the model.
Under distributed environment this is done only for a process with rank 0.
train
( resume_from_checkpoint: typing.Union[bool, str, NoneType] = None trial: typing.Union[ForwardRef('optuna.Trial'), typing.Dict[str, typing.Any]] = None ignore_keys_for_eval: typing.Optional[typing.List[str]] = None **kwargs )
Parameters
resume_from_checkpoint (str
or bool
, optional) —
Indicates that training will resume from the model, optimizer or
scheduler states loaded here. If str
, local path to a saved
checkpoint as saved by a previous instance of IPUTrainer. If
bool
and True
, load the last checkpoint in args.output_dir
as saved by a previous instance of IPUTrainer.
trial (optuna.Trial
or Dict[str, Any]
, optional) —
The trial run or the hyperparameter dictionary for a
hyperparameter search. Note: Feature not supported.
ignore_keys_for_eval (List[str]
, optional) —
A list of keys in the output of your model (if it is a
dictionary) that should be ignored when gathering predictions
for evaluation during the training.
kwargs —
Additional keyword arguments used to hide deprecated arguments.
Main training entry point.
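For instance (the checkpoint path is illustrative):

```python
# Start training from scratch.
trainer.train()

# Resume from the last checkpoint saved in args.output_dir.
trainer.train(resume_from_checkpoint=True)

# Resume from an explicit checkpoint directory saved by a previous run.
trainer.train(resume_from_checkpoint="./outputs/checkpoint-500")
```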
training_step
(
model: PoplarExecutor
inputs: typing.Dict[str, typing.Union[torch.Tensor, typing.Any]]
)
→
torch.Tensor
Parameters
model (poptorch.PoplarExecutor
) —
The model to train.
inputs (Dict[str, Union[torch.Tensor, Any]]
) —
The inputs and targets of the model.
The dictionary will be unpacked before being fed to the model. Most models expect the targets under the
argument labels
. Check your model’s documentation for all accepted arguments.
Returns
torch.Tensor
The tensor with the training loss on this batch.
Performs a training step on a batch of inputs.
Subclass and override to inject custom behavior.
wrap_model
(
model: typing.Union[transformers.modeling_utils.PreTrainedModel, poptorch._poplar_executor.PoplarExecutor]
training = True
)
→
poptorch.PoplarExecutor
Wraps a model for PopTorch, either for training or for inference.

class IPUSeq2SeqTrainer
( model: typing.Union[ForwardRef('PreTrainedModel'), torch.nn.modules.module.Module] = None ipu_config: IPUConfig = None args: IPUTrainingArguments = None data_collator: typing.Optional[ForwardRef('DataCollator')] = None eval_data_collator: typing.Optional[ForwardRef('DataCollator')] = None train_dataset: typing.Optional[torch.utils.data.dataset.Dataset] = None eval_dataset: typing.Optional[torch.utils.data.dataset.Dataset] = None tokenizer: typing.Optional[ForwardRef('PreTrainedTokenizerBase')] = None model_init: typing.Callable[[], ForwardRef('PreTrainedModel')] = None compute_metrics: typing.Union[typing.Callable[[ForwardRef('EvalPrediction')], typing.Dict], NoneType] = None callbacks: typing.Optional[typing.List[ForwardRef('TrainerCallback')]] = None optimizers: typing.Tuple[torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None) preprocess_logits_for_metrics: typing.Union[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor], NoneType] = None force_to_pipelined: bool = False )
The IPUSeq2SeqTrainer class is used to train seq2seq models. Its behaviour is exactly the same as IPUTrainer except that it expects IPUSeq2SeqTrainingArguments
instead of IPUTrainingArguments.
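A sketch of setting it up for a seq2seq model (checkpoint and config names are illustrative; predict_with_generate is assumed to be available as in the 🤗 Transformers Seq2SeqTrainingArguments, and train_dataset/eval_dataset are placeholders for your tokenized seq2seq data):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, DataCollatorForSeq2Seq
from optimum.graphcore import IPUConfig, IPUSeq2SeqTrainer, IPUSeq2SeqTrainingArguments

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
tokenizer = AutoTokenizer.from_pretrained("t5-small")
ipu_config = IPUConfig.from_pretrained("Graphcore/t5-small-ipu")  # assumed Hub config

args = IPUSeq2SeqTrainingArguments(
    output_dir="./seq2seq-outputs",
    per_device_train_batch_size=1,
    predict_with_generate=True,
)

trainer = IPUSeq2SeqTrainer(
    model=model,
    ipu_config=ipu_config,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
```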
evaluate
( eval_dataset: typing.Optional[torch.utils.data.dataset.Dataset] = None ignore_keys: typing.Optional[typing.List[str]] = None metric_key_prefix: str = 'eval' **gen_kwargs )
Parameters
eval_dataset (Dataset
, optional) —
Pass a dataset if you wish to override self.eval_dataset
. If it is a datasets.Dataset
, columns not
accepted by the model.forward()
method are automatically removed. It must implement the __len__
method.
ignore_keys (List[str]
, optional) —
A list of keys in the output of your model (if it is a dictionary) that should be ignored when
gathering predictions.
metric_key_prefix (str, optional, defaults to "eval") —
An optional prefix to be used as the metrics key prefix. For example the metric "bleu" will be named "eval_bleu" if the prefix is "eval" (default).
max_length (int
, optional) —
The maximum target length to use when predicting with the generate method.
num_beams (int
, optional) —
Number of beams for the beam search that will be used when predicting with the generate method. 1 means no
beam search.
gen_kwargs —
Additional generate
specific kwargs.
Runs evaluation and returns metrics.
The calling script will be responsible for providing a method to compute metrics, as they are task-dependent
(pass it to the init compute_metrics
argument).
You can also subclass and override this method to inject custom behavior.
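For example, forwarding generation settings through gen_kwargs:

```python
# max_length and num_beams are forwarded to generate() during evaluation.
metrics = trainer.evaluate(max_length=64, num_beams=4)
print(metrics)  # e.g. {"eval_loss": ..., "eval_bleu": ...}
```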
predict
( test_dataset: Dataset ignore_keys: typing.Optional[typing.List[str]] = None metric_key_prefix: str = 'test' **gen_kwargs )
Parameters
test_dataset (Dataset
) —
Dataset to run the predictions on. If it is a datasets.Dataset
dataset, the columns not accepted by the
model.forward()
method are automatically removed. Has to implement the method __len__
ignore_keys (List[str]
, optional) —
A list of keys in the output of your model (if it is a dictionary) that should be ignored when
gathering predictions.
metric_key_prefix (str, optional, defaults to "test") —
An optional prefix to be used as the metrics key prefix. For example the metric "bleu" will be named "test_bleu" if the prefix is "test" (default).
max_length (int
, optional) —
The maximum target length to use when predicting with the generate method.
num_beams (int
, optional) —
Number of beams for the beam search that will be used when predicting with the generate method. 1 means no
beam search.
gen_kwargs —
Additional generate
specific kwargs.
Runs prediction and returns predictions and potential metrics.
Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method
will also return metrics, like evaluate()
.
If your predictions or labels have different sequence lengths (for instance because you’re doing dynamic padding in a token classification task) the predictions will be padded (on the right) to allow for concatenation into one array. The padding index is -100.
Returns: A NamedTuple with the following keys:
- predictions (np.ndarray): The predictions on test_dataset.
- label_ids (np.ndarray, optional): The labels (if the dataset contained some).
- metrics (Dict[str, float], optional): The potential dictionary of metrics (if the dataset contained labels).

class IPUTrainingArguments
( output_dir: str overwrite_output_dir: bool = False do_train: bool = False do_eval: bool = False do_predict: bool = False evaluation_strategy: IntervalStrategy = 'no' prediction_loss_only: bool = False per_device_train_batch_size: int = 1 per_device_eval_batch_size: int = 1 gradient_accumulation_steps: int = None eval_delay: typing.Optional[float] = 0 learning_rate: float = 5e-05 weight_decay: float = 0.0 adam_beta1: float = 0.9 adam_beta2: float = 0.999 adam_epsilon: float = 1e-08 max_grad_norm: float = 1.0 num_train_epochs: float = 3.0 max_steps: int = -1 lr_scheduler_type: SchedulerType = 'linear' warmup_ratio: float = 0.0 warmup_steps: int = 0 log_level: typing.Optional[str] = 'passive' logging_dir: typing.Optional[str] = None logging_strategy: IntervalStrategy = 'steps' logging_first_step: bool = False logging_steps: int = 500 logging_nan_inf_filter: bool = False save_strategy: IntervalStrategy = 'steps' save_steps: int = 500 save_total_limit: typing.Optional[int] = None seed: int = 42 data_seed: typing.Optional[int] = None debug: str = '' dataloader_drop_last: bool = False eval_steps: int = None dataloader_num_workers: int = 0 past_index: int = -1 run_name: typing.Optional[str] = None disable_tqdm: typing.Optional[bool] = None remove_unused_columns: typing.Optional[bool] = True label_names: typing.Optional[typing.List[str]] = None load_best_model_at_end: typing.Optional[bool] = False metric_for_best_model: typing.Optional[str] = None greater_is_better: typing.Optional[bool] = None ignore_data_skip: bool = False label_smoothing_factor: float = 0.0 group_by_length: bool = False length_column_name: typing.Optional[str] = 'length' report_to: typing.Optional[typing.List[str]] = 'none' dataloader_pin_memory: bool = True skip_memory_metrics: bool = True push_to_hub: bool = False resume_from_checkpoint: typing.Optional[str] = None hub_model_id: str = None hub_strategy: HubStrategy = 'every_save' hub_token: str = None hub_private_repo: bool = False gradient_checkpointing: bool = False include_inputs_for_metrics: bool = False push_to_hub_model_id: str = None push_to_hub_organization: str = None push_to_hub_token: str = None ipu_config_name: typing.Optional[str] = None n_ipu: typing.Optional[int] = None fp32: bool = False lamb: bool = False lamb_no_bias_correction: bool = False loss_scaling: typing.Optional[float] = None auto_loss_scaling: bool = False dataloader_mode: str = 'sync' compile_only: bool = False ipu_config_overrides: typing.Optional[str] = None pad_on_batch_axis: bool = False )
Parameters
output_dir (str
) —
The output directory where the model predictions and checkpoints will be written.
overwrite_output_dir (bool
, optional, defaults to False
) —
If True
, overwrites the contents of the output directory. Use this
to continue training if output_dir
points to a checkpoint
directory.
do_train (bool
, optional, defaults to False
) —
If True
, runs training. This argument is not directly used by Trainer
. It’s intended to be used
by your training/evaluation scripts instead. See the example
scripts for more details.
do_eval (bool
, optional) —
If True
, runs evaluation on the validation set. Will be set to True
if evaluation_strategy
is
different from "no"
. This argument is not directly used by Trainer
. It’s intended to be used by your
training/evaluation scripts instead. See the example
scripts for more details.
do_predict (bool
, optional, defaults to False
) —
If True
, runs predictions on the test set. This argument is not directly used by Trainer
. It’s
intended to be used by your training/evaluation scripts instead. See the example
scripts for more details.
evaluation_strategy (str or ~trainer_utils.IntervalStrategy, optional, defaults to "no") —
The evaluation strategy to adopt during training. Possible values are:
- "no": No evaluation is done during training.
- "steps": Evaluation is done (and logged) every eval_steps.
- "epoch": Evaluation is done at the end of each epoch.
prediction_loss_only (bool, optional, defaults to False) —
If True, only returns the loss when performing evaluation and generating predictions.
per_device_train_batch_size (int
, optional, defaults to 1) —
The batch size per IPU for training.
per_device_eval_batch_size (int
, optional, defaults to 1) —
The batch size per IPU for evaluation.
gradient_accumulation_steps (int, optional, defaults to 1) —
Number of update steps to accumulate the gradients for before performing a backward/update pass.
When using gradient accumulation, one step is counted as one step with a backward pass. Therefore, logging, evaluation and saving will be conducted every gradient_accumulation_steps * xxx_step training examples.
eval_delay (float
, optional) —
The number of epochs or steps to wait before the first evaluation can be performed, depending on the evaluation strategy.
adam_beta1 (float
, optional, defaults to 0.9) —
The beta1 hyperparameter for the AdamW
optimizer.
adam_beta2 (float
, optional, defaults to 0.999) —
The beta2 hyperparameter for the AdamW
optimizer.
adam_epsilon (float
, optional, defaults to 1e-8) —
The epsilon hyperparameter for the AdamW
optimizer.
max_grad_norm (float
, optional, defaults to 1.0) —
Maximum gradient norm (for gradient clipping).
num_train_epochs (float
, optional, defaults to 3.0) —
Total number of training epochs to perform (if not an integer, training will continue for the indicated fraction of the last epoch before stopping).
max_steps (int, optional, defaults to -1) —
If set to a positive number, the total number of training steps to perform. Overrides num_train_epochs.
In the case of using a finite iterable dataset, the training may stop before reaching the set number of steps when all data is exhausted.
lr_scheduler_type (str
or SchedulerType
, optional, defaults to "linear"
) —
The type of scheduler to use. See the documentation of SchedulerType
for all possible values.
warmup_ratio (float, optional, defaults to 0.0) —
Ratio of total training steps used for a linear warmup from 0 to learning_rate.
warmup_steps (int, optional, defaults to 0) —
Number of steps used for a linear warmup from 0 to learning_rate. Overrides any effect of warmup_ratio.
log_level (str
, optional, defaults to passive
) —
Logger log level to use on the main process. Possible choices are the log levels as strings: ‘debug’,
‘info’, ‘warning’, ‘error’ and ‘critical’, plus a ‘passive’ level which lets the application set the level.
logging_dir (str
, optional) —
TensorBoard log directory. Will default to
*output_dir/runs/CURRENT_DATETIME_HOSTNAME*.
logging_strategy (str or ~trainer_utils.IntervalStrategy, optional, defaults to "steps") —
The logging strategy to adopt during training. Possible values are:
- "no": No logging is done during training.
- "epoch": Logging is done at the end of each epoch.
- "steps": Logging is done every logging_steps.
logging_first_step (bool, optional, defaults to False) —
If True, logs and evaluates the first global_step.
logging_steps (int
, optional, defaults to 500) —
Number of update steps between two logs if logging_strategy="steps"
.
logging_nan_inf_filter (bool, optional, defaults to False) —
If True
, the loss of every step that is nan
or inf
is filtered and the average loss of the current logging window is taken instead.
logging_nan_inf_filter
only influences the logging of loss values
and it does not change the behavior of how the gradient is computed
or applied to the model.
save_strategy (str or ~trainer_utils.IntervalStrategy, optional, defaults to "steps") —
The checkpoint save strategy to adopt during training. Possible values are:
- "no": No save is done during training.
- "epoch": Save is done at the end of each epoch.
- "steps": Save is done every save_steps.
save_steps (int, optional, defaults to 500) —
Number of update steps before two checkpoint saves if save_strategy="steps".
save_total_limit (int
, optional) —
If a value is passed, will limit the total number of checkpoints. Deletes the older checkpoints in
output_dir
.
seed (int
, optional, defaults to 42) —
Random seed that will be set at the beginning of training. To ensure reproducibility across runs, use the
~Trainer.model_init
function to instantiate the model if it has some randomly initialized parameters.
data_seed (int
, optional) —
Random seed to be used with data samplers. If not set, random generators for data sampling will use the
same seed as seed
. This can be used to ensure reproducibility of data sampling, independent of the model
seed.
dataloader_drop_last (bool
, optional, defaults to False
) —
If True
, drops the last incomplete batch (if the length of the dataset is not divisible by the batch size).
eval_steps (int
, optional) —
Number of update steps between two evaluations if evaluation_strategy="steps"
. Will default to the same
value as logging_steps
if not set.
dataloader_num_workers (int
, optional, defaults to 0) —
Number of subprocesses to use for data loading (PyTorch only). 0 means that the data will be loaded in the
main process.
past_index (int
, optional, defaults to -1) —
Some models like TransformerXL or XLNet can make use of
past hidden states for their predictions. If this argument is set to a positive int, Trainer
will
use the corresponding output (usually index 2) as the past state and feed it to the model at the next
training step under the keyword argument mems
.
run_name (str
, optional) —
A descriptor for the run. Typically used for WandB and
MLflow logging.
disable_tqdm (bool
, optional) —
If True
, disables the tqdm progress bars and table of metrics produced by
~notebook.NotebookTrainingTracker
in Jupyter Notebooks. Will default to True
if the logging level is
set to warn or lower (default), False
otherwise.
remove_unused_columns (bool
, optional, defaults to True
) —
If True
, automatically removes the columns unused by the model forward method.
label_names (List[str]
, optional) —
The list of keys in your dictionary of inputs that correspond to the labels.
Will eventually default to ["labels"]
except if the model used is one of the XxxForQuestionAnswering
in
which case it will default to ["start_positions", "end_positions"]
.
load_best_model_at_end (bool
, optional, defaults to False
) —
If True
, loads the best model found during training at the end of training.
When set to True
, the parameter save_strategy
needs to be the same as evaluation_strategy
, and in
the case it is “steps”, save_steps
must be a round multiple of eval_steps
.
metric_for_best_model (str
, optional) —
Use in conjunction with load_best_model_at_end
to specify the metric for comparing two different
models. Must be the name of a metric returned by the evaluation with or without the prefix "eval_"
. Will
default to "loss"
if unspecified and load_best_model_at_end=True
(to use the evaluation loss).
If you set this parameter, greater_is_better
will default to True
. Don’t forget to set it to False
if
your metric is better when lower.
greater_is_better (bool, optional) —
Use in conjunction with load_best_model_at_end and metric_for_best_model to specify if better models should have a higher metric or not. Will default to:
- True if metric_for_best_model is set to a value that isn't "loss" or "eval_loss".
- False if metric_for_best_model is not set, or set to "loss" or "eval_loss".
ignore_data_skip (bool, optional, defaults to False) —
When resuming training, whether or not to skip the epochs and batches to get the data loading at the same stage as in the previous training. If set to True, the training will begin faster (as that skipping step can take a long time) but will not yield the same results as the interrupted training would have.
label_smoothing_factor (float
, optional, defaults to 0.0) —
The label smoothing factor to use. Zero means no label smoothing, otherwise the underlying onehot-encoded
labels are changed from 0s and 1s to label_smoothing_factor/num_labels
and 1 - label_smoothing_factor + label_smoothing_factor/num_labels
respectively.
debug (str or list of ~debug_utils.DebugOption, optional, defaults to "") —
Enables one or more debug features. This is an experimental feature.
Possible options are:
- "underflow_overflow": detects overflow in the model's input/outputs and reports the last frames that led to the event.
The options should be separated by whitespace.
optim (str
or training_args.OptimizerNames
, optional, defaults to "adamw_hf"
) —
The optimizer to use: adamw_hf, adamw_torch, adamw_apex_fused, or adafactor.
Note: currently not supported.
lamb (bool
, optional, defaults to False
) —
If True
, replaces AdamW with LAMB.
lamb_no_bias_correction (bool
, optional, defaults to False
) —
If True
, replaces AdamW with LAMB without bias correction.
group_by_length (bool
, optional, defaults to False
) —
If True
, groups together samples of roughly the same length in the training dataset (to minimize
padding applied and be more efficient). Only useful if applying dynamic padding.
length_column_name (str
, optional, defaults to "length"
) —
The column name for precomputed lengths. If the column exists, grouping by length will use these values rather
than computing them on training startup. Ignored unless group_by_length
is True
and the dataset is an
instance of Dataset
.
report_to (str or List[str], optional, defaults to "none") —
The list of integrations to report the results and logs to. Supported platforms are "azure_ml"
,
"comet_ml"
, "mlflow"
, "neptune"
, "tensorboard"
and "wandb"
. Use "all"
to report to all
integrations installed, "none"
for no integrations.
dataloader_pin_memory (bool
, optional, defaults to True
) —
If True
, pins memory in data loaders. Will default to True
.
skip_memory_metrics (bool
, optional, defaults to True
) —
If True
, skips adding of memory profiler reports to metrics. This is skipped by default because it slows down the training and evaluation.
push_to_hub (bool
, optional, defaults to False
) —
If True
, pushes the model to the Hub every time the model is saved. If this is activated,
output_dir
will begin a Git directory synced with the repo (determined by hub_model_id
) and the content
will be pushed each time a save is triggered (depending on your save_strategy
). Calling
~Trainer.save_model
will also trigger a push.
If output_dir
exists, it needs to be a local clone of the repository to which the Trainer
instance will be
pushed.
resume_from_checkpoint (str
, optional) —
The path to a folder with a valid checkpoint for your model. This argument is not directly used by
Trainer
. It’s intended to be used by your training/evaluation scripts instead. See the example
scripts for more details.
hub_model_id (str, optional) —
The name of the repository to keep in sync with the local output_dir. It can be a simple model ID, in which case the model will be pushed in your namespace. Otherwise it should be the whole repository name, for instance "user_name/model", which allows you to push to an organization you are a member of with "organization_name/model". Will default to user_name/output_dir_name, with output_dir_name being the name of output_dir.
hub_strategy (str or ~trainer_utils.HubStrategy, optional, defaults to "every_save") —
Defines the scope of what is pushed to the Hub and when. Possible values are:
- "end": pushes the model, its configuration, the tokenizer (if passed along to Trainer) and a draft of a model card when the ~Trainer.save_model method is called.
- "every_save": pushes the model, its configuration, the tokenizer (if passed along to Trainer) and a draft of a model card each time there is a model save. The pushes are asynchronous to not block training, and if the saves are very frequent, a new push is only attempted if the previous push has completed. A last push is made with the final model at the end of training.
- "checkpoint": like "every_save" but the latest checkpoint is also pushed in a subfolder named last-checkpoint, allowing you to resume training easily with trainer.train(resume_from_checkpoint="last-checkpoint").
- "all_checkpoints": like "checkpoint" but all checkpoints are pushed as they appear in the output folder (so you will get one checkpoint folder per folder in your final repository).
hub_token (str, optional) —
The token to use to push the model to the Hub. Will default to the token in the cache folder obtained with huggingface-cli login.
hub_private_repo (bool
, optional, defaults to False
) —
If True
, the Hub repo will be set to private.
gradient_checkpointing (bool
, optional, defaults to False
) —
If True
, use gradient checkpointing to save memory at the expense of slower backward pass.
include_inputs_for_metrics (bool
, optional, defaults to False
) —
If True
, the inputs will be passed to the compute_metrics
function. This is intended for metrics
that need inputs, predictions and references for scoring calculation in the Metric
class.
Note: currently not supported.
ipu_config_name (str
, optional) —
The pretrained IPU config name or path if not the same as the model name or path.
n_ipu (int
, optional) —
The number of IPUs to use. Must be a power of 2 and a multiple of the number of IPUs required by your model.
fp32 (bool
, optional, defaults to False
) —
If True
, uses 32-bit (full) precision instead of 16-bit.
loss_scaling (float
, optional) —
The loss scaling factor (using a power of 2 is recommended). If using automatic loss scaling, this value will
be the initial value.
auto_loss_scaling (bool
, optional, defaults to False
) —
If True
, enables automatic loss scaling for half precision training.
Note: this feature is experimental.
dataloader_mode (str, optional, defaults to "sync") —
The way in which data should be accessed. Possible values: "sync", "async" and "async_rebatched".
compile_only (bool
, optional, defaults to False
) —
If True
, the IPUTrainer instance will only perform model compilation and stop.
ipu_config_overrides (str
, optional) —
Overrides some existing IPU config settings.
Example: device_iterations=4,gradient_accumulation_steps=64
pad_on_batch_axis (bool
, optional, defaults to False
) —
Will pad each batch up to a fixed size. This ensures that the compiled model will have an input with the
proper shape, and means that dataloader_drop_last
will not have to be used during training.
IPUTrainingArguments
is the class that contains the subset of the input
arguments which relate to the training loop itself.
Using transformers.HfArgumentParser
we can turn this class into
argparse
arguments that can be specified on the command line.
get_process_log_level
Returns the log level to be used depending on whether this process is the main process of node 0, the main process of a non-zero node, or a non-main process.
For the main process, the log level defaults to logging.INFO
unless overridden by the log_level
argument.
For the replica processes, the log level defaults to logging.WARNING
unless overridden by the
log_level_replica
argument.
The choice between the main and replica process settings is made according to the return value of
should_log
.
get_warmup_steps
Gets the number of steps used for a linear warmup.
main_process_first
( local = True desc = 'work' )
Parameters
local (bool
, optional, defaults to True
) —
If True, "first" means the process of rank 0 of each node; if False, it means the process of rank 0 of node rank 0. In a multi-node environment with a shared filesystem, you will most likely want to use local=False so that only the main process of the first node does the processing. If, however, the filesystem is not shared, then the main process of each node will need to do the processing, which is the default behavior.
desc (str, optional, defaults to "work") —
A work description to be used in debug logs.
A context manager for a torch distributed environment where one needs to run a task on the main process, while blocking replicas, and when the task is finished to release the replicas.
A typical example is the datasets map feature which, to be efficient, should be run once on the main process. Upon completion, it saves a cached version of the results, which is then automatically loaded by the replicas.
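The standard pattern from the example scripts (preprocess_function and raw_dataset are placeholders):

```python
# Run the (expensive) preprocessing once on the main process; replicas block
# here and then load the cached result produced by the main process.
with training_args.main_process_first(desc="dataset map pre-processing"):
    tokenized_dataset = raw_dataset.map(preprocess_function, batched=True)
```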
to_dict
Serializes this instance while replacing the Enum
with their values (for JSON serialization support). It obfuscates
the token values by removing their value.
to_json_string
Serializes this instance to a JSON string.
to_sanitized_dict
Sanitized serialization to use with TensorBoard HParams.