Quickstart

🤗 Optimum Neuron was designed with one goal in mind: to make training and inference straightforward for any 🤗 Transformers user while leveraging the complete power of AWS Accelerators. The main class one needs to know for training is the TrainiumTrainer.

The TrainiumTrainer is very similar to the 🤗 Transformers Trainer, and adapting a script that uses the Trainer to work with Trainium mostly consists of swapping the Trainer class for the TrainiumTrainer. That is how most of the example scripts were adapted from their original counterparts.

The required modifications boil down to a single import swap; the Trainer initialization itself stays unchanged:

from transformers import TrainingArguments
-from transformers import Trainer
+from optimum.neuron import TrainiumTrainer as Trainer
training_args = TrainingArguments(
  # training arguments...
)
# A lot of code here
# Initialize our Trainer
trainer = Trainer(
    model=model,
    args=training_args,  # Original training arguments.
    train_dataset=train_dataset if training_args.do_train else None,
    eval_dataset=eval_dataset if training_args.do_eval else None,
    compute_metrics=compute_metrics,
    tokenizer=tokenizer,
    data_collator=data_collator,
)
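
Because the TrainiumTrainer exposes the same interface as the 🤗 Transformers Trainer, the rest of the script does not need to change. Below is a minimal sketch of how training and evaluation then proceed, following the usual do_train/do_eval pattern of the example scripts and using only the standard Trainer methods that TrainiumTrainer inherits:

# Training and evaluation proceed exactly as with the standard Trainer.
if training_args.do_train:
    train_result = trainer.train()
    trainer.save_model()  # also saves the tokenizer, since it was passed to the Trainer
    trainer.log_metrics("train", train_result.metrics)
    trainer.save_metrics("train", train_result.metrics)

if training_args.do_eval:
    metrics = trainer.evaluate()
    trainer.log_metrics("eval", metrics)
    trainer.save_metrics("eval", metrics)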

All Trainium instances come with at least 2 Neuron Cores. To leverage them, the training needs to be launched with torchrun, setting --nproc_per_node to the number of available Neuron Cores. Below is an example of how to launch a training script on a trn1.2xlarge instance using a bert-base-uncased model.

torchrun --nproc_per_node=2 huggingface-neuron-samples/text-classification/run_glue.py \
--model_name_or_path bert-base-uncased \
--dataset_name philschmid/emotion \
--do_train \
--do_eval \
--bf16 True \
--per_device_train_batch_size 16 \
--learning_rate 5e-5 \
--num_train_epochs 3 \
--output_dir ./bert-emotion
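
The --nproc_per_node value of 2 matches the two Neuron Cores of the trn1.2xlarge instance; on a larger instance such as a trn1.32xlarge, which exposes 32 Neuron Cores, the same command applies with the value raised accordingly.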