Gaudi Configuration

To make the most of Gaudi, it is advised to rely on advanced features such as Habana Mixed Precision (HMP) or custom fused operators. You can specify which features to use in a Gaudi configuration, which takes the form of a JSON file following this template:

{
  "use_habana_mixed_precision": true/false,
  "hmp_is_verbose": true/false,
  "use_fused_adam": true/false,
  "use_fused_clip_norm": true/false,
  "hmp_bf16_ops": [
    "torch operator to compute in bf16",
    "..."
  ],
  "hmp_fp32_ops": [
    "torch operator to compute in fp32",
    "..."
  ]
}

Here is a description of each configuration parameter:

- use_habana_mixed_precision: whether to enable Habana Mixed Precision (HMP), which runs selected operators in bf16 to speed up training
- hmp_is_verbose: whether HMP should log the precision decision made for each operator (useful for debugging)
- use_fused_adam: whether to use the custom fused implementation of the Adam optimizer provided by Habana
- use_fused_clip_norm: whether to use the custom fused implementation of gradient norm clipping provided by Habana
- hmp_bf16_ops: the list of Torch operators to compute in bf16 when HMP is enabled
- hmp_fp32_ops: the list of Torch operators to keep in fp32 when HMP is enabled

hmp_is_verbose, hmp_bf16_ops and hmp_fp32_ops will not be used if use_habana_mixed_precision is false.

You can find examples of Gaudi configurations in the Habana model repository on the Hugging Face Hub. For instance, for BERT Large we have:

{
  "use_habana_mixed_precision": true,
  "hmp_is_verbose": false,
  "use_fused_adam": true,
  "use_fused_clip_norm": true,
  "hmp_bf16_ops": [
    "add",
    "addmm",
    "bmm",
    "div",
    "dropout",
    "gelu",
    "iadd",
    "linear",
    "layer_norm",
    "matmul",
    "mm",
    "rsub",
    "softmax",
    "truediv"
  ],
  "hmp_fp32_ops": [
    "embedding",
    "nll_loss",
    "log_softmax"
  ]
}
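
Such a configuration hosted on the Hugging Face Hub can be downloaded directly by repository name. As a sketch (the repository name Habana/bert-large-uncased-whole-word-masking below is given as an assumed example and may differ for your model):

from optimum.habana import GaudiConfig

# Download a ready-made Gaudi configuration from the Hugging Face Hub by repository name
gaudi_config = GaudiConfig.from_pretrained("Habana/bert-large-uncased-whole-word-masking")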

To instantiate a Gaudi configuration yourself in your script, you can do the following:

from optimum.habana import GaudiConfig

# gaudi_config_name is the name of a Gaudi configuration repository on the Hub
# (or the path to a local directory containing a gaudi_config.json file)
gaudi_config = GaudiConfig.from_pretrained(
    gaudi_config_name,
    cache_dir=model_args.cache_dir,
    revision=model_args.model_revision,
    use_auth_token=True if model_args.use_auth_token else None,
)

and pass it to the trainer with the gaudi_config argument.
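
As a sketch (the trainer classes and argument names below follow optimum.habana's Trainer API; the model and datasets are placeholders coming from your own script):

from optimum.habana import GaudiTrainer, GaudiTrainingArguments

training_args = GaudiTrainingArguments(
    output_dir="./output",
    use_habana=True,     # run training on Gaudi (HPU)
    use_lazy_mode=True,  # use lazy execution mode
)

trainer = GaudiTrainer(
    model=model,                # your Transformers model
    gaudi_config=gaudi_config,  # the Gaudi configuration instantiated above
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()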

GaudiConfig

class optimum.habana.GaudiConfig( **kwargs )
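
GaudiConfig is instantiated with keyword arguments. As a minimal sketch, assuming the keyword names mirror the JSON keys described above and that the class exposes the usual save_pretrained method of Hugging Face configuration objects, a configuration can also be built and saved programmatically:

from optimum.habana import GaudiConfig

# Build a configuration in code; the keyword names are assumed to mirror the JSON keys above
gaudi_config = GaudiConfig(
    use_habana_mixed_precision=True,
    use_fused_adam=True,
    use_fused_clip_norm=True,
)

# Save it to a local directory so it can later be reloaded with from_pretrained
gaudi_config.save_pretrained("./my_gaudi_config")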