With the AutoModelForCausalLMWithValueHead class, TRL supports all decoder model architectures in transformers, such as GPT-2, OPT, and GPT-Neo. In addition, with AutoModelForSeq2SeqLMWithValueHead you can use encoder-decoder architectures such as T5. TRL also requires reference models, which are frozen copies of the model being trained. With create_reference_model you can easily create a frozen copy and also share layers between the two models to save memory.
( pretrained_model = None score_module = None supports_rm_adapter = False rm_adapter_name = None **kwargs )
A wrapper class around a transformers.PreTrainedModel that keeps some attributes and methods of the transformers.PreTrainedModel class, so that wrapped models remain compatible with the transformers API.
( pretrained_model adapter_model_id adapter_name = 'reward_model_adapter' token = None )
Add and load a reward modeling adapter. This method can only be used if the model is a PeftModel and if you have initialized the model with the reward_modeling_adapter_id argument, pointing to the id of the reward modeling adapter. The latter also needs to contain the score head in order to produce the reward.
Computes the reward score for a given input. The method first enables the reward modeling adapter and then computes the reward score. Afterwards, the model disables the reward modeling adapter and re-enables the default ppo adapter.
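The adapter-switching flow can be pictured with a toy stand-in. Everything below (ToyAdapterModel, its score method) is hypothetical illustration code, not the trl implementation:

```python
class ToyAdapterModel:
    """Hypothetical stand-in for a PeftModel that can switch between adapters."""

    def __init__(self):
        self.active_adapter = "default"  # the default ppo adapter
        self.calls = []

    def set_adapter(self, name):
        self.active_adapter = name
        self.calls.append(name)

    def score(self, text):
        # Only meaningful while the reward adapter (with its score head) is active.
        assert self.active_adapter == "reward_model_adapter"
        return float(len(text))  # dummy reward for illustration


def compute_reward_score(model, text, rm_adapter_name="reward_model_adapter"):
    """Mimics the described flow: enable the reward adapter, score, re-enable default."""
    model.set_adapter(rm_adapter_name)  # 1. enable the reward modeling adapter
    reward = model.score(text)          # 2. compute the reward score
    model.set_adapter("default")        # 3. re-enable the default ppo adapter
    return reward


model = ToyAdapterModel()
reward = compute_reward_score(model, "hello")
```

The point of the sketch is the invariant: whatever happens inside, the model ends the call with the default adapter active again.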
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or transformers.PreTrainedModel) —
The path to the pretrained model or its name.
*model_args (list, optional) —
Additional positional arguments passed along to the underlying model’s from_pretrained method.
**kwargs (dict, optional) —
Additional keyword arguments passed along to the underlying model’s from_pretrained method. The kwargs are also pre-processed to extract the arguments that are specific to the transformers.PreTrainedModel class and the arguments that are specific to trl models. The kwargs also support prepare_model_for_kbit_training arguments from the peft library.
Instantiates a new model from a pretrained transformers model. The pretrained model is loaded using the from_pretrained method of the transformers.PreTrainedModel class. The arguments that are specific to the transformers.PreTrainedModel class are passed along to that method and filtered out from the kwargs argument.
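The kwargs pre-processing amounts to partitioning one dict into wrapper-specific and transformers-specific arguments. A minimal sketch, assuming a made-up set of supported argument names (the real lists live inside trl):

```python
# Hypothetical supported-argument names, for illustration only.
SUPPORTED_TRL_ARGS = {"v_head_init_strategy", "v_head_initializer_range", "summary_dropout_prob"}


def split_kwargs(kwargs):
    """Partition kwargs: trl-specific args are kept by the wrapper, the rest
    is forwarded untouched to transformers' from_pretrained."""
    trl_kwargs = {k: v for k, v in kwargs.items() if k in SUPPORTED_TRL_ARGS}
    transformers_kwargs = {k: v for k, v in kwargs.items() if k not in SUPPORTED_TRL_ARGS}
    return trl_kwargs, transformers_kwargs


trl_kwargs, hf_kwargs = split_kwargs(
    {"v_head_init_strategy": "normal", "torch_dtype": "bfloat16"}
)
```

Each kwarg ends up in exactly one of the two dicts, which is why trl-specific arguments never reach (or confuse) the underlying from_pretrained call.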
Post initialization method. This method is called after the model is instantiated and loaded from a checkpoint. It can be used to perform additional operations such as loading the state_dict.
( *args **kwargs )
Push the pretrained model to the hub. This method is a wrapper around
transformers.PreTrainedModel.push_to_hub
. Please refer to the documentation
of transformers.PreTrainedModel.push_to_hub
for more information.
( *args **kwargs )
Save the pretrained model to a directory. This method is a wrapper around
transformers.PreTrainedModel.save_pretrained
. Please refer to the documentation
of transformers.PreTrainedModel.save_pretrained
for more information.
Return the state_dict of the pretrained model.
An autoregressive model with a value head in addition to the language model head.
This class inherits from ~trl.PreTrainedModelWrapper
and wraps a
transformers.PreTrainedModel
class. The wrapper class supports classic functions
such as from_pretrained
, push_to_hub
and generate
. To call a method of the wrapped
model, simply manipulate the pretrained_model
attribute of this class.
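The delegation pattern — the wrapper exposing the wrapped model through its pretrained_model attribute and forwarding calls to it — can be sketched with toy classes (not the trl implementation):

```python
class ToyBaseModel:
    """Hypothetical stand-in for a transformers model."""

    def generate(self, prompt):
        return prompt + " <generated>"


class ToyWrapper:
    """Minimal sketch of a wrapper that stores the wrapped model in a
    pretrained_model attribute and forwards generate to it."""

    def __init__(self, pretrained_model):
        self.pretrained_model = pretrained_model

    def generate(self, *args, **kwargs):
        # Simple pass-through, as described for the value-head wrappers.
        return self.pretrained_model.generate(*args, **kwargs)


wrapper = ToyWrapper(ToyBaseModel())
out = wrapper.generate("hello")
```

Any method the wrapper does not implement itself can still be reached directly via wrapper.pretrained_model, which is what "manipulate the pretrained_model attribute" refers to.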
Class attributes:
transformers_parent_class (transformers.PreTrainedModel) — The parent class of the wrapped model. This should be set to transformers.AutoModelForCausalLM for this class.
lm_head_namings (tuple) — A tuple of strings that are used to identify the language model head of the wrapped model. This is set to ("lm_head", "embed_out") for this class but can be changed for other models in the future.
supported_args (tuple) — A tuple of strings that are used to identify the arguments that are supported by the ValueHead class. Currently, the supported args are:
summary_dropout_prob (float, optional, defaults to None) — The dropout probability for the ValueHead class.
v_head_initializer_range (float, optional, defaults to 0.2) — The initializer range for the ValueHead if a specific initialization strategy is selected.
v_head_init_strategy (str, optional, defaults to None) — The initialization strategy for the ValueHead. Currently, the supported strategies are:
None — Initializes the weights of the ValueHead with a random distribution. This is the default strategy.
"normal" — Initializes the weights of the ValueHead with a normal distribution.

( pretrained_model **kwargs )
Initializes the model.
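The two value-head initialization strategies can be sketched without torch. This is toy code under stated assumptions (the real ValueHead initializes torch tensors; the uniform draw below merely stands in for "a random distribution"):

```python
import random


def init_value_head_weights(n, strategy=None, initializer_range=0.2, seed=0):
    """Sketch of the two strategies described above:
    - None: default random initialization (represented here by a uniform draw)
    - "normal": normal distribution scaled by initializer_range
    """
    rng = random.Random(seed)
    if strategy is None:
        return [rng.uniform(-1.0, 1.0) for _ in range(n)]
    if strategy == "normal":
        return [rng.gauss(0.0, initializer_range) for _ in range(n)]
    raise ValueError(f"unsupported strategy: {strategy}")


default_w = init_value_head_weights(4)
normal_w = init_value_head_weights(4, strategy="normal")
```

Passing v_head_init_strategy="normal" to from_pretrained corresponds to the second branch; omitting it corresponds to the first.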
( input_ids = None past_key_values = None attention_mask = None **kwargs )
Parameters
input_ids (torch.LongTensor) — Indices of input sequence tokens in the vocabulary.
past_key_values (tuple, optional) — Precomputed hidden states that can be used to speed up sequential decoding.
attention_mask (torch.FloatTensor, optional) — Mask to avoid performing attention on padding token indices. Mask values are selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
Applies a forward pass to the wrapped model and returns the logits of the value head.
( *args **kwargs )
A simple wrapper around the generate
method of the wrapped model.
Please refer to the generate
method of the wrapped model for more information about the supported arguments.
( **kwargs )
Initializes the weights of the value head. The default initialization strategy is random.
Users can pass a different initialization strategy by passing the v_head_init_strategy
argument
when calling .from_pretrained
. Supported strategies are:
normal: initializes the weights with a normal distribution.

( pretrained_model **kwargs )
A seq2seq model with a value head in addition to the language model head.
This class inherits from ~trl.PreTrainedModelWrapper
and wraps a
transformers.PreTrainedModel
class. The wrapper class supports classic functions
such as from_pretrained
and push_to_hub
and also provides some additional
functionalities such as generate
.
Calls generate on the wrapped model.
Initializes the weights of the value head.
( model: PreTrainedModelWrapper num_shared_layers: Optional = None pattern: Optional = None )
Parameters
model (PreTrainedModelWrapper) — The model to be copied.
num_shared_layers (int, optional) — The number of initial layers that are shared between both models and kept frozen.
pattern (str, optional) — The shared layers are selected with a string pattern (e.g. "transformer.h.{layer}" for GPT-2); if a custom pattern is necessary, it can be passed here.
Creates a static reference copy of a model. Note that the model will be in .eval() mode.
Returns
PreTrainedModelWrapper
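The memory-saving idea behind layer sharing can be sketched in plain Python: the first num_shared_layers layer objects are shared by reference, the rest are deep-copied, and the reference copy is frozen. Toy classes for illustration, not the trl implementation:

```python
import copy


class ToyLayer:
    """Hypothetical stand-in for a transformer layer."""

    def __init__(self, name):
        self.name = name
        self.requires_grad = True


class ToyModel:
    def __init__(self, n_layers):
        self.layers = [ToyLayer(f"h.{i}") for i in range(n_layers)]


def make_reference_copy(model, num_shared_layers=None):
    """Sketch: share the first num_shared_layers by reference (saving memory),
    deep-copy the remaining layers, and freeze every layer of the copy."""
    ref = ToyModel(0)
    n = num_shared_layers or 0
    ref.layers = model.layers[:n] + copy.deepcopy(model.layers[n:])
    for layer in ref.layers:
        # Shared layers end up frozen in both models, matching the
        # "shared between both models and kept frozen" description.
        layer.requires_grad = False
    return ref


model = ToyModel(4)
ref = make_reference_copy(model, num_shared_layers=2)
```

Because the first two layer objects are literally the same objects in both models, only the unshared layers cost extra memory in the reference copy.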