PeftModel is the base model class for specifying the base Transformer model and configuration to apply a PEFT method to. The base PeftModel
contains methods for loading and saving models from the Hub.
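To make the workflow concrete before the reference below, here is a minimal sketch of the usual round trip; the base model name and adapter directory are illustrative assumptions, not requirements:

>>> from transformers import AutoModelForCausalLM
>>> from peft import LoraConfig, PeftModel, get_peft_model

>>> base_model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> peft_model = get_peft_model(base_model, LoraConfig(task_type="CAUSAL_LM"))
>>> peft_model.save_pretrained("./my_adapter")  # saves only the adapter weights and config

>>> # Later: reload the adapter on top of a freshly loaded base model
>>> base_model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> peft_model = PeftModel.from_pretrained(base_model, "./my_adapter")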
class peft.PeftModel

( model: PreTrainedModel peft_config: PeftConfig adapter_name: str = 'default' )

Parameters

- model (PreTrainedModel) — The base transformer model used for Peft.
- peft_config (PeftConfig) — The configuration of the Peft model.
- adapter_name (str, optional) — The name of the adapter, defaults to "default".

Base model encompassing various Peft methods.

Attributes:

- base_model (torch.nn.Module) — The base transformer model used for Peft.
- peft_config (PeftConfig) — The configuration of the Peft model.
- modules_to_save (list of str) — The list of sub-module names to save when saving the model.
- prompt_encoder (PromptEncoder) — The prompt encoder used for Peft if using PromptLearningConfig.
- prompt_tokens (torch.Tensor) — The virtual prompt tokens used for Peft if using PromptLearningConfig.
- transformer_backbone_name (str) — The name of the transformer backbone in the base model if using PromptLearningConfig.
- word_embeddings (torch.nn.Embedding) — The word embeddings of the transformer backbone in the base model if using PromptLearningConfig.

add_adapter

( adapter_name: str peft_config: PeftConfig )
Parameters

- adapter_name (str) — The name of the adapter to be added.
- peft_config (PeftConfig) — The configuration of the adapter to be added.

Add an adapter to the model based on the passed configuration.

The name for the new adapter should be unique.

The new adapter is not automatically set as the active adapter. Use PeftModel.set_adapter() to set the active adapter.
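For example, a minimal sketch of adding a second LoRA adapter to an existing PeftModel (the adapter name "other" and the hyperparameters are illustrative):

>>> from peft import LoraConfig

>>> second_config = LoraConfig(task_type="CAUSAL_LM", r=16)
>>> peft_model.add_adapter("other", second_config)  # added, but not active yet
>>> peft_model.set_adapter("other")  # explicitly activate the new adapter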
create_or_update_model_card

Updates or creates the model card to include information about peft:

1. Adds the peft library tag
2. Adds the peft version
3. Adds base model info
4. Adds quantization information if it was used

disable_adapter

Context manager that disables the adapter module. Use this to run inference on the base model.
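For example, to compare adapter outputs against the unmodified base model (assuming peft_model and a tokenized batch inputs already exist):

>>> with peft_model.disable_adapter():
...     base_output = peft_model(**inputs)  # adapter bypassed inside the context
>>> adapted_output = peft_model(**inputs)  # adapter active again outside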
forward

Forward pass of the model.
from_pretrained

( model: torch.nn.Module model_id: Union[str, os.PathLike] adapter_name: str = 'default' is_trainable: bool = False config: Optional[PeftConfig] = None **kwargs: Any )

Parameters

- model (torch.nn.Module) — The model to be adapted. For 🤗 Transformers models, the model should be initialized with from_pretrained.
- model_id (str or os.PathLike) — The name of the PEFT configuration to use. Can be either:
  - A string, the model id of a PEFT configuration hosted inside a model repo on the Hugging Face Hub.
  - A path to a directory containing a PEFT configuration file saved using the save_pretrained method (./my_peft_config_directory/).
- adapter_name (str, optional, defaults to "default") — The name of the adapter to be loaded. This is useful for loading multiple adapters.
- is_trainable (bool, optional, defaults to False) — Whether the adapter should be trainable or not. If False, the adapter will be frozen and can only be used for inference.
- config (PeftConfig, optional) — The configuration object to use instead of an automatically loaded configuration. This configuration object is mutually exclusive with model_id and kwargs. This is useful when the configuration is already loaded before calling from_pretrained.
- kwargs (optional) — Additional keyword arguments passed along to the specific PEFT configuration class.

Instantiate a PEFT model from a pretrained model and loaded PEFT weights.

Note that the passed model may be modified inplace.
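For example, to load a saved adapter for further training rather than pure inference (the base model name and directory are illustrative):

>>> from transformers import AutoModelForCausalLM
>>> from peft import PeftModel

>>> base_model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> peft_model = PeftModel.from_pretrained(base_model, "./my_peft_config_directory/", is_trainable=True)
>>> peft_model.print_trainable_parameters()  # the adapter parameters remain trainable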
get_base_model

Returns the base model.

get_nb_trainable_parameters

Returns the number of trainable parameters and the number of all parameters in the model.

get_prompt

Returns the virtual prompts to use for Peft. Only applicable when using a prompt learning method.

get_prompt_embedding_to_save

Returns the prompt embedding to save when saving the model. Only applicable when using a prompt learning method.
load_adapter

( model_id: str adapter_name: str is_trainable: bool = False **kwargs: Any )

Parameters

- adapter_name (str) — The name of the adapter to be added.
- is_trainable (bool, optional, defaults to False) — Whether the adapter should be trainable or not. If False, the adapter will be frozen and can only be used for inference.
- kwargs (optional) — Additional arguments to modify the way the adapter is loaded, e.g. the token for the Hugging Face Hub.

Load a trained adapter into the model.

The name for the new adapter should be unique.

The new adapter is not automatically set as the active adapter. Use PeftModel.set_adapter() to set the active adapter.
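For example, loading a second trained adapter next to the first and then switching to it (paths and adapter names are illustrative):

>>> peft_model = PeftModel.from_pretrained(base_model, "./adapter_en", adapter_name="en")
>>> peft_model.load_adapter("./adapter_fr", adapter_name="fr")  # loaded, but "en" stays active
>>> peft_model.set_adapter("fr")  # switch inference to the "fr" adapter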
print_trainable_parameters

Prints the number of trainable parameters in the model.
save_pretrained

( save_directory: str safe_serialization: bool = True selected_adapters: Optional[List[str]] = None save_embedding_layers: Union[str, bool] = 'auto' is_main_process: bool = True **kwargs: Any )

Parameters

- save_directory (str) — Directory where the adapter model and configuration files will be saved (will be created if it does not exist).
- safe_serialization (bool, optional) — Whether to save the adapter files in safetensors format, defaults to True.
- selected_adapters (List[str], optional) — A list of adapters to be saved. If None, will default to all adapters.
- save_embedding_layers (Union[bool, str], optional, defaults to "auto") — If True, save the embedding layers in addition to adapter weights. If "auto", checks for the common embedding layers peft.utils.other.EMBEDDING_LAYER_NAMES in the config's target_modules when available and automatically sets the boolean flag. This only works for 🤗 transformers models.
- is_main_process (bool, optional) — Whether the process calling this is the main process or not. Will default to True. Will not save the checkpoint if not on the main process, which is important for multi-device setups (e.g. DDP).
- kwargs (additional keyword arguments, optional) — Additional keyword arguments passed along to the push_to_hub method.

This function saves the adapter model and the adapter configuration files to a directory, so that it can be reloaded using the PeftModel.from_pretrained() class method, and also used by the PeftModel.push_to_hub() method.
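For example, to save a single adapter out of several loaded ones in safetensors format (the directory and adapter name are illustrative):

>>> peft_model.save_pretrained(
...     "./checkpoints/fr-adapter",
...     safe_serialization=True,
...     selected_adapters=["fr"],
... )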
set_adapter

( adapter_name: str )

Sets the active adapter. Only one adapter can be active at a time.
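For example, assuming an adapter named "fr" was loaded as sketched above:

>>> peft_model.set_adapter("fr")
>>> peft_model.active_adapter
'fr'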
class peft.PeftModelForSequenceClassification

A PeftModel for sequence classification tasks.

( model: torch.nn.Module peft_config: PeftConfig adapter_name: str = 'default' )

Parameters

- model (PreTrainedModel) — Base transformer model.
- peft_config (PeftConfig) — Peft config.
- adapter_name (str, optional) — The name of the adapter, defaults to "default".

Peft model for sequence classification tasks.

Attributes:

- config (PretrainedConfig) — The configuration object of the base model.
- cls_layer_name (str) — The name of the classification layer.

Example:
>>> from transformers import AutoModelForSequenceClassification
>>> from peft import PeftModelForSequenceClassification, get_peft_config
>>> config = {
... "peft_type": "PREFIX_TUNING",
... "task_type": "SEQ_CLS",
... "inference_mode": False,
... "num_virtual_tokens": 20,
... "token_dim": 768,
... "num_transformer_submodules": 1,
... "num_attention_heads": 12,
... "num_layers": 12,
... "encoder_hidden_size": 768,
... "prefix_projection": False,
... "postprocess_past_key_value_function": None,
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForSequenceClassification(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 370178 || all params: 108680450 || trainable%: 0.3406113979101117
class peft.PeftModelForTokenClassification

A PeftModel for token classification tasks.

( model: torch.nn.Module peft_config: PeftConfig = None adapter_name: str = 'default' )

Parameters

- model (PreTrainedModel) — Base transformer model.
- peft_config (PeftConfig) — Peft config.
- adapter_name (str, optional) — The name of the adapter, defaults to "default".

Peft model for token classification tasks.

Attributes:

- config (PretrainedConfig) — The configuration object of the base model.
- cls_layer_name (str) — The name of the classification layer.

Example:
>>> from transformers import AutoModelForTokenClassification
>>> from peft import PeftModelForTokenClassification, get_peft_config
>>> config = {
... "peft_type": "PREFIX_TUNING",
... "task_type": "TOKEN_CLS",
... "inference_mode": False,
... "num_virtual_tokens": 20,
... "token_dim": 768,
... "num_transformer_submodules": 1,
... "num_attention_heads": 12,
... "num_layers": 12,
... "encoder_hidden_size": 768,
... "prefix_projection": False,
... "postprocess_past_key_value_function": None,
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForTokenClassification.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForTokenClassification(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 370178 || all params: 108680450 || trainable%: 0.3406113979101117
class peft.PeftModelForCausalLM

A PeftModel for causal language modeling.

( model: torch.nn.Module peft_config: PeftConfig adapter_name: str = 'default' )

Parameters

- model (PreTrainedModel) — Base transformer model.
- peft_config (PeftConfig) — Peft config.
- adapter_name (str, optional) — The name of the adapter, defaults to "default".

Peft model for causal language modeling.
Example:
>>> from transformers import AutoModelForCausalLM
>>> from peft import PeftModelForCausalLM, get_peft_config
>>> config = {
... "peft_type": "PREFIX_TUNING",
... "task_type": "CAUSAL_LM",
... "inference_mode": False,
... "num_virtual_tokens": 20,
... "token_dim": 1280,
... "num_transformer_submodules": 1,
... "num_attention_heads": 20,
... "num_layers": 36,
... "encoder_hidden_size": 1280,
... "prefix_projection": False,
... "postprocess_past_key_value_function": None,
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForCausalLM.from_pretrained("gpt2-large")
>>> peft_model = PeftModelForCausalLM(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 1843200 || all params: 775873280 || trainable%: 0.23756456724479544
class peft.PeftModelForSeq2SeqLM

A PeftModel for sequence-to-sequence language modeling.

( model: torch.nn.Module peft_config: PeftConfig adapter_name: str = 'default' )

Parameters

- model (PreTrainedModel) — Base transformer model.
- peft_config (PeftConfig) — Peft config.
- adapter_name (str, optional) — The name of the adapter, defaults to "default".

Peft model for sequence-to-sequence language modeling.
Example:
>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import PeftModelForSeq2SeqLM, get_peft_config
>>> config = {
... "peft_type": "LORA",
... "task_type": "SEQ_2_SEQ_LM",
... "inference_mode": False,
... "r": 8,
... "target_modules": ["q", "v"],
... "lora_alpha": 32,
... "lora_dropout": 0.1,
... "fan_in_fan_out": False,
... "enable_lora": None,
... "bias": "none",
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> peft_model = PeftModelForSeq2SeqLM(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 884736 || all params: 223843584 || trainable%: 0.3952474242013566
class peft.PeftModelForQuestionAnswering

A PeftModel for question answering.

( model: torch.nn.Module peft_config: PeftConfig adapter_name: str = 'default' )

Parameters

- model (PreTrainedModel) — Base transformer model.
- peft_config (PeftConfig) — Peft config.
- adapter_name (str, optional) — The name of the adapter, defaults to "default".

Peft model for extractive question answering.

Attributes:

- config (PretrainedConfig) — The configuration object of the base model.
- cls_layer_name (str) — The name of the classification layer.

Example:
>>> from transformers import AutoModelForQuestionAnswering
>>> from peft import PeftModelForQuestionAnswering, get_peft_config
>>> config = {
... "peft_type": "LORA",
... "task_type": "QUESTION_ANS",
... "inference_mode": False,
... "r": 16,
... "target_modules": ["query", "value"],
... "lora_alpha": 32,
... "lora_dropout": 0.05,
... "fan_in_fan_out": False,
... "bias": "none",
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForQuestionAnswering.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForQuestionAnswering(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 592900 || all params: 108312580 || trainable%: 0.5473971721475013
class peft.PeftModelForFeatureExtraction

A PeftModel for extracting features/embeddings from transformer models.

( model: torch.nn.Module peft_config: PeftConfig adapter_name: str = 'default' )

Parameters

- model (PreTrainedModel) — Base transformer model.
- peft_config (PeftConfig) — Peft config.
- adapter_name (str, optional) — The name of the adapter, defaults to "default".

Peft model for extracting features/embeddings from transformer models.

Attributes:

- config (PretrainedConfig) — The configuration object of the base model.

Example:
>>> from transformers import AutoModel
>>> from peft import PeftModelForFeatureExtraction, get_peft_config
>>> config = {
... "peft_type": "LORA",
... "task_type": "FEATURE_EXTRACTION",
... "inference_mode": False,
... "r": 16,
... "target_modules": ["query", "value"],
... "lora_alpha": 32,
... "lora_dropout": 0.05,
... "fan_in_fan_out": False,
... "bias": "none",
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModel.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForFeatureExtraction(model, peft_config)
>>> peft_model.print_trainable_parameters()
peft.get_peft_model

( model: PreTrainedModel peft_config: PeftConfig adapter_name: str = 'default' mixed: bool = False )

Parameters

- model (transformers.PreTrainedModel) — Model to be wrapped.
- peft_config (PeftConfig) — Configuration object containing the parameters of the Peft model.
- adapter_name (str, optional, defaults to "default") — The name of the adapter to be injected; if not provided, the default adapter name ("default") is used.
- mixed (bool, optional, defaults to False) — Whether to allow mixing different (compatible) adapter types.

Returns a Peft model object from a model and a config.
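For example, a minimal sketch wrapping a sequence classification model with LoRA (the model name and hyperparameters are illustrative):

>>> from transformers import AutoModelForSequenceClassification
>>> from peft import LoraConfig, get_peft_model

>>> base_model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")
>>> peft_config = LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=32, lora_dropout=0.1)
>>> peft_model = get_peft_model(base_model, peft_config)
>>> peft_model.print_trainable_parameters()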
peft.prepare_model_for_kbit_training

( model use_gradient_checkpointing = True gradient_checkpointing_kwargs = None )

Parameters

- model (transformers.PreTrainedModel) — The loaded model from transformers.
- use_gradient_checkpointing (bool, optional, defaults to True) — If True, use gradient checkpointing to save memory at the expense of a slower backward pass.
- gradient_checkpointing_kwargs (dict, optional, defaults to None) — Keyword arguments to pass to the gradient checkpointing function; please refer to the documentation of torch.utils.checkpoint.checkpoint for more details about the arguments that you can pass to that method. Note this is only available in the latest transformers versions (> 4.34.1).

Note this method only works for transformers models.

This method wraps the entire protocol for preparing a model before running a training. This includes:

1. Casting the layernorm in fp32
2. Making the output embedding layer require grads
3. Upcasting the lm head to fp32
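For example, a minimal sketch with an 8-bit quantized model (requires bitsandbytes; the model name and checkpointing kwargs are illustrative):

>>> from transformers import AutoModelForCausalLM, BitsAndBytesConfig
>>> from peft import prepare_model_for_kbit_training

>>> model = AutoModelForCausalLM.from_pretrained(
...     "facebook/opt-350m", quantization_config=BitsAndBytesConfig(load_in_8bit=True)
... )
>>> model = prepare_model_for_kbit_training(
...     model, use_gradient_checkpointing=True, gradient_checkpointing_kwargs={"use_reentrant": False}
... )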