Low-Rank Hadamard Product (LoHa) is similar to LoRA, except it approximates the large weight matrix with more low-rank matrices and combines them with the Hadamard product. This method is even more parameter-efficient than LoRA and achieves comparable performance.
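The core idea can be sketched in a few lines of PyTorch. This is a minimal illustration of the decomposition, not PEFT's internal implementation: the weight update is the element-wise (Hadamard) product of two independent low-rank products, which can reach rank up to r², whereas a plain low-rank update with the same parameter count is capped at a much lower rank.

```python
import torch

out_features, in_features, r = 64, 64, 4

# Two independent low-rank factorizations (variable names are illustrative)
w1_a = torch.randn(out_features, r)
w1_b = torch.randn(r, in_features)
w2_a = torch.randn(out_features, r)
w2_b = torch.randn(r, in_features)

# LoHa weight update: Hadamard product of the two rank-r products.
# Its rank can be as high as r * r (here 16), while a LoRA update using
# the same number of parameters would be limited to rank 2 * r.
delta_w = (w1_a @ w1_b) * (w2_a @ w2_b)

print(delta_w.shape)  # (out_features, in_features)
print(torch.linalg.matrix_rank(delta_w))  # at most r * r
```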
The abstract from the paper is:
In this work, we propose a communication-efficient parameterization, FedPara, for federated learning (FL) to overcome the burdens on frequent model uploads and downloads. Our method re-parameterizes weight parameters of layers using low-rank weights followed by the Hadamard product. Compared to the conventional low-rank parameterization, our FedPara method is not restricted to low-rank constraints, and thereby it has a far larger capacity. This property enables to achieve comparable performance while requiring 3 to 10 times lower communication costs than the model with the original layers, which is not achievable by the traditional low-rank methods. The efficiency of our method can be further improved by combining with other efficient FL optimizers. In addition, we extend our method to a personalized FL application, pFedPara, which separates parameters into global and local ones. We show that pFedPara outperforms competing personalized FL methods with more than three times fewer parameters.
class peft.LoHaConfig

( peft_type: typing.Union[str, peft.utils.peft_types.PeftType, NoneType] = None, auto_mapping: typing.Optional[dict] = None, base_model_name_or_path: typing.Optional[str] = None, revision: typing.Optional[str] = None, task_type: typing.Union[str, peft.utils.peft_types.TaskType, NoneType] = None, inference_mode: bool = False, rank_pattern: Optional[dict] = <factory>, alpha_pattern: Optional[dict] = <factory>, r: int = 8, alpha: int = 8, rank_dropout: float = 0.0, module_dropout: float = 0.0, use_effective_conv2d: bool = False, target_modules: typing.Union[typing.List[str], str, NoneType] = None, init_weights: bool = True, layers_to_transform: typing.Union[typing.List[int], int, NoneType] = None, layers_pattern: typing.Optional[str] = None, modules_to_save: typing.Optional[typing.List[str]] = None )
Parameters

- r (int) — LoHa rank.
- alpha (int) — The alpha parameter for LoHa scaling.
- rank_dropout (float) — The dropout probability for rank dimension during training.
- module_dropout (float) — The dropout probability for disabling LoHa modules during training.
- use_effective_conv2d (bool) — Use parameter effective decomposition for Conv2d with ksize > 1 ("Proposition 3" from the FedPara paper).
- target_modules (Union[List[str], str]) — The names of the modules to apply LoHa to.
- init_weights (bool) — Whether to perform initialization of LoHa weights.
- layers_to_transform (Union[List[int], int]) — The layer indexes to transform. If this argument is specified, the LoHa transformations are applied only to the layer indexes in this list. If a single integer is passed, the transformation is applied to the layer at that index.
- layers_pattern (str) — The layer pattern name, used only if layers_to_transform is different from None and if the layer pattern is not in the common layers pattern.
- rank_pattern (dict) — The mapping from layer names or regexp expression to ranks which are different from the default rank specified by r.
- alpha_pattern (dict) — The mapping from layer names or regexp expression to alphas which are different from the default alpha specified by alpha.
- modules_to_save (List[str]) — The names of modules to be set as trainable except LoHa parameters.

This is the configuration class to store the configuration of a LoHaModel.
class peft.LoHaModel

( model, config, adapter_name ) → torch.nn.Module

Parameters

- model (torch.nn.Module) — The model to which the adapter tuner layers will be attached.
- config (LoHaConfig) — The configuration of the LoHa model.
- adapter_name (str) — The name of the adapter, defaults to "default".

Returns

torch.nn.Module — The LoHa model.
Creates a Low-Rank Hadamard Product model from a pretrained model. The method is partially described in https://arxiv.org/abs/2108.06098. The current implementation heavily borrows from https://github.com/KohakuBlueleaf/LyCORIS/blob/eb460098187f752a5d66406d3affade6f0a07ece/lycoris/modules/loha.py
Example:
>>> from diffusers import StableDiffusionPipeline
>>> from peft import LoHaModel, LoHaConfig
>>> config_te = LoHaConfig(
... r=8,
...     alpha=32,
... target_modules=["k_proj", "q_proj", "v_proj", "out_proj", "fc1", "fc2"],
... rank_dropout=0.0,
... module_dropout=0.0,
... init_weights=True,
... )
>>> config_unet = LoHaConfig(
... r=8,
...     alpha=32,
... target_modules=[
... "proj_in",
... "proj_out",
... "to_k",
... "to_q",
... "to_v",
... "to_out.0",
... "ff.net.0.proj",
... "ff.net.2",
... ],
... rank_dropout=0.0,
... module_dropout=0.0,
... init_weights=True,
... use_effective_conv2d=True,
... )
>>> model = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> model.text_encoder = LoHaModel(model.text_encoder, config_te, "default")
>>> model.unet = LoHaModel(model.unet, config_unet, "default")
Attributes:

- model (~torch.nn.Module) — The model to be adapted.