LoRA config used for training
Is it possible to share the LoRA configs used for training the base and the instruction-tuned model? I could not find the details in the paper.
@Leyo could you help with this?
lora_config stage 1:
{
lora_alpha: 16,
lora_dropout: 0.1,
r: 64,
bias: "none",
}
lora_config stage 2:
{
lora_alpha: 16,
lora_dropout: 0.1,
r: 64,
bias: "none",
init_lora_weights: "gaussian",
}
lora_config stage 3 (instruction-tuned):
{
lora_alpha: 16,
lora_dropout: 0.1,
r: 64,
bias: "none",
init_lora_weights: "gaussian",
use_dora: True,
}
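For anyone who wants to reuse these, the dicts above map directly onto peft.LoraConfig. A minimal sketch of the three stages (target_modules / modules_to_save are deliberately left out here, since which modules were loraified vs. fully trained is discussed below):

```python
# Minimal sketch: the three stage configs above expressed as peft.LoraConfig
# objects. target_modules / modules_to_save are intentionally omitted here.
from peft import LoraConfig

stage1 = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1, bias="none")

stage2 = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    init_lora_weights="gaussian",
)

stage3 = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    init_lora_weights="gaussian",
    use_dora=True,  # DoRA support requires peft >= 0.9.0
)
```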
thanks for sharing this!
does this apply to all params or just '.(text_model|modality_projection|perceiver_resampler).(down_proj|gate_proj|up_proj|k_proj|q_proj|v_proj|o_proj).*$' ?
@Leyo
Stage 1:
All the projection/MLP layers of the LLM and the vision encoder were loraified. Only the embeddings and the lm_head were frozen (with the exception of the new image-related embeddings, which we trained fully). The perceiver and the modality projection were fully trained.
Stage 2:
All the projection/MLP layers of the LLM and the vision encoder were loraified. The LLM embeddings and the lm_head were frozen (with the exception of the new image-related embeddings, which we trained fully). The vision encoder embeddings, the perceiver, and the modality projection were fully trained.
Stage 3 (instruction-tuning):
All the projection/MLP layers of the LLM, the vision encoder, and the perceiver were loraified, as well as the lm_head and the embeddings (with the exception of the new image-related embeddings, which we trained fully). The perceiver latents and the vision encoder embeddings were fully trained. So the only embeddings whose base weights stayed frozen were the LLM text embeddings, which were trained only through their LoRA adapters.
Not everything was perfectly ablated, because unfreezing/freezing some of these parameters made little difference most of the time. We generally tried to unfreeze/loraify as much as we could.
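In case it helps to see this spelled out, here is a rough sketch of how a stage-1-style setup could be expressed with PEFT. The checkpoint name, the target_modules regex, and the modules_to_save entries are illustrative assumptions (loosely adapted from the pattern quoted earlier), not the actual training code; the exact module names should be checked against the model's named_modules().

```python
# Rough sketch of a stage-1-style setup with assumed module names --
# verify against model.named_modules() before reusing.
from transformers import AutoModelForVision2Seq
from peft import LoraConfig, get_peft_model

# Assumed checkpoint, for illustration only.
model = AutoModelForVision2Seq.from_pretrained("HuggingFaceM4/idefics2-8b")

lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    # LoRA on the projection/MLP layers of the LLM and the vision encoder
    # (regex is illustrative; a string is treated as a regex by PEFT).
    target_modules=r".*(text_model|vision_model).*(down_proj|gate_proj|up_proj|k_proj|q_proj|v_proj|o_proj)$",
    # Modules trained fully rather than through LoRA: PEFT keeps a trainable
    # copy of anything listed in modules_to_save.
    modules_to_save=["modality_projection", "perceiver_resampler"],
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

The partial unfreezing of only the new image-related embedding rows is not captured by this sketch; that kind of selective embedding training usually needs custom handling outside of LoraConfig.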