---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.1
dataset: sshh12/imagebind-llava-finetune
tags:
- finetuned
- multimodal
inference: false
---
These are weights for a version of `mistralai/Mistral-7B-Instruct-v0.1` finetuned for multimodal applications.

### Modalities

- ImageBindModality (use `<imagebind>` in text and provide `imagebinds`, encoded as 4 tokens; input layout sketched below)
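
At inference time, the prompt text containing `<imagebind>` is paired with a parallel list of media inputs. Below is a minimal sketch of that layout only; the actual inference entry point lives in the GitHub repo linked under Usage, and the field names follow the dataset records shown in the next section.

```python
# Illustrative input layout only; the real inference API is provided by
# https://github.com/sshh12/multi_token (see Usage below).
request = {
    "messages": [
        {
            "role": "user",
            # One <imagebind> placeholder per entry in "imagebinds";
            # each placeholder is expanded into 4 soft tokens by the projector.
            "content": "<imagebind>\nWhat is happening in this clip?",
        }
    ],
    # Paths to the media files to encode with ImageBind (image/audio/etc.).
    "imagebinds": ["/path/to/clip.wav"],
}
```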
### Dataset

sshh12/imagebind-llava-finetune (235163 examples)

```
{'id': '000000334872', 'imagebinds': ['/data/llava_finetune_data/images/coco/train2017/train2017/000000334872.jpg'], 'messages': [{'content': '<imagebind>\nAre the people in the audio skiing downhill or cross-country skiing?', 'role': 'user'}, {'content': 'The people in the audio are cross-country skiing in the woods, as they are skiing on a trail rather than a steep slope.', 'role': 'assistant'}, {'content': 'How many people are in the audio?', 'role': 'user'}, {'content': 'There are two people in the audio, both on skis in the snow.', 'role': 'assistant'}, {'content': 'What kind of environment are they skiing in?', 'role': 'user'}, {'content': 'They are skiing in a wooded environment, following a trail through the trees while surrounded by snow.', 'role': 'assistant'}, {'content': 'Do the skiers have any additional gear with them besides their skis and poles?', 'role': 'user'}, {'content': 'Yes, the two male skiers are carrying backpacks while they ski through the woods. The backpacks might contain essentials for their skiing adventure, such as food, water, extra clothing, or safety equipment.', 'role': 'assistant'}]}
```
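
Records like the one above can be browsed with the Hugging Face `datasets` library; a small sketch, assuming the dataset id shown above and a default `train` split:

```python
from datasets import load_dataset

# Stream the finetuning data; field names follow the record shown above.
ds = load_dataset("sshh12/imagebind-llava-finetune", split="train", streaming=True)

example = next(iter(ds))
print(example["id"])
print(example["messages"][0]["content"])  # begins with the <imagebind> placeholder
print(example["imagebinds"])              # media paths encoded with ImageBind
```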
### Training Device(s)

```
name, pci.bus_id, vbios_version
NVIDIA GeForce RTX 4090, 00000000:03:00.0, 95.02.3C.00.8C
```
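
The listing above matches the CSV output of an `nvidia-smi` GPU query; a small sketch to reproduce the same report on your own machine (assumes the NVIDIA driver and `nvidia-smi` are available):

```python
import subprocess

# Report name, PCI bus id, and VBIOS version for each visible GPU,
# matching the header shown above.
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,pci.bus_id,vbios_version", "--format=csv"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```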
### Usage

GitHub: https://github.com/sshh12/multi_token
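
The repo above provides the custom `MistralLMMForCausalLM` class and the ImageBind projector, and should be used for actual inference. As a rough sketch of what attaching these LoRA weights to the base model involves, using only standard `transformers`/`peft` calls (the `<imagebind>` handling and projector still require the multi_token code; the adapter path below is a placeholder to replace with this model's Hub id or a local checkout):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.1"
adapter_path = "path/to/these-weights"  # placeholder: this repo's Hub id or local dir

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
# Attaches the LoRA adapters only; the imagebind_lmm_projector weights are used
# by the multi_token model class, not by the vanilla Mistral implementation.
model = PeftModel.from_pretrained(base, adapter_path)
model.eval()
```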
### Model

```
MistralLMMForCausalLM.model =
PeftModelForCausalLM(
(base_model): LoraModel(
(model): MistralLMMForCausalLM(
(model): MistralLMMModel(
(embed_tokens): Embedding(32000, 4096)
(layers): ModuleList(
(0-31): 32 x MistralDecoderLayer(
(self_attn): MistralAttention(
(q_proj): Linear(
in_features=4096, out_features=4096, bias=False
(lora_dropout): ModuleDict(
(default): Dropout(p=0.05, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=4096, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=4096, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
)
(k_proj): Linear(
in_features=4096, out_features=1024, bias=False
(lora_dropout): ModuleDict(
(default): Dropout(p=0.05, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=4096, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=1024, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
)
(v_proj): Linear(
in_features=4096, out_features=1024, bias=False
(lora_dropout): ModuleDict(
(default): Dropout(p=0.05, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=4096, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=1024, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
)
(o_proj): Linear(
in_features=4096, out_features=4096, bias=False
(lora_dropout): ModuleDict(
(default): Dropout(p=0.05, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=4096, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=4096, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
)
(rotary_emb): MistralRotaryEmbedding()
)
(mlp): MistralMLP(
(gate_proj): Linear(
in_features=4096, out_features=14336, bias=False
(lora_dropout): ModuleDict(
(default): Dropout(p=0.05, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=4096, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=14336, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
)
(up_proj): Linear(
in_features=4096, out_features=14336, bias=False
(lora_dropout): ModuleDict(
(default): Dropout(p=0.05, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=4096, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=14336, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
)
(down_proj): Linear(
in_features=14336, out_features=4096, bias=False
(lora_dropout): ModuleDict(
(default): Dropout(p=0.05, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=14336, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=4096, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
)
(act_fn): SiLUActivation()
)
(input_layernorm): MistralRMSNorm()
(post_attention_layernorm): MistralRMSNorm()
)
)
(norm): MistralRMSNorm()
(imagebind_lmm_projector): _MLPVectorProjector(
(mlps): ModuleList(
(0-3): 4 x Sequential(
(0): Linear(in_features=1024, out_features=4096, bias=True)
(1): GELU(approximate='none')
(2): Linear(in_features=4096, out_features=4096, bias=True)
)
)
)
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
)
)
```
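
The dump shows rank-64 LoRA adapters (dropout 0.05) on every attention and MLP projection, plus `imagebind_lmm_projector`: four independent two-layer MLPs that each map the 1024-dim ImageBind embedding to one 4096-dim token embedding, which is why a single `<imagebind>` placeholder expands to 4 tokens. Below is a standalone PyTorch sketch of that projector, reconstructed from the printed shapes (the real implementation is `_MLPVectorProjector` in the multi_token repo):

```python
import torch
import torch.nn as nn


class MLPVectorProjector(nn.Module):
    """Sketch of the printed `_MLPVectorProjector`: 4 parallel MLPs, each turning
    one 1024-d ImageBind embedding into one 4096-d LM token embedding."""

    def __init__(self, in_dim: int = 1024, out_dim: int = 4096, num_tokens: int = 4):
        super().__init__()
        self.mlps = nn.ModuleList(
            [
                nn.Sequential(
                    nn.Linear(in_dim, out_dim),
                    nn.GELU(),
                    nn.Linear(out_dim, out_dim),
                )
                for _ in range(num_tokens)
            ]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1024) ImageBind features -> (batch, 4, 4096) token embeddings
        return torch.stack([mlp(x) for mlp in self.mlps], dim=1)


projector = MLPVectorProjector()
print(projector(torch.randn(1, 1024)).shape)  # torch.Size([1, 4, 4096])
```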
## Training procedure

### Framework versions

- PEFT 0.5.0
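
For reference, the adapter hyperparameters that can be read off the module dump correspond roughly to the following PEFT configuration; rank, dropout, and target modules come from the printed shapes, while `lora_alpha` and the rest of the training setup are not recorded in this card.

```python
from peft import LoraConfig, TaskType

# Inferred from the module dump above; lora_alpha and other training
# hyperparameters are not listed in this card.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=64,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```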