LanguageBind committed 0d50857 (parent: e16778e): Update README.md
---
license: apache-2.0
---


<p align="center">
    <img src="https://z1.ax1x.com/2023/11/07/pil4sqH.png" width="150" style="margin-bottom: 0.2;"/>
</p>
<h2 align="center"> <a href="https://arxiv.org/abs/2311.10122">Video-LLaVA: Learning United Visual Representation by Alignment Before Projection</a></h2>
<h5 align="center"> If you like our project, please give us a star ⭐ on GitHub for the latest updates. </h5>

## 📰 News
* **[2024.01.27]** 👀👀👀 Our [MoE-LLaVA](https://github.com/PKU-YuanGroup/MoE-LLaVA) is released! A sparse model with 3B parameters outperforms the dense 7B model.
* **[2024.01.17]** 🔥🔥🔥 Our [LanguageBind](https://github.com/PKU-YuanGroup/LanguageBind) has been accepted at ICLR 2024!
* **[2024.01.16]** 🔥🔥🔥 We reorganized the code and added support for LoRA fine-tuning; see [finetune_lora.sh](scripts/v1_5/finetune_lora.sh).
* **[2023.11.30]** 🤝 Thanks to the generous contributions of the community, the [OpenXLab demo](https://openxlab.org.cn/apps/detail/houshaowei/Video-LLaVA) is now accessible.
* **[2023.11.23]** We are training a new and more powerful model.
* **[2023.11.21]** 🤝 Check out the [Replicate demo](https://replicate.com/nateraw/video-llava), created by [@nateraw](https://github.com/nateraw), who has generously supported our research!
* **[2023.11.20]** 🤗 The [Hugging Face demo](https://huggingface.co/spaces/LanguageBind/Video-LLaVA) and **all code & datasets** are now available! Welcome to **watch** 👀 this repository for the latest updates.

## 😮 Highlights

Video-LLaVA exhibits remarkable interactive capabilities between images and videos, despite the absence of image-video pairs in the dataset.

### 💡 Simple baseline, learning united visual representation by alignment before projection
- By **binding unified visual representations to the language feature space**, we enable an LLM to perform visual reasoning over both images and videos simultaneously (see the conceptual sketch below).

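To make the idea concrete, here is a minimal, purely illustrative sketch of "alignment before projection": encoders that were already aligned into one shared visual space feed a single shared projection into the LLM's embedding space. All class names, layer choices, and dimensions below are hypothetical placeholders for illustration; they are not the actual Video-LLaVA implementation.

```python
import torch
import torch.nn as nn

class AlignedVisualFrontEnd(nn.Module):
    """Illustrative sketch only: pre-aligned encoders share one projection into the LLM space."""

    def __init__(self, vision_dim=1024, llm_dim=4096):
        super().__init__()
        # Stand-ins for image / video encoders that are already aligned to a shared space.
        self.image_encoder = nn.Linear(3 * 224 * 224, vision_dim)       # hypothetical placeholder
        self.video_encoder = nn.Linear(8 * 3 * 224 * 224, vision_dim)   # hypothetical placeholder
        # Because both encoders live in one unified space ("alignment before"),
        # a single shared projection suffices to map them into the LLM embedding space.
        self.shared_projection = nn.Linear(vision_dim, llm_dim)

    def forward(self, image=None, video=None):
        tokens = []
        if image is not None:
            tokens.append(self.shared_projection(self.image_encoder(image.flatten(1))))
        if video is not None:
            tokens.append(self.shared_projection(self.video_encoder(video.flatten(1))))
        # These visual tokens are then interleaved with text embeddings inside the LLM.
        return torch.cat(tokens, dim=0)

frontend = AlignedVisualFrontEnd()
img = torch.randn(1, 3 * 224 * 224)       # fake "image" input
vid = torch.randn(1, 8 * 3 * 224 * 224)   # fake "8-frame video" input
print(frontend(image=img, video=vid).shape)  # both modalities land in the same 4096-d space
```
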
### 🔥 High performance, complementary learning with video and image
- Extensive experiments demonstrate **the complementarity of the two modalities**, showing significant superiority over models designed specifically for either images or videos.


## 🤗 Demo

### Gradio Web UI

We highly recommend trying out our web demo with the following command; it incorporates all features currently supported by Video-LLaVA. We also provide an [online demo](https://huggingface.co/spaces/LanguageBind/Video-LLaVA) on Hugging Face Spaces.
```bash
python -m videollava.serve.gradio_web_server
```


### CLI Inference

Run inference on a video:
```bash
python -m videollava.serve.cli --model-path "LanguageBind/Video-LLaVA-7B" --file "path/to/your/video.mp4" --load-4bit
```

Run inference on an image:
```bash
python -m videollava.serve.cli --model-path "LanguageBind/Video-LLaVA-7B" --file "path/to/your/image.jpg" --load-4bit
```


## 🛠️ Requirements and Installation
* Python >= 3.10
* Pytorch == 2.0.1
* CUDA Version >= 11.7
* Install required packages:
```bash
git clone https://github.com/PKU-YuanGroup/Video-LLaVA
cd Video-LLaVA
conda create -n videollava python=3.10 -y
conda activate videollava
pip install --upgrade pip  # enable PEP 660 support
pip install -e .
pip install -e ".[train]"
pip install flash-attn --no-build-isolation
pip install decord opencv-python git+https://github.com/facebookresearch/pytorchvideo.git@28fe037d212663c6a24f373b94cc5d478c8c1a1d
```
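
After installing, an optional sanity check can confirm that the pinned PyTorch build, CUDA, and the video-decoding dependencies are importable. This snippet is a convenience sketch, not part of the official instructions:

```python
# Optional post-install sanity check (not part of the official instructions).
import torch

print("torch:", torch.__version__)             # expected: 2.0.1 (see requirements above)
print("cuda available:", torch.cuda.is_available())

for pkg in ("decord", "cv2", "pytorchvideo"):  # video-decoding dependencies installed above
    try:
        __import__(pkg)
        print(f"{pkg}: ok")
    except ImportError as err:
        print(f"{pkg}: MISSING ({err})")
```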

## 🤖 API
**We open-source all code.** If you want to load the model (e.g. `LanguageBind/Video-LLaVA-7B`) locally, you can use the following code snippets.

### Inference for image
```python
import torch
from videollava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from videollava.conversation import conv_templates, SeparatorStyle
from videollava.model.builder import load_pretrained_model
from videollava.utils import disable_torch_init
from videollava.mm_utils import tokenizer_image_token, get_model_name_from_path, KeywordsStoppingCriteria

def main():
    disable_torch_init()
    image = 'videollava/serve/examples/extreme_ironing.jpg'
    inp = 'What is unusual about this image?'
    model_path = 'LanguageBind/Video-LLaVA-7B'
    cache_dir = 'cache_dir'
    device = 'cuda'
    load_4bit, load_8bit = True, False

    # Load the tokenizer, model, and processors (4-bit quantized here).
    model_name = get_model_name_from_path(model_path)
    tokenizer, model, processor, _ = load_pretrained_model(model_path, None, model_name, load_8bit, load_4bit, device=device, cache_dir=cache_dir)
    image_processor = processor['image']
    conv_mode = "llava_v1"
    conv = conv_templates[conv_mode].copy()
    roles = conv.roles

    # Preprocess the image into pixel values.
    image_tensor = image_processor.preprocess(image, return_tensors='pt')['pixel_values']
    if type(image_tensor) is list:
        tensor = [image.to(model.device, dtype=torch.float16) for image in image_tensor]
    else:
        tensor = image_tensor.to(model.device, dtype=torch.float16)

    # Build the prompt: the image placeholder token is prepended to the question.
    print(f"{roles[1]}: {inp}")
    inp = DEFAULT_IMAGE_TOKEN + '\n' + inp
    conv.append_message(conv.roles[0], inp)
    conv.append_message(conv.roles[1], None)
    prompt = conv.get_prompt()
    input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda()
    stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2
    keywords = [stop_str]
    stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids)

    with torch.inference_mode():
        output_ids = model.generate(
            input_ids,
            images=tensor,
            do_sample=True,
            temperature=0.2,
            max_new_tokens=1024,
            use_cache=True,
            stopping_criteria=[stopping_criteria])

    # Decode only the newly generated tokens.
    outputs = tokenizer.decode(output_ids[0, input_ids.shape[1]:]).strip()
    print(outputs)

if __name__ == '__main__':
    main()
```
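
The same conversation object can also carry a follow-up question in the same session. A minimal sketch, assuming the variables from the snippet above are still in scope and following the usual LLaVA-style CLI pattern (record the reply, then append the next user turn; the image token is not repeated, but the image tensor is passed to `generate` again):

```python
# Follow-up-turn sketch, reusing tokenizer/model/conv/tensor/stop_str from the snippet above.
# Assumption: the image placeholder token is only needed in the first turn, while the image
# tensor is supplied to generate() on every turn (the usual LLaVA-style CLI pattern).
conv.messages[-1][-1] = outputs                 # record the assistant's first reply
conv.append_message(conv.roles[0], 'Describe the scene in one sentence.')
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda()
stopping_criteria = KeywordsStoppingCriteria([stop_str], tokenizer, input_ids)
with torch.inference_mode():
    output_ids = model.generate(
        input_ids,
        images=tensor,
        do_sample=True,
        temperature=0.2,
        max_new_tokens=1024,
        use_cache=True,
        stopping_criteria=[stopping_criteria])
print(tokenizer.decode(output_ids[0, input_ids.shape[1]:]).strip())
```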

### Inference for video
```python
import torch
from videollava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from videollava.conversation import conv_templates, SeparatorStyle
from videollava.model.builder import load_pretrained_model
from videollava.utils import disable_torch_init
from videollava.mm_utils import tokenizer_image_token, get_model_name_from_path, KeywordsStoppingCriteria

def main():
    disable_torch_init()
    video = 'videollava/serve/examples/sample_demo_1.mp4'
    inp = 'Why is this video funny?'
    model_path = 'LanguageBind/Video-LLaVA-7B'
    cache_dir = 'cache_dir'
    device = 'cuda'
    load_4bit, load_8bit = True, False

    # Load the tokenizer, model, and processors (4-bit quantized here).
    model_name = get_model_name_from_path(model_path)
    tokenizer, model, processor, _ = load_pretrained_model(model_path, None, model_name, load_8bit, load_4bit, device=device, cache_dir=cache_dir)
    video_processor = processor['video']
    conv_mode = "llava_v1"
    conv = conv_templates[conv_mode].copy()
    roles = conv.roles

    # Preprocess the video: frames are sampled and stacked into pixel values.
    video_tensor = video_processor(video, return_tensors='pt')['pixel_values']
    if type(video_tensor) is list:
        tensor = [video.to(model.device, dtype=torch.float16) for video in video_tensor]
    else:
        tensor = video_tensor.to(model.device, dtype=torch.float16)

    # Build the prompt: one image placeholder token per sampled frame, then the question.
    print(f"{roles[1]}: {inp}")
    inp = ' '.join([DEFAULT_IMAGE_TOKEN] * model.get_video_tower().config.num_frames) + '\n' + inp
    conv.append_message(conv.roles[0], inp)
    conv.append_message(conv.roles[1], None)
    prompt = conv.get_prompt()
    input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda()
    stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2
    keywords = [stop_str]
    stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids)

    with torch.inference_mode():
        output_ids = model.generate(
            input_ids,
            images=tensor,
            do_sample=True,
            temperature=0.1,
            max_new_tokens=1024,
            use_cache=True,
            stopping_criteria=[stopping_criteria])

    # Decode only the newly generated tokens.
    outputs = tokenizer.decode(output_ids[0, input_ids.shape[1]:]).strip()
    print(outputs)

if __name__ == '__main__':
    main()
```

## 🗝️ Training & Validating
The training and validation instructions are in [TRAIN_AND_VALIDATE.md](TRAIN_AND_VALIDATE.md).

## 👍 Acknowledgement
* [LLaVA](https://github.com/haotian-liu/LLaVA) The codebase we built upon, an efficient large language and vision assistant.
* [Video-ChatGPT](https://github.com/mbzuai-oryx/Video-ChatGPT) Great job contributing the evaluation code and dataset.

## 🙌 Related Projects
* [LanguageBind](https://github.com/PKU-YuanGroup/LanguageBind) An open-source, language-based retrieval framework spanning five modalities.
* [Chat-UniVi](https://github.com/PKU-YuanGroup/Chat-UniVi) This framework empowers the model to efficiently utilize a limited number of visual tokens.

## 🔒 License
* The majority of this project is released under the Apache 2.0 license as found in the [LICENSE](https://github.com/PKU-YuanGroup/Video-LLaVA/blob/main/LICENSE) file.
* The service is a research preview intended for non-commercial use only, subject to the model [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, the [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and the [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violations.

## ✏️ Citation
If you find our paper and code useful in your research, please consider giving us a star :star: and a citation :pencil:.

```BibTeX
@article{lin2023video,
  title={Video-LLaVA: Learning United Visual Representation by Alignment Before Projection},
  author={Lin, Bin and Zhu, Bin and Ye, Yang and Ning, Munan and Jin, Peng and Yuan, Li},
  journal={arXiv preprint arXiv:2311.10122},
  year={2023}
}
```

```BibTeX
@article{zhu2023languagebind,
  title={LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment},
  author={Zhu, Bin and Lin, Bin and Ning, Munan and Yan, Yang and Cui, Jiaxi and Wang, HongFa and Pang, Yatian and Jiang, Wenhao and Zhang, Junwu and Li, Zongwei and others},
  journal={arXiv preprint arXiv:2310.01852},
  year={2023}
}
```

## ✨ Star History
[![Star History](https://api.star-history.com/svg?repos=PKU-YuanGroup/Video-LLaVA&type=Date)](https://star-history.com/#PKU-YuanGroup/Video-LLaVA&Date)

## 🤝 Contributors
<a href="https://github.com/PKU-YuanGroup/Video-LLaVA/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=PKU-YuanGroup/Video-LLaVA" />
</a>