---
license: apache-2.0
tags:
- video LLM
datasets:
- OpenGVLab/VideoChat2-IT
---

# PLLaVA Model Card

## Model details
**Model type:**
PLLaVA-34B is an open-source video-language chatbot trained by fine-tuning an image LLM on video instruction-following data. It is an auto-regressive language model based on the transformer architecture. Base LLM: liuhaotian/llava-v1.6-34b

**Model date:**
PLLaVA-34B was trained in April 2024.

**Paper or resources for more information:**
- GitHub repo: https://github.com/magic-research/PLLaVA
- Project page: https://pllava.github.io/
- Paper: https://arxiv.org/abs/2404.16994

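As a quick orientation, below is a minimal, hypothetical inference sketch. Only the uniform frame sampling is runnable as written; `load_pllava` and `pllava_answer` are illustrative placeholder names, not the repo's confirmed API, so refer to the GitHub repo above for the actual entry points.

```python
# Hypothetical usage sketch; only the frame sampling is standard code.
# `load_pllava` / `pllava_answer` are placeholder names, not a real API.
import numpy as np
from decord import VideoReader  # a common video decoder in video-LLM codebases

def sample_frames(video_path: str, num_frames: int = 16) -> list:
    """Uniformly sample `num_frames` RGB frames (H x W x 3 arrays) from a video."""
    vr = VideoReader(video_path)
    indices = np.linspace(0, len(vr) - 1, num_frames).round().astype(int)
    return [vr[int(i)].asnumpy() for i in indices]

frames = sample_frames("example.mp4")

# Placeholder calls (names are illustrative; see the GitHub repo):
# model, processor = load_pllava("path/to/pllava-34b")
# print(pllava_answer(model, processor, frames, "What is happening in this video?"))
```
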
## License
This model is released under the NousResearch/Nous-Hermes-2-Yi-34B license.

**Where to send questions or comments about the model:**
https://github.com/magic-research/PLLaVA/issues

## Intended use
**Primary intended uses:**
The primary use of PLLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset
The Video-Instruct-Tuning data from OpenGVLab/VideoChat2-IT.

## Evaluation dataset
A collection of 6 benchmarks: 5 video question-answering benchmarks and 1 benchmark specifically proposed for Video-LMMs.