InternVideo: Video Foundation Models for Multimodal Understanding
This repo contains the InternVideo series and related works on video foundation models.
- InternVideo: general video foundation models via generative and discriminative learning
- InternVideo2: scaling video foundation models for multimodal video understanding
- InternVid: a large-scale video-text dataset for multimodal understanding and generation
Updates
- 2024.03: The technical report of InternVideo2 is released.
- 2024.01: InternVid (a video-text dataset for video understanding and generation) has been accepted for a spotlight presentation at ICLR 2024.
- 2023.07: The video-text dataset InternVid is released here to facilitate multimodal understanding and generation.
- 2023.05: Video instruction data are released here for tuning end-to-end video-centric multimodal dialogue systems like VideoChat.
- 2023.01: The code & models of InternVideo are released.
- 2022.12: The technical report of InternVideo is released.
- 2022.09: Press releases of InternVideo (official | 163 news | qq news).
Contact
If you have any questions about trying, running, or deploying the models, or any ideas or suggestions for the project, feel free to join our WeChat group discussion!
We are hiring researchers, engineers, and interns for the General Vision Group, Shanghai AI Lab. If you are interested in working with us on video foundation models and related topics, please contact Yi Wang ([email protected]).