---
inference: false
license: apache-2.0
---

# LLaVA-Hound Model Card

## Model details

**Model type:**
LLaVA-Hound is an open-source video large multimodal model, fine-tuned on video instruction-following data and built on a large language model. This model is the **pre-trained** checkpoint on **image and video caption** data.

Base LLM: [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5)

**Model date:**
Trained on March 15, 2024.

**Paper or resources for more information:**
https://github.com/RifleZhang/LLaVA-Hound-DPO

## License

[lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) license.

**Where to send questions or comments about the model:**
https://github.com/RifleZhang/LLaVA-Hound-DPO/issues

## Intended use

**Primary intended uses:**
Detailed video captioning.

**Primary intended users:**
Researchers in artificial intelligence, large multimodal models, etc.

## Training dataset

ShareGPTVideo dataset.

## Evaluation

Follow the instructions in https://github.com/RifleZhang/LLaVA-Hound-DPO/blob/main/README.md

## Paper

https://huggingface.co/papers/2404.01258

Citation:
```
@article{zhang2024direct,
  title={Direct Preference Optimization of Video Large Multimodal Models from Language Model Reward},
  author={Zhang, Ruohong and Gui, Liangke and Sun, Zhiqing and Feng, Yihao and Xu, Keyang and Zhang, Yuanhan and Fu, Di and Li, Chunyuan and Hauptmann, Alexander and Bisk, Yonatan and others},
  journal={arXiv preprint arXiv:2404.01258},
  year={2024}
}
```
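
## Usage

This checkpoint is not wired into the standard `transformers` auto classes (note `inference: false` in the metadata above); inference is run through the LLaVA-Hound-DPO codebase linked in this card. Below is a minimal sketch for fetching the weights locally with `huggingface_hub`; the repo id shown is a placeholder, so substitute this model's actual Hub id.

```python
from huggingface_hub import snapshot_download

# Hypothetical repo id -- replace with this model's actual Hub id.
REPO_ID = "ShareGPTVideo/LLaVA-Hound-Pretrain"

# Download all checkpoint files to a local directory; the LLaVA-Hound-DPO
# repo's captioning scripts can then be pointed at this path.
local_dir = snapshot_download(repo_id=REPO_ID)
print(f"Checkpoint downloaded to: {local_dir}")
```

For the actual video captioning pipeline (video frame extraction, prompting, and decoding), follow the setup and inference scripts in https://github.com/RifleZhang/LLaVA-Hound-DPO.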