- EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters
  Paper • 2402.04252 • Published • 25
- Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models
  Paper • 2402.03749 • Published • 12
- ScreenAI: A Vision-Language Model for UI and Infographics Understanding
  Paper • 2402.04615 • Published • 36
- EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss
  Paper • 2402.05008 • Published • 19
Collections including paper arxiv:2407.07726
- RLHF Workflow: From Reward Modeling to Online RLHF
  Paper • 2405.07863 • Published • 67
- Chameleon: Mixed-Modal Early-Fusion Foundation Models
  Paper • 2405.09818 • Published • 125
- Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models
  Paper • 2405.15574 • Published • 52
- An Introduction to Vision-Language Modeling
  Paper • 2405.17247 • Published • 84
- iVideoGPT: Interactive VideoGPTs are Scalable World Models
  Paper • 2405.15223 • Published • 11
- Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models
  Paper • 2405.15574 • Published • 52
- An Introduction to Vision-Language Modeling
  Paper • 2405.17247 • Published • 84
- Matryoshka Multimodal Models
  Paper • 2405.17430 • Published • 30
- Can Large Language Models Understand Context?
  Paper • 2402.00858 • Published • 21
- OLMo: Accelerating the Science of Language Models
  Paper • 2402.00838 • Published • 78
- Self-Rewarding Language Models
  Paper • 2401.10020 • Published • 140
- SemScore: Automated Evaluation of Instruction-Tuned LLMs based on Semantic Textual Similarity
  Paper • 2401.17072 • Published • 25
- PaliGemma: A versatile 3B VLM for transfer
  Paper • 2407.07726 • Published • 64
- Vision language models are blind
  Paper • 2407.06581 • Published • 80
- PLLaVA : Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning
  Paper • 2404.16994 • Published • 35
- DeepSeek-VL: Towards Real-World Vision-Language Understanding
  Paper • 2403.05525 • Published • 39
- PaliGemma: A versatile 3B VLM for transfer
  Paper • 2407.07726 • Published • 64
- Vision language models are blind
  Paper • 2407.06581 • Published • 80
- CosmoCLIP: Generalizing Large Vision-Language Models for Astronomical Imaging
  Paper • 2407.07315 • Published • 6
- Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision
  Paper • 2407.06189 • Published • 24
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception
  Paper • 2407.08303 • Published • 17
- Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Model
  Paper • 2407.07053 • Published • 41
- PaliGemma: A versatile 3B VLM for transfer
  Paper • 2407.07726 • Published • 64
- LLaVA-NeXT-Interleave: Tackling Multi-image, Video, and 3D in Large Multimodal Models
  Paper • 2407.07895 • Published • 40