---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: conversations
    list:
    - name: role
      dtype: string
    - name: content
      dtype: string
  splits:
  - name: train
    num_bytes: 277884785
    num_examples: 160000
  download_size: 126665150
  dataset_size: 277884785
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
🌐 Homepage • 📃 Paper • 🤗 Data (PVD-160k) • 🤗 Model (PVD-160k-Mistral-7b) • 💻 Code
We propose **VDLM**, a text-based visual reasoning framework for vector graphics. VDLM operates on text-based visual descriptions, namely SVG representations and learned Primal Visual Descriptions (PVD), enabling zero-shot reasoning with an off-the-shelf LLM. We demonstrate that VDLM outperforms state-of-the-art large multimodal models, such as GPT-4V, across various multimodal reasoning tasks involving vector graphics. See our [paper]() for more details.

![Overview](https://github.com/MikeWangWZHL/VDLM/blob/main/figures/overview.png?raw=true)
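The `dataset_info` above declares each of the 160,000 train examples as an `id` string plus a `conversations` list of role/content turns. A minimal sketch of working with that schema; the commented `load_dataset` call and the repo id `mikewang/PVD-160K` are assumptions based on the card's links (check the Hub page for the exact id), and the field values in `example` are hypothetical:

```python
# To pull the actual data from the Hub (repo id is an assumption):
# from datasets import load_dataset
# ds = load_dataset("mikewang/PVD-160K", split="train")

# A hypothetical record matching the declared features:
example = {
    "id": "pvd_000001",  # hypothetical id value
    "conversations": [
        {"role": "user", "content": "<svg>...</svg> Describe the shapes."},
        {"role": "assistant", "content": '{"type": "circle", ...}'},
    ],
}

def to_chat_messages(record):
    """Flatten one record's conversations into (role, content) tuples,
    ready to feed a chat template."""
    return [(turn["role"], turn["content"]) for turn in record["conversations"]]
```

Each conversation alternates `user` and `assistant` turns, so records can be passed to a chat-style fine-tuning pipeline with no extra restructuring.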