mikewang committed
Commit 79c8632
1 Parent(s): 9c0b1a2

Update README.md

Files changed (1)
  1. README.md +9 -3
README.md CHANGED
@@ -22,12 +22,13 @@ configs:
   path: data/train-*
 ---
 
+
 <h1 align="center"> Text-Based Reasoning About Vector Graphics </h1>
 
 <p align="center">
-<a href="https://mikewangwzhl.github.io/VDLM/">🌐 Homepage</a>
+<a href="https://mikewangwzhl.github.io/VDLM">🌐 Homepage</a>
 
-<a href="">📃 Paper</a>
+<a href="">📃 Paper (Coming Soon)</a>
 
 <a href="https://huggingface.co/datasets/mikewang/PVD-160K" >🤗 Data (PVD-160k)</a>
 
@@ -37,6 +38,11 @@ configs:
 
 </p>
 
-We propose **VDLM**, a text-based visual reasoning framework for vector graphics. VDLM operates on text-based visual descriptions—specifically, SVG representations and learned Primal Visual Descriptions (PVD), enabling zero-shot reasoning with an off-the-shelf LLM. We demonstrate that VDLM outperforms state-of-the-art large multimodal models, such as GPT-4V, across various multimodal reasoning tasks involving vector graphics. See our [paper]() for more details.
+
+We observe that current *large multimodal models (LMMs)* still struggle with seemingly straightforward reasoning tasks that require precise perception of low-level visual details, such as identifying spatial relations or solving simple mazes. In particular, this failure mode persists in question-answering tasks about vector graphics—images composed purely of 2D objects and shapes.
+
+![Teaser](https://github.com/MikeWangWZHL/VDLM/blob/main/figures/teaser.png?raw=true)
+
+To solve this challenge, we propose **Visually Descriptive Language Model (VDLM)**, a text-based visual reasoning framework for vector graphics. VDLM operates on text-based visual descriptions—specifically, SVG representations and learned Primal Visual Descriptions (PVD), enabling zero-shot reasoning with an off-the-shelf LLM. We demonstrate that VDLM outperforms state-of-the-art large multimodal models, such as GPT-4V, across various multimodal reasoning tasks involving vector graphics. See our [paper (coming soon)]() for more details.
 
 ![Overview](https://github.com/MikeWangWZHL/VDLM/blob/main/figures/overview.png?raw=true)