Update README.md
README.md
CHANGED
@@ -21,3 +21,22 @@ configs:
  - split: train
    path: data/train-*
---
+
+<h1 align="center"> Text-Based Reasoning About Vector Graphics </h1>
+
+<p align="center">
+<a href="https://mikewangwzhl.github.io/vdlm.github.io/">🌐 Homepage</a>
+•
+<a href="">📃 Paper</a>
+•
+<a href="https://huggingface.co/datasets/mikewang/PVD-160K">🤗 Data (PVD-160k)</a>
+•
+<a href="https://huggingface.co/mikewang/PVD-160k-Mistral-7b">🤗 Model (PVD-160k-Mistral-7b)</a>
+•
+<a href="https://github.com/MikeWangWZHL/VDLM">💻 Code</a>
+
+</p>
+
+We propose **VDLM**, a text-based visual reasoning framework for vector graphics. VDLM operates on text-based visual descriptions—specifically, SVG representations and learned Primal Visual Descriptions (PVD), enabling zero-shot reasoning with an off-the-shelf LLM. We demonstrate that VDLM outperforms state-of-the-art large multimodal models, such as GPT-4V, across various multimodal reasoning tasks involving vector graphics. See our [paper]() for more details.
+
+![Overview](https://github.com/MikeWangWZHL/VDLM/blob/main/figures/overview.png?raw=true)
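
For reference, the train split declared in the YAML header above can be loaded with the 🤗 `datasets` library. This is a minimal sketch that assumes only the repository ID from the data link above; the dataset's column names are not documented in this diff, so the example simply inspects the schema:

```python
from datasets import load_dataset

# Load the train split of PVD-160K from the Hugging Face Hub
# (repository ID taken from the data link above).
ds = load_dataset("mikewang/PVD-160K", split="train")

# Column names are not specified in this diff, so just inspect
# the schema and the first example.
print(ds.features)
print(ds[0])
```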