JustinLin610 committed
Commit: 506b8ba
Parent(s): 31d79d2
Update README.md
README.md CHANGED
@@ -18,21 +18,28 @@ We're excited to unveil **Qwen2-VL**, the latest iteration of our Qwen-VL model,
#### Key Enhancements:

-* **
-* **
-* **
-* **Expanded Multilingual Support**: We've broadened our language capabilities to better serve a diverse global user base, making Qwen2-VL more accessible and effective across different linguistic contexts.
+* **SoTA understanding of images of various resolution & ratio**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.
+
+* **Understanding videos of 20min+**: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.
+
+* **Agent that can operate your mobiles, robots, ...**: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions.
+
+* **Multilingual Support**: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.

#### Model Architecture Updates:

* **Naive Dynamic Resolution**: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens, offering a more human-like visual processing experience.

+<p align="center">
+<img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/qwen2_vl.jpg" width="80%"/>
+<p>
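
For a rough sense of how arbitrary resolutions can map to a variable number of visual tokens, a minimal sketch is below; the 14-pixel patch edge, the 2×2 patch merging, and the rounding rule are illustrative assumptions, not the model's confirmed configuration.

```python
# Sketch: estimate how many visual tokens an image of a given resolution
# might produce under dynamic resolution. PATCH and MERGE are assumed
# constants for illustration only.
PATCH = 14   # assumed ViT patch edge in pixels
MERGE = 2    # assumed 2x2 merging of patches into one visual token

def visual_token_count(height: int, width: int) -> int:
    """Round the image to whole merged patches and count the resulting tokens."""
    unit = PATCH * MERGE                      # pixels covered by one merged-token edge
    h = max(unit, round(height / unit) * unit)
    w = max(unit, round(width / unit) * unit)
    return (h // unit) * (w // unit)

for h, w in [(224, 224), (512, 768), (1080, 1920)]:
    print(f"{w}x{h} -> ~{visual_token_count(h, w)} visual tokens")
```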

* **Multimodal Rotary Position Embedding (M-ROPE)**: Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing its multimodal processing capabilities.

+<p align="center">
+<img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/mrope.png" width="80%"/>
+<p>
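
To make the decomposition concrete, here is a minimal sketch of how per-axis position ids could be assigned: text tokens advance all three components together (1D), while image and video tokens spread over a (temporal, height, width) grid. The layout and helper functions are illustrative, not Qwen2-VL's actual implementation.

```python
# Sketch of M-ROPE-style position ids: every token carries a
# (temporal, height, width) triple. This is an illustration only.
from typing import List, Tuple

def text_positions(start: int, num_tokens: int) -> List[Tuple[int, int, int]]:
    # 1D text: the three components share the same running index.
    return [(start + i, start + i, start + i) for i in range(num_tokens)]

def visual_positions(start: int, frames: int, h: int, w: int) -> List[Tuple[int, int, int]]:
    # 2D image (frames=1) or 3D video: a temporal index per frame,
    # height/width indices over the visual-token grid.
    return [(start + t, start + y, start + x)
            for t in range(frames) for y in range(h) for x in range(w)]

# Example: a 4-token text prefix followed by a 2x3 grid of image tokens.
prefix = text_positions(0, 4)
image = visual_positions(4, frames=1, h=2, w=3)
print(prefix + image)
```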

We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 7B Qwen2-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub](https://github.com/QwenLM/Qwen2-VL).
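
A minimal loading sketch with Hugging Face transformers follows; the `Qwen2VLForConditionalGeneration` class, the `Qwen/Qwen2-VL-7B-Instruct` repo id, and the chat-template flow are assumed from the surrounding ecosystem rather than taken from this diff — see the linked Blog and GitHub for the official quickstart.

```python
# Hedged sketch: load the instruction-tuned 7B checkpoint and ask one
# question about one image. Repo id and flow are assumptions, not this
# card's official recipe.
from PIL import Image
import requests
import torch
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "Qwen/Qwen2-VL-7B-Instruct"  # assumed repo id for the 7B instruct model
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# One image + one question, formatted with the model's chat template.
image = Image.open(requests.get(
    "https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/qwen2_vl.jpg",
    stream=True).raw)
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this figure."},
]}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

# Generate and strip the prompt tokens from the decoded output.
output_ids = model.generate(**inputs, max_new_tokens=128)
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0]
print(answer)
```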