# [DCNv4](https://arxiv.org/pdf/2401.06197.pdf)
## News
- `Jan 15, 2024`: 🚀 Compared with InternImage, the new FlashInternImage powered by DCNv4 delivers faster inference, faster convergence, and better performance!
- `Jan 15, 2024`: 🚀 "DCNv4" is released!
## Introduction
We introduce Deformable Convolution v4 (DCNv4), a highly efficient and effective operator designed for a broad spectrum of vision applications. DCNv4 addresses the limitations of its predecessor, DCNv3, with two key enhancements: (1) removing softmax normalization in spatial aggregation to enhance its dynamic property and expressive power, and (2) optimizing memory access to minimize redundant operations for speedup. These improvements yield significantly faster convergence than DCNv3 and a substantial increase in processing speed, with DCNv4 achieving more than three times the forward speed of DCNv3.
DCNv4 demonstrates exceptional performance across various tasks, including image classification, instance and semantic segmentation, and, notably, image generation.
When integrated into generative models such as the U-Net in latent diffusion models, DCNv4 outperforms its baseline, underscoring its potential to enhance generative models.
In practical applications, replacing DCNv3 with DCNv4 in the InternImage model to create FlashInternImage yields a speed increase of up to 80% and further performance improvements without any other modifications.
The advancements in speed and efficiency of DCNv4, combined with its robust performance across diverse vision tasks, show its potential as a foundational building block for future vision models.
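
To make the first enhancement concrete: both operators compute a dynamic spatial aggregation of the form $y = \sum_{k=1}^{K} m_k \, x(p_0 + p_k + \Delta p_k)$ over $K$ sampled locations, but DCNv3 passes the dynamic weights $m_k$ through a softmax so they sum to one, whereas DCNv4 applies them unnormalized. The following is a minimal PyTorch sketch of that difference for a single query position; it is a conceptual toy built on `grid_sample` (the function name `aggregate` and the setup are ours for illustration), not the optimized fused kernel described in the paper.

```python
# Conceptual sketch, not the official DCNv4 CUDA kernel: dynamic spatial
# aggregation at a single query position, with and without the softmax
# normalization that DCNv4 removes.
import torch
import torch.nn.functional as F

def aggregate(x, sample_points, weights, normalize_weights):
    """x: (B, C, H, W) feature map.
    sample_points: (B, K, 2) sampling locations in [-1, 1] grid coordinates.
    weights: (B, K) dynamic aggregation weights predicted per query.
    normalize_weights: True mimics DCNv3 (softmax), False mimics DCNv4 (raw).
    Returns a (B, C) aggregated feature vector for the query position.
    """
    B, K, _ = sample_points.shape
    # Bilinearly sample the K deformed locations from the feature map.
    grid = sample_points.view(B, K, 1, 2)                  # grid_sample wants (B, H_out, W_out, 2)
    sampled = F.grid_sample(x, grid, align_corners=False)  # (B, C, K, 1)
    sampled = sampled.squeeze(-1)                          # (B, C, K)
    if normalize_weights:
        weights = weights.softmax(dim=-1)  # DCNv3-style: weights sum to 1
    # DCNv4-style: weights are applied as-is, so their scale is unbounded,
    # which is what gives the operator its stronger dynamic range.
    return torch.einsum("bck,bk->bc", sampled, weights)

x = torch.randn(2, 8, 16, 16)          # toy feature map
pts = torch.rand(2, 9, 2) * 2 - 1      # 9 sampling points per query, in [-1, 1]
w = torch.randn(2, 9)                  # dynamic weights
v3_out = aggregate(x, pts, w, normalize_weights=True)
v4_out = aggregate(x, pts, w, normalize_weights=False)
print(v3_out.shape, v4_out.shape)      # torch.Size([2, 8]) torch.Size([2, 8])
```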
## Released Models
<details>
<summary> ImageNet Image Classification </summary>
<br>
<div>
| name | pretrain | resolution | acc@1 | #param | download |
| :------------: | :----------: | :--------: | :---: | :----: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| FlashInternImage-T | ImageNet-1K | 224x224 | 83.6 | 30M | [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/flash_intern_image_t_1k_224.pth) \| [cfg](classification/configs/flash_intern_image_t_1k_224.yaml) |
| FlashInternImage-S | ImageNet-1K | 224x224 | 84.4 | 50M | [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/flash_intern_image_s_1k_224.pth) \| [cfg](classification/configs/flash_intern_image_s_1k_224.yaml) |
| FlashInternImage-B | ImageNet-1K | 224x224 | 84.9 | 97M | [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/flash_intern_image_b_1k_224.pth) \| [cfg](classification/configs/flash_intern_image_b_1k_224.yaml) |
| FlashInternImage-L | ImageNet-22K | 384x384 | 88.1 | 223M | [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/flash_internimage_l_22kto1k_384.pth) \| [cfg](classification/configs/flash_intern_image_l_22kto1k_384.yaml) |
</div>
</details>
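
For reference, the `ckpt` links in these tables point to standard PyTorch checkpoint files hosted on Hugging Face, so they can be downloaded and inspected with stock `torch.hub` utilities. Below is a minimal sketch (the nesting of weights under a `"model"` key is an assumption about the checkpoint layout; building the actual FlashInternImage model additionally requires the classification code and the matching `cfg` file):

```python
# Minimal sketch: fetch one released checkpoint and list a few of its tensors.
# Assumes only `torch` is installed.
import torch

url = ("https://huggingface.co/OpenGVLab/DCNv4/resolve/main/"
       "flash_intern_image_t_1k_224.pth")
ckpt = torch.hub.load_state_dict_from_url(url, map_location="cpu")
# Training checkpoints often nest the weights under a "model" key (assumption).
state = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt
print(f"{len(state)} entries")
for name, value in list(state.items())[:5]:
    shape = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
    print(name, shape)
```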
<details>
<summary> COCO Object Detection and Instance Segmentation </summary>
<br>
<div>

| backbone | method | schd | box mAP | mask mAP | Config | Download |
| :-----------------:| :----------: | :---------: | :-----: |:------: | :-----: | :---: |
|
44 |
+
| FlashInternImage-T | Mask R-CNN | 1x | 48.0 | 43.1 | [config](./detection/configs/coco/mask_rcnn_flash_intern_image_t_fpn_1x_coco.py) | [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/mask_rcnn_flash_internimage_t_fpn_1x_coco.pth) \| [log](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/mask_rcnn_flash_internimage_t_fpn_1x_coco.log) |
| FlashInternImage-T | Mask R-CNN | 3x | 49.5 | 44.0 | [config](./detection/configs/coco/mask_rcnn_flash_intern_image_t_fpn_3x_coco.py) | [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/mask_rcnn_flash_internimage_t_fpn_3x_coco.pth) \| [log](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/mask_rcnn_flash_internimage_t_fpn_3x_coco.log) |
| FlashInternImage-S | Mask R-CNN | 1x | 49.2 | 44.0 | [config](./detection/configs/coco/mask_rcnn_flash_intern_image_s_fpn_1x_coco.py) | [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/mask_rcnn_flash_internimage_s_fpn_1x_coco.pth) \| [log](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/mask_rcnn_flash_internimage_s_fpn_1x_coco.log) |
| FlashInternImage-S | Mask R-CNN | 3x | 50.5 | 44.9 | [config](./detection/configs/coco/mask_rcnn_flash_intern_image_s_fpn_3x_coco.py) | [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/mask_rcnn_flash_internimage_s_fpn_3x_coco.pth) \| [log](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/mask_rcnn_flash_internimage_s_fpn_3x_coco.log) |
| FlashInternImage-B | Mask R-CNN | 1x | 50.1 | 44.5 | [config](./detection/configs/coco/mask_rcnn_flash_intern_image_b_fpn_1x_coco.py) | [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/mask_rcnn_flash_internimage_b_fpn_1x_coco.pth) \| [log](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/mask_rcnn_flash_internimage_b_fpn_1x_coco.log) |
| FlashInternImage-B | Mask R-CNN | 3x | 50.6 | 45.4 | [config](./detection/configs/coco/mask_rcnn_flash_intern_image_b_fpn_3x_coco.py) | [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/mask_rcnn_flash_internimage_b_fpn_3x_coco.pth) \| [log](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/mask_rcnn_flash_internimage_b_fpn_3x_coco.log) |

| backbone | method | schd | box mAP | mask mAP | Config | Download |
| :------------:| :---------: | :---------: | :-----: | :------: | :---: | :---: |
| FlashInternImage-L |Cascade Mask R-CNN | 1x | 55.6 | 48.2 | [config](./detection/configs/coco/cascade_flash_intern_image_l_fpn_1x_coco.py) | [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/cascade_flash_internimage_l_fpn_1x_coco.pth) \| [log](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/cascade_flash_internimage_l_fpn_1x_coco.log)
| FlashInternImage-L |Cascade Mask R-CNN | 3x | 56.7 | 48.9 | [config](./detection/configs/coco/cascade_flash_intern_image_l_fpn_3x_coco.py) | [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/cascade_flash_internimage_l_fpn_3x_coco.pth) |
| backbone | method | lr type | pretrain | schd | box mAP | Config | Download |
| :------------: | :---------: | :---------: |:---------: | :---------: | :-----: | :---: | :-----: |
| FlashInternImage-T |DINO| layer-wise lr | ImageNet-1K | 1x | 54.7 | [config](./detection/configs/coco/dino_4scale_flash_internimage_t_1x_coco.py) | [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/dino_4scale_flash_internimage_t_1x_coco.pth) \| [log](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/dino_4scale_flash_internimage_t_1x_coco.json) |
| FlashInternImage-S |DINO | layer-wise lr | ImageNet-1K | 1x | 55.3 | [config](./detection/configs/coco/dino_4scale_flash_internimage_s_1x_coco.py) | [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/dino_4scale_flash_internimage_s_1x_coco.pth) \| [log](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/dino_4scale_flash_internimage_s_1x_coco.log) |
| FlashInternImage-B |DINO| layer-wise lr | ImageNet-1K | 1x | 56.0 | [config](./detection/configs/coco/dino_4scale_flash_internimage_b_1x_coco.py) | [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/dino_4scale_flash_internimage_b_1x_coco.pth) \| [log](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/dino_4scale_flash_internimage_b_1x_coco.log) |
| FlashInternImage-L |DINO | 0.1x backbone lr | ImageNet-22K | 1x | 58.8 | [config](./detection/configs/coco/dino_4scale_flash_internimage_l_1x_coco.py) | [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/dino_4scale_flash_internimage_l_1x_coco.pth) \| [log](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/dino_4scale_flash_internimage_l_1x_coco.log) |
</div>
</details>
<details>
<summary> ADE20K Semantic Segmentation </summary>
<br>
<div>
| backbone | method | resolution | mIoU (ss/ms) | Config | Download |
|:--------------:|:----------:|:----------:|:-----------:|:-----------:|:----------:|
| FlashInternImage-T|UperNet | 512x512 | 49.3 / 50.3 | [config](./segmentation/configs/ade20k/upernet_flash_internimage_t_512_160k_ade20k.py) | [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/upernet_flash_internimage_t_512_160k_ade20k.pth) \| [log](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/upernet_flash_internimage_t_512_160k_ade20k.log) |
| FlashInternImage-S |UperNet | 512x512 | 50.6 / 51.6 | [config](./segmentation/configs/ade20k/upernet_flash_internimage_s_512_160k_ade20k.py) | [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/upernet_flash_internimage_s_512_160k_ade20k.pth) \| [log](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/upernet_flash_internimage_s_512_160k_ade20k.log) |
| FlashInternImage-B |UperNet | 512x512 | 52.0 / 52.6 | [config](./segmentation/configs/ade20k/upernet_flash_internimage_b_512_160k_ade20k.py) | [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/upernet_flash_internimage_b_512_160k_ade20k.pth) \| [log](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/upernet_flash_internimage_s_512_160k_ade20k.log) |
| FlashInternImage-L |UperNet | 640x640 | 55.6 / 56.0 | [config](./segmentation/configs/ade20k/upernet_flash_internimage_l_640_160k_ade20k.py)| [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/upernet_flash_internimage_l_640_160k_ade20k.pth) \| [log](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/upernet_flash_internimage_l_640_160k_ade20k.log) |
| backbone | method | resolution | mIoU (ss) | Config | Download |
|:--------------:|:----------:|:----------:|:-----------:|:-----------:|:----------:|
| FlashInternImage-T |Mask2Former| 512x512 | 51.2 | [config](./segmentation/configs/ade20k/mask2former_flash_internimage_t_512_160k_ade20k_ss.py) | [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/mask2former_flash_internimage_t_512_160k_ade20k_ss.pth) \| [log](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/mask2former_flash_internimage_t_512_160k_ade20k_ss.log) |
| FlashInternImage-S |Mask2Former| 640x640 | 52.6 | [config](./segmentation/configs/ade20k/mask2former_flash_internimage_s_640_160k_ade20k_ss.py) | [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/mask2former_flash_internimage_s_640_160k_ade20k_ss.pth) \| [log](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/mask2former_flash_internimage_s_640_160k_ade20k_ss.log) |
| FlashInternImage-B |Mask2Former| 640x640 | 53.4 | [config](./segmentation/configs/ade20k/mask2former_flash_internimage_b_640_160k_ade20k_ss.py) | [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/mask2former_flash_internimage_b_640_160k_ade20k_ss.pth) \| [log](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/mask2former_flash_internimage_b_640_160k_ade20k_ss.log) |
| FlashInternImage-L |Mask2Former| 640x640 | 56.7 | [config](./segmentation/configs/ade20k/mask2former_flash_internimage_l_640_160k_ade20k_ss.py)| [ckpt](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/mask2former_flash_internimage_l_640_160k_ade20k_ss.pth) \| [log](https://huggingface.co/OpenGVLab/DCNv4/resolve/main/mask2former_flash_internimage_l_640_160k_ade20k_ss.log) |
</div>
</details>
## Citations
If this work is helpful for your research, please consider citing the following BibTeX entries.

```bibtex
@article{xiong2024efficient,
  title={Efficient Deformable ConvNets: Rethinking Dynamic and Sparse Operator for Vision Applications},
  author={Xiong, Yuwen and Li, Zhiqi and Chen, Yuntao and Wang, Feng and Zhu, Xizhou and Luo, Jiapeng and Wang, Wenhai and Lu, Tong and Li, Hongsheng and Qiao, Yu and Lu, Lewei and Zhou, Jie and Dai, Jifeng},
  journal={arXiv preprint arXiv:2401.06197},
  year={2024}
}

@article{wang2022internimage,
  title={InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions},
  author={Wang, Wenhai and Dai, Jifeng and Chen, Zhe and Huang, Zhenhang and Li, Zhiqi and Zhu, Xizhou and Hu, Xiaowei and Lu, Tong and Lu, Lewei and Li, Hongsheng and others},
  journal={arXiv preprint arXiv:2211.05778},
  year={2022}
}

@inproceedings{zhu2022uni,
  title={Uni-perceiver: Pre-training unified architecture for generic perception for zero-shot and few-shot tasks},
  author={Zhu, Xizhou and Zhu, Jinguo and Li, Hao and Wu, Xiaoshi and Li, Hongsheng and Wang, Xiaohua and Dai, Jifeng},
  booktitle={CVPR},
  pages={16804--16815},
  year={2022}
}

@article{zhu2022unimoe,
  title={Uni-perceiver-moe: Learning sparse generalist models with conditional moes},
  author={Zhu, Jinguo and Zhu, Xizhou and Wang, Wenhai and Wang, Xiaohua and Li, Hongsheng and Wang, Xiaogang and Dai, Jifeng},
  journal={arXiv preprint arXiv:2206.04674},
  year={2022}
}

@article{li2022uni,
  title={Uni-Perceiver v2: A Generalist Model for Large-Scale Vision and Vision-Language Tasks},
  author={Li, Hao and Zhu, Jinguo and Jiang, Xiaohu and Zhu, Xizhou and Li, Hongsheng and Yuan, Chun and Wang, Xiaohua and Qiao, Yu and Wang, Xiaogang and Wang, Wenhai and others},
  journal={arXiv preprint arXiv:2211.09808},
  year={2022}
}

@article{yang2022bevformer,
  title={BEVFormer v2: Adapting Modern Image Backbones to Bird's-Eye-View Recognition via Perspective Supervision},
  author={Yang, Chenyu and Chen, Yuntao and Tian, Hao and Tao, Chenxin and Zhu, Xizhou and Zhang, Zhaoxiang and Huang, Gao and Li, Hongyang and Qiao, Yu and Lu, Lewei and others},
  journal={arXiv preprint arXiv:2211.10439},
  year={2022}
}

@article{su2022towards,
  title={Towards All-in-one Pre-training via Maximizing Multi-modal Mutual Information},
  author={Su, Weijie and Zhu, Xizhou and Tao, Chenxin and Lu, Lewei and Li, Bin and Huang, Gao and Qiao, Yu and Wang, Xiaogang and Zhou, Jie and Dai, Jifeng},
  journal={arXiv preprint arXiv:2211.09807},
  year={2022}
}

@inproceedings{li2022bevformer,
  title={BEVFormer: Learning bird's-eye-view representation from multi-camera images via spatiotemporal transformers},
  author={Li, Zhiqi and Wang, Wenhai and Li, Hongyang and Xie, Enze and Sima, Chonghao and Lu, Tong and Qiao, Yu and Dai, Jifeng},
  booktitle={ECCV},
  pages={1--18},
  year={2022}
}
```