---
license: apache-2.0
---

# Chinese-CLIP-ViT-Large-Patch14

## Introduction

This is the large version of Chinese CLIP, with ViT-L/14 as the image encoder and RoBERTa-wwm-base as the text encoder. Chinese CLIP is a simple implementation of CLIP on a large-scale dataset of around 200 million Chinese image-text pairs. For more details, please refer to our technical report https://arxiv.org/abs/2211.01335 and our official GitHub repo https://github.com/OFA-Sys/Chinese-CLIP.
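
For readers new to CLIP-style pretraining, the sketch below illustrates the standard symmetric image-text contrastive (InfoNCE) objective that such models optimize. It is a simplified illustration with made-up tensor names, not code taken from the Chinese-CLIP repository.

```python
# Minimal sketch of the symmetric image-text contrastive (InfoNCE) loss used in
# CLIP-style pretraining; shapes and names are illustrative only.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_features: torch.Tensor,
                          text_features: torch.Tensor,
                          logit_scale: torch.Tensor) -> torch.Tensor:
    # image_features, text_features: (batch, dim), already L2-normalized;
    # logit_scale is the learned temperature scalar.
    logits_per_image = logit_scale * image_features @ text_features.t()
    logits_per_text = logits_per_image.t()
    # The i-th image is paired with the i-th text, so the targets are the diagonal.
    labels = torch.arange(image_features.size(0), device=image_features.device)
    loss_i = F.cross_entropy(logits_per_image, labels)
    loss_t = F.cross_entropy(logits_per_text, labels)
    return (loss_i + loss_t) / 2
```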

## Use with the official API

We provide a simple code snippet to show how to use the API for Chinese-CLIP. For starters, please install cn_clip:
```bash
# to install the latest stable release
pip install cn_clip

# or install from source code
git clone https://github.com/OFA-Sys/Chinese-CLIP.git
cd Chinese-CLIP
pip install -e .
```

After installation, use Chinese CLIP as shown below:
```python
import torch
from PIL import Image

import cn_clip.clip as clip
from cn_clip.clip import load_from_name, available_models
print("Available models:", available_models())
# Available models: ['ViT-B-16', 'ViT-L-14', 'ViT-L-14-336', 'ViT-H-14', 'RN50']

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = load_from_name("ViT-B-16", device=device, download_root='./')
model.eval()
image = preprocess(Image.open("examples/pokemon.jpeg")).unsqueeze(0).to(device)
text = clip.tokenize(["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize the features. Please use the normalized features for downstream tasks.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    logits_per_image, logits_per_text = model.get_similarity(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Label probs:", probs)  # [[1.268734e-03 5.436878e-02 6.795761e-04 9.436829e-01]]
```
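
The normalized features above can also be compared directly with a dot product. The short continuation below (reusing the `image_features` and `text_features` variables from the snippet) is an illustrative addition, not part of the official example:

```python
# Continuation of the snippet above: rank candidate texts for each image by
# cosine similarity between the already-normalized features.
similarity = image_features @ text_features.T   # (num_images, num_texts) cosine scores
best_match = similarity.argmax(dim=-1)          # index of the best-matching text per image
print("Best matching caption index:", best_match.tolist())
```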

However, if you are not satisfied with only using the API, feel free to check our GitHub repo https://github.com/OFA-Sys/Chinese-CLIP for more details about training and inference.
<br><br>

## Results

**MUGE Text-to-Image Retrieval** (MR denotes mean recall, the average of R@1/R@5/R@10):
<table border="1" width="100%">
<tr align="center">
<th>Setup</th><th colspan="4">Zero-shot</th><th colspan="4">Finetune</th>
</tr>
<tr align="center">
<td>Metric</td><td>R@1</td><td>R@5</td><td>R@10</td><td>MR</td><td>R@1</td><td>R@5</td><td>R@10</td><td>MR</td>
</tr>
<tr align="center">
<td width="120%">Wukong</td><td>42.7</td><td>69.0</td><td>78.0</td><td>63.2</td><td>52.7</td><td>77.9</td><td>85.6</td><td>72.1</td>
</tr>
<tr align="center">
<td width="120%">R2D2</td><td>49.5</td><td>75.7</td><td>83.2</td><td>69.5</td><td>60.1</td><td>82.9</td><td>89.4</td><td>77.5</td>
</tr>
<tr align="center">
<td width="120%">CN-CLIP</td><td>63.0</td><td>84.1</td><td>89.2</td><td>78.8</td><td>68.9</td><td>88.7</td><td>93.1</td><td>83.6</td>
</tr>
</table>
<br>

**Flickr30K-CN Retrieval**:
<table border="1" width="120%">
<tr align="center">
<th>Task</th><th colspan="6">Text-to-Image</th><th colspan="6">Image-to-Text</th>
</tr>
<tr align="center">
<th>Setup</th><th colspan="3">Zero-shot</th><th colspan="3">Finetune</th><th colspan="3">Zero-shot</th><th colspan="3">Finetune</th>
</tr>
<tr align="center">
<td>Metric</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td>
</tr>
<tr align="center">
<td width="120%">Wukong</td><td>51.7</td><td>78.9</td><td>86.3</td><td>77.4</td><td>94.5</td><td>97.0</td><td>76.1</td><td>94.8</td><td>97.5</td><td>92.7</td><td>99.1</td><td>99.6</td>
</tr>
<tr align="center">
<td width="120%">R2D2</td><td>60.9</td><td>86.8</td><td>92.7</td><td>84.4</td><td>96.7</td><td>98.4</td><td>77.6</td><td>96.7</td><td>98.9</td><td>95.6</td><td>99.8</td><td>100.0</td>
</tr>
<tr align="center">
<td width="120%">CN-CLIP</td><td>71.2</td><td>91.4</td><td>95.5</td><td>83.8</td><td>96.9</td><td>98.6</td><td>81.6</td><td>97.5</td><td>98.8</td><td>95.3</td><td>99.7</td><td>100.0</td>
</tr>
</table>
<br>

**COCO-CN Retrieval**:
<table border="1" width="100%">
<tr align="center">
<th>Task</th><th colspan="6">Text-to-Image</th><th colspan="6">Image-to-Text</th>
</tr>
<tr align="center">
<th>Setup</th><th colspan="3">Zero-shot</th><th colspan="3">Finetune</th><th colspan="3">Zero-shot</th><th colspan="3">Finetune</th>
</tr>
<tr align="center">
<td>Metric</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td>
</tr>
<tr align="center">
<td width="120%">Wukong</td><td>53.4</td><td>80.2</td><td>90.1</td><td>74.0</td><td>94.4</td><td>98.1</td><td>55.2</td><td>81.0</td><td>90.6</td><td>73.3</td><td>94.0</td><td>98.0</td>
</tr>
<tr align="center">
<td width="120%">R2D2</td><td>56.4</td><td>85.0</td><td>93.1</td><td>79.1</td><td>96.5</td><td>98.9</td><td>63.3</td><td>89.3</td><td>95.7</td><td>79.3</td><td>97.1</td><td>98.7</td>
</tr>
<tr align="center">
<td width="120%">CN-CLIP</td><td>69.2</td><td>89.9</td><td>96.1</td><td>81.5</td><td>96.9</td><td>99.1</td><td>63.0</td><td>86.6</td><td>92.9</td><td>83.5</td><td>97.3</td><td>99.2</td>
</tr>
</table>
<br>
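
For reference, the R@K and MR numbers in the retrieval tables above can be computed from an image-text similarity matrix roughly as sketched below. This is an illustrative reimplementation, not the official evaluation script.

```python
# Illustrative computation of Recall@K and mean recall (MR) for retrieval.
import torch

def recall_at_k(similarity: torch.Tensor, gt: torch.Tensor, k: int) -> float:
    # similarity: (num_queries, num_candidates) scores;
    # gt: (num_queries,) index of the correct candidate for each query.
    topk = similarity.topk(k, dim=-1).indices            # (num_queries, k)
    hits = (topk == gt.unsqueeze(-1)).any(dim=-1)        # target appears in top-k?
    return hits.float().mean().item() * 100

def mean_recall(similarity: torch.Tensor, gt: torch.Tensor) -> float:
    # MR as reported above: the average of R@1, R@5 and R@10.
    return sum(recall_at_k(similarity, gt, k) for k in (1, 5, 10)) / 3
```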

**Zero-shot Image Classification**:
<table border="1" width="100%">
<tr align="center">
<th>Task</th><th>CIFAR10</th><th>CIFAR100</th><th>DTD</th><th>EuroSAT</th><th>FER</th><th>FGVC</th><th>KITTI</th><th>MNIST</th><th>PC</th><th>VOC</th>
</tr>
<tr align="center">
<td width="150%">GIT</td><td>88.5</td><td>61.1</td><td>42.9</td><td>43.4</td><td>41.4</td><td>6.7</td><td>22.1</td><td>68.9</td><td>50.0</td><td>80.2</td>
</tr>
<tr align="center">
<td width="150%">ALIGN</td><td>94.9</td><td>76.8</td><td>66.1</td><td>52.1</td><td>50.8</td><td>25.0</td><td>41.2</td><td>74.0</td><td>55.2</td><td>83.0</td>
</tr>
<tr align="center">
<td width="150%">CLIP</td><td>94.9</td><td>77.0</td><td>56.0</td><td>63.0</td><td>48.3</td><td>33.3</td><td>11.5</td><td>79.0</td><td>62.3</td><td>84.0</td>
</tr>
<tr align="center">
<td width="150%">Wukong</td><td>95.4</td><td>77.1</td><td>40.9</td><td>50.3</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td>
</tr>
<tr align="center">
<td width="150%">CN-CLIP</td><td>96.0</td><td>79.7</td><td>51.2</td><td>52.0</td><td>55.1</td><td>26.2</td><td>49.9</td><td>79.4</td><td>63.5</td><td>84.9</td>
</tr>
</table>
<br>
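
Zero-shot classification with Chinese CLIP simply ranks class-name prompts against the image, using the same cn_clip API as the snippet above. The sketch below is illustrative; the class names and prompt template are placeholders and are not the ones used to produce the benchmark numbers in the table.

```python
# Illustrative zero-shot classification with the cn_clip API shown above.
import torch
from PIL import Image
import cn_clip.clip as clip
from cn_clip.clip import load_from_name

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = load_from_name("ViT-B-16", device=device, download_root='./')
model.eval()

classes = ["猫", "狗", "飞机", "汽车"]                      # candidate labels (illustrative)
texts = clip.tokenize([f"一张{c}的照片" for c in classes]).to(device)
image = preprocess(Image.open("examples/pokemon.jpeg")).unsqueeze(0).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(texts)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    # Cosine similarity against every class prompt; the highest score wins.
    scores = (image_features @ text_features.T).squeeze(0)

print("Predicted class:", classes[scores.argmax().item()])
```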

## Citation

If you find Chinese CLIP helpful, feel free to cite our paper. Thanks for your support!

```bibtex
@article{chinese-clip,
  title={Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese},
  author={Yang, An and Pan, Junshu and Lin, Junyang and Men, Rui and Zhang, Yichang and Zhou, Jingren and Zhou, Chang},
  journal={arXiv preprint arXiv:2211.01335},
  year={2022}
}
```
<br>