Update README.md

README.md CHANGED

@@ -10,42 +10,36 @@ license: apache-2.0

This is the large version of Chinese CLIP, with ViT-L/14 as the image encoder and RoBERTa-wwm-base as the text encoder. Chinese CLIP is a simple implementation of CLIP on a large-scale dataset of around 200 million Chinese image-text pairs. For more details, please refer to our technical report https://arxiv.org/abs/2211.01335 and our official GitHub repo https://github.com/OFA-Sys/Chinese-CLIP

## Use with the official API

We provide a simple code snippet to show how to use the API.

```bash
# to install the latest stable release
pip install cn_clip

# or install from source code
git clone https://github.com/OFA-Sys/Chinese-CLIP.git
cd Chinese-CLIP
pip install -e .
```

After installation, use Chinese CLIP as shown below:

```python
import torch
from PIL import Image
```
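
A rough sketch (not the original snippet) of how the cn_clip API is typically used for zero-shot image-text matching, assuming the `load_from_name` and `tokenize` helpers from `cn_clip.clip`, the model's `encode_image` / `encode_text` / `get_similarity` methods, the `ViT-L-14` checkpoint name, and a local `pokemon.jpeg` image:

```python
import torch
from PIL import Image

import cn_clip.clip as clip
from cn_clip.clip import load_from_name

device = "cuda" if torch.cuda.is_available() else "cpu"
# download the checkpoint and get the model together with its image preprocessor
model, preprocess = load_from_name("ViT-L-14", device=device, download_root="./")
model.eval()

image = preprocess(Image.open("pokemon.jpeg")).unsqueeze(0).to(device)
text = clip.tokenize(["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # the normalized embeddings can be used directly for cross-modal retrieval
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    # image-text similarity scores and probabilities over the candidate texts
    logits_per_image, logits_per_text = model.get_similarity(image, text)
    probs = logits_per_image.softmax(dim=-1)
```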

However, if you are not satisfied with only using the API, feel free to check our GitHub repo https://github.com/OFA-Sys/Chinese-CLIP for more details about training and inference.

This is the large version of Chinese CLIP, with ViT-L/14 as the image encoder and RoBERTa-wwm-base as the text encoder. Chinese CLIP is a simple implementation of CLIP on a large-scale dataset of around 200 million Chinese image-text pairs. For more details, please refer to our technical report https://arxiv.org/abs/2211.01335 and our official GitHub repo https://github.com/OFA-Sys/Chinese-CLIP

## Use with the official API

We provide a simple code snippet to show how to use the API of Chinese-CLIP to compute the image and text embeddings and their similarities.

```python
from PIL import Image
import requests
from transformers import ChineseCLIPProcessor, ChineseCLIPModel

model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-large-patch14")
processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-large-patch14")

url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
# Squirtle, Bulbasaur, Charmander, Pikachu in English
texts = ["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]

# compute image feature
inputs = processor(images=image, return_tensors="pt")
image_features = model.get_image_features(**inputs)
image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True)  # normalize

# compute text features
inputs = processor(text=texts, padding=True, return_tensors="pt")
text_features = model.get_text_features(**inputs)
text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)  # normalize

# compute image-text similarity scores
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1)  # probs: [[0.0066, 0.0211, 0.0031, 0.9692]]
```
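
Since `image_features` and `text_features` above are already L2-normalized, their dot products are cosine similarities, so the candidate texts can also be ranked directly from the precomputed embeddings without another forward pass; this matches `logits_per_image` up to the model's learned temperature scaling. A minimal sketch reusing the variables from the snippet above:

```python
# cosine similarities between every image and every text: shape [num_images, num_texts]
cosine_sim = image_features @ text_features.T

# index of the best-matching caption for each image
best = cosine_sim.argmax(dim=-1)
print([texts[i] for i in best])  # should print ['皮卡丘'], consistent with the probabilities above
```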

However, if you are not satisfied with only using the API, feel free to check our GitHub repo https://github.com/OFA-Sys/Chinese-CLIP for more details about training and inference.
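
For inference only, it is usually worth disabling gradient tracking and, when a GPU is available, moving the model and inputs there. A minimal sketch of this, assuming the `model`, `processor`, `texts`, and `image` objects created in the snippet above:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()

# tokenize/preprocess once, move the batch to the same device as the model
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True).to(device)
with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)
```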