ashvardanian committed
Commit
214f405
1 Parent(s): 1b11455

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,131 @@
---
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- clip
- vision
datasets:
- Ziyang/yfcc15m
- conceptual_captions
---
<h1 align="center">UForm</h1>
<h3 align="center">
Multi-Modal Inference Library<br/>
For Semantic Search Applications<br/>
</h3>

---

UForm is a Multi-Modal Inference package, designed to encode Multi-Lingual Texts, Images, and, soon, Audio, Video, and Documents, into a shared vector space!

This is the model card of the __English-only model__ with:

* 12-layer BERT (6 layers for unimodal encoding and the rest for multimodal encoding)
* ViT-L/14 (image resolution is 224x224)
* Multiple embedding sizes: 64, 256, 512, 768

If you need a multilingual model, check [this one](https://huggingface.co/unum-cloud/uform-vl-multilingual).

## Evaluation

The following metrics were obtained with multimodal re-ranking (text-to-image retrieval):

| Dataset           | Recall@1 | Recall@5 | Recall@10 |
| :---------------- | -------: | -------: | --------: |
| Zero-Shot Flickr  |    0.693 |    0.875 |     0.923 |
| Zero-Shot MS-COCO |    0.382 |    0.617 |     0.728 |

ImageNet-Top1: 0.518 \
ImageNet-Top5: 0.756

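Here Recall@K is the fraction of queries whose correct match appears among the top-K retrieved candidates. Below is a minimal, self-contained sketch of the metric, illustrative only and not the exact evaluation harness behind the numbers above:

```python
import torch

def recall_at_k(similarity: torch.Tensor, k: int) -> float:
    """similarity[i, j] scores query i against candidate j; the correct
    candidate for query i is assumed to sit at index i (the diagonal)."""
    top_k = similarity.topk(k, dim=1).indices                 # (num_queries, k)
    targets = torch.arange(similarity.shape[0]).unsqueeze(1)  # (num_queries, 1)
    hits = (top_k == targets).any(dim=1)
    return hits.float().mean().item()

# Toy example: 4 queries, correct match boosted on the diagonal.
scores = torch.randn(4, 4) + 5 * torch.eye(4)
print(recall_at_k(scores, k=1), recall_at_k(scores, k=3))
```
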
## Installation

```bash
pip install uform[torch]
```

## Usage

To load the model:

```python
import uform

model, processor = uform.get_model('unum-cloud/uform-vl-english-large')
```

To encode data:

```python
from PIL import Image

text = 'a small red panda in a zoo'
image = Image.open('red_panda.jpg')

image_data = processor.preprocess_image(image)
text_data = processor.preprocess_text(text)

image_features, image_embedding = model.encode_image(image_data, return_features=True)
text_features, text_embedding = model.encode_text(text_data, return_features=True)
joint_embedding = model.encode_multimodal(image=image_data, text=text_data)
```

To get features:

```python
image_features, image_embedding = model.encode_image(image_data, return_features=True)
text_features, text_embedding = model.encode_text(text_data, return_features=True)
```

These features can later be used to produce joint multimodal encodings faster, as the first layers of the transformer can be skipped:

```python
joint_embedding = model.encode_multimodal(
    image_features=image_features,
    text_features=text_features,
    attention_mask=text_data['attention_mask']
)
```

There are two options to calculate semantic compatibility between an image and a text: [Cosine Similarity](#cosine-similarity) and [Matching Score](#matching-score).

### Cosine Similarity

```python
import torch.nn.functional as F

similarity = F.cosine_similarity(image_embedding, text_embedding)
```

The `similarity` will belong to the `[-1, 1]` range, `1` meaning an absolute match.

__Pros__:

- Computationally cheap.
- Only unimodal embeddings are required, and unimodal encoding is faster than joint encoding.
- Suitable for retrieval in large collections, as in the sketch below.

__Cons__:

- Takes into account only coarse-grained features.

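The sketch below illustrates this retrieval pattern: it ranks a handful of hypothetical captions against a single image by cosine similarity, reusing the `model`, `processor`, and `image_data` objects from the snippets above. Batch preprocessing may be possible, but the per-caption loop keeps the example conservative, and the embedding shapes are assumed to match the snippets above.

```python
import torch
import torch.nn.functional as F

# Hypothetical candidate captions for the query image.
captions = [
    'a small red panda in a zoo',
    'a red double-decker bus',
    'a bowl of ramen with an egg',
]

# Same call as in the Usage section; the features are kept for later re-ranking.
image_features, image_embedding = model.encode_image(image_data, return_features=True)

text_embeddings = []
for caption in captions:
    caption_data = processor.preprocess_text(caption)
    _, caption_embedding = model.encode_text(caption_data, return_features=True)
    text_embeddings.append(caption_embedding)
text_embeddings = torch.cat(text_embeddings, dim=0)  # assumed shape: (num_captions, dim)

# Cosine similarity of the image against every caption, highest first.
similarities = F.cosine_similarity(image_embedding, text_embeddings)
ranking = similarities.argsort(descending=True)
for index in ranking.tolist():
    print(f'{similarities[index].item():.3f}  {captions[index]}')
```

In a real collection the text embeddings would typically be precomputed and stored in a vector index, so only the query needs to be encoded at request time.
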
### Matching Score

Unlike cosine similarity, unimodal embeddings are not enough here.
A joint embedding is needed, and the resulting `score` will belong to the `[0, 1]` range, `1` meaning an absolute match.

```python
score = model.get_matching_scores(joint_embedding)
```

__Pros__:

- Joint embedding captures fine-grained features.
- Suitable for re-ranking, i.e. sorting retrieval results; see the sketch below.

__Cons__:

- Resource-intensive.
- Not suitable for retrieval in large collections.
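
A common pattern combines the two scores: filter a large collection with cheap cosine similarity, then re-rank only the few best candidates with the matching score. Below is a minimal sketch, continuing the hypothetical `captions` and `ranking` from the cosine-similarity example and reusing the cached `image_features`:

```python
# Keep only the best candidates from the cheap cosine-similarity pass.
top_k = 2
candidate_indices = ranking[:top_k].tolist()

reranked = []
for index in candidate_indices:
    caption_data = processor.preprocess_text(captions[index])
    caption_features, _ = model.encode_text(caption_data, return_features=True)
    # Reuse the cached image features, so the first unimodal layers are skipped,
    # as described in the Usage section above.
    joint_embedding = model.encode_multimodal(
        image_features=image_features,
        text_features=caption_features,
        attention_mask=caption_data['attention_mask'],
    )
    score = model.get_matching_scores(joint_embedding)
    reranked.append((float(score), captions[index]))

# Highest matching score first.
for score, caption in sorted(reranked, reverse=True):
    print(f'{score:.3f}  {caption}')
```
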
config.json ADDED
@@ -0,0 +1,45 @@
{
    "text_encoder": {
        "tokenizer_class": "bert",
        "model_type": "bert",
        "dim": 512,
        "context_dim": 1024,
        "vocab_size": 30522,
        "padding_idx": 0,
        "num_layers": 12,
        "num_heads": 8,
        "embedding_dim": 768,
        "multimodal_layers_ids": [
            6,
            7,
            8,
            9,
            10,
            11
        ],
        "head_one_neuron": false,
        "pooling": "cls",
        "max_position_embeddings": 64,
        "dropout_prob": 0.1
    },
    "image_encoder": {
        "normalization_means": [
            0.48145466,
            0.4578275,
            0.40821073
        ],
        "normalization_deviations": [
            0.26862954,
            0.26130258,
            0.27577711
        ],
        "dim": 1024,
        "patch_size": 14,
        "image_size": 224,
        "num_layers": 24,
        "num_heads": 16,
        "embedding_dim": 768,
        "pooling": "cls",
        "num_reg_tokens": 4
    }
}
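
The configuration above can also be read back programmatically, e.g. to confirm that both encoders project into the same 768-dimensional embedding space while their internal widths differ. A small sketch using only the standard library, assuming the repository has been downloaded locally:

```python
import json

with open('config.json') as f:
    config = json.load(f)

text_cfg, image_cfg = config['text_encoder'], config['image_encoder']
print('shared embedding dim:', text_cfg['embedding_dim'], image_cfg['embedding_dim'])  # 768 768
print('internal widths:', text_cfg['dim'], image_cfg['dim'])                           # 512 1024
```
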
image_encoder.mlpackage/Data/com.apple.CoreML/model.mlmodel ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f9e7674bf0b3cb4888c07f856afdf4a2c726d9897abea51d2dc09bbf891f2b87
size 219785
image_encoder.mlpackage/Data/com.apple.CoreML/weights/weight.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1c7e5554fd2b3c1a28c338d3dad05ec621660d63c2c10e2d12851276c76c2fe2
size 1216100992
image_encoder.mlpackage/Manifest.json ADDED
@@ -0,0 +1,18 @@
{
    "fileFormatVersion": "1.0.0",
    "itemInfoEntries": {
        "5D0A1DEC-240F-4858-9B65-E24C5ECD8FA0": {
            "author": "com.apple.CoreML",
            "description": "CoreML Model Specification",
            "name": "model.mlmodel",
            "path": "com.apple.CoreML/model.mlmodel"
        },
        "B69ED582-7925-4F67-8551-49D55D91B556": {
            "author": "com.apple.CoreML",
            "description": "CoreML Model Weights",
            "name": "weights",
            "path": "com.apple.CoreML/weights"
        }
    },
    "rootModelIdentifier": "5D0A1DEC-240F-4858-9B65-E24C5ECD8FA0"
}
image_encoder.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d811e8175034ff69bb81af939eb9de9e43a190e51418d64c3a80d1014f9bea04
size 306693409
image_encoder.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7eb875122465970de652c96b1c5d10f3d8a92ead5e986d8fef139c6be4a526a1
size 608182826
image_encoder.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b351cecc9bd6a336ca0691f89e3d4356dd8568d7425bdd6a23deff1f680dd987
size 608077816
text_encoder.mlpackage/Data/com.apple.CoreML/model.mlmodel ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ad16992287a7c84131e8c2a8c19c44030f10379726581dab51a3dc9905306341
size 56199
text_encoder.mlpackage/Data/com.apple.CoreML/weights/weight.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4b666e07fb083deef657480a19e6b8c8b323a01a001e3d2913c9566cc5a19eab
size 139883968
text_encoder.mlpackage/Manifest.json ADDED
@@ -0,0 +1,18 @@
{
    "fileFormatVersion": "1.0.0",
    "itemInfoEntries": {
        "88941D4F-6EE3-4492-92F1-4714D520C2B2": {
            "author": "com.apple.CoreML",
            "description": "CoreML Model Weights",
            "name": "weights",
            "path": "com.apple.CoreML/weights"
        },
        "C5FE30C0-9C0C-4F26-96FC-29FA64D92CE7": {
            "author": "com.apple.CoreML",
            "description": "CoreML Model Specification",
            "name": "model.mlmodel",
            "path": "com.apple.CoreML/model.mlmodel"
        }
    },
    "rootModelIdentifier": "C5FE30C0-9C0C-4F26-96FC-29FA64D92CE7"
}
text_encoder.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:57df732e74740b70ecf1788478d7936bdffae3cec4eb997bfb6c7a59aa06f6af
size 35272692
text_encoder.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3b7ab3f8e506099dc62cadb85705bf8b91bfbc000474858448173d27ff7dab33
size 121525138
text_encoder.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:893185006938771e00437e95a6a40504e327ffdca1b7bb4c6bc78024e6c09881
size 121461852
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff