# ruclip-vit-base-patch32-384

**RuCLIP** (**Ru**ssian **C**ontrastive **L**anguage–**I**mage **P**retraining) is a multimodal model for computing similarity between images and texts, and for ranking captions against images (and images against captions). RuCLIP builds on a large body of work on zero-shot transfer, computer vision, natural language processing, and multimodal learning.

The model was trained by the [Sber AI](https://github.com/sberbank-ai) and [SberDevices](https://sberdevices.ru/) teams.
* Task: `text ranking`; `image ranking`; `zero-shot image classification`
* Type: `encoder`
* Num Parameters: `150M`
* Training Data Volume: `240 million text-image pairs`
* Language: `Russian`
* Context Length: `77`
* Transformer Layers: `12`
* Transformer Width: `512`
* Transformer Heads: `8`
* Image Size: `384`
* Vision Layers: `12`
* Vision Width: `768`
* Vision Patch Size: `32`

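The vision settings above determine how many tokens the image encoder actually processes: a 384×384 input cut into 32×32 patches yields (384/32)² = 144 patch tokens, plus one class token. A small sketch of that arithmetic (the function name is illustrative, not part of the `ruclip` API):

```python
def vit_sequence_length(image_size: int, patch_size: int) -> int:
    """Tokens a ViT encoder sees: patches per side squared, plus one [CLS] token."""
    patches_per_side = image_size // patch_size
    return patches_per_side ** 2 + 1

print(vit_sequence_length(384, 32))  # 145: 12*12 patches + 1 class token
```

This is why the 384-pixel variant is heavier than a 224-pixel one at the same patch size: sequence length, and thus attention cost, grows quadratically with image side.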
## Usage [Github](https://github.com/sberbank-ai/ru-clip)

```
pip install ruclip
```

```python
import ruclip

clip, processor = ruclip.load("ruclip-vit-base-patch32-384", device="cuda")
```

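Once embeddings are in hand, the zero-shot scoring step common to CLIP-style models is simple: L2-normalize the image and text embeddings, scale their dot products, and apply a softmax. A library-free sketch of that step — the toy 4-d vectors below are illustrative stand-ins, not real RuCLIP outputs:

```python
import math

def normalize(v):
    """L2-normalize a vector so dot products become cosine similarities."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def zero_shot_scores(image_emb, text_embs, logit_scale=100.0):
    """Softmax over scaled cosine similarities, as in CLIP-style scoring."""
    img = normalize(image_emb)
    sims = [logit_scale * sum(a * b for a, b in zip(img, normalize(t)))
            for t in text_embs]
    m = max(sims)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in sims]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical embeddings for one image and two candidate captions.
image = [0.9, 0.1, 0.0, 0.2]
texts = [[1.0, 0.0, 0.0, 0.1],   # caption the image matches
         [0.0, 1.0, 0.2, 0.0]]   # unrelated caption
probs = zero_shot_scores(image, texts)
```

In practice the normalization, scaling, and softmax are handled by the library; the sketch only shows why the model card's "text ranking" and "image ranking" tasks reduce to the same cosine-similarity computation.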
## Performance

We have evaluated the performance on the following datasets:

| Dataset       | Metric Name    | Metric Result |
|:--------------|:---------------|:--------------|
| Food101       | acc            | 0.642         |
| CIFAR10       | acc            | 0.862         |
| CIFAR100      | acc            | 0.529         |
| Birdsnap      | acc            | 0.161         |
| SUN397        | acc            | 0.510         |
| Stanford Cars | acc            | 0.572         |
| DTD           | acc            | 0.390         |
| MNIST         | acc            | 0.404         |
| STL10         | acc            | 0.946         |
| PCam          | acc            | 0.506         |
| CLEVR         | acc            | 0.188         |
| Rendered SST2 | acc            | 0.508         |
| ImageNet      | acc            | 0.451         |
| FGVC Aircraft | mean-per-class | 0.053         |
| Oxford Pets   | mean-per-class | 0.587         |
| Caltech101    | mean-per-class | 0.834         |
| Flowers102    | mean-per-class | 0.449         |
| HatefulMemes  | roc-auc        | 0.537         |
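The table mixes two accuracy flavors: plain `acc` counts correct predictions over all samples, while `mean-per-class` averages the accuracy of each class, so rare classes weigh as much as common ones. A minimal sketch of the difference (toy labels, not evaluation data from this model):

```python
from collections import defaultdict

def accuracy(y_true, y_pred):
    """Plain accuracy: fraction of all samples predicted correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def mean_per_class_accuracy(y_true, y_pred):
    """Average of per-class accuracies; robust to class imbalance."""
    correct, total = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += (t == p)
    return sum(correct[c] / total[c] for c in total) / len(total)

# Imbalanced toy labels: class 0 dominates, and the predictor always says 0.
y_true = [0, 0, 0, 0, 1]
y_pred = [0, 0, 0, 0, 0]
print(accuracy(y_true, y_pred))                 # 0.8
print(mean_per_class_accuracy(y_true, y_pred))  # 0.5
```

This is why imbalanced fine-grained benchmarks such as FGVC Aircraft and Oxford Pets are conventionally reported with mean-per-class accuracy.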

# Authors

+ Alex Shonenkov: [Github](https://github.com/shonenkov), [Kaggle GM](https://www.kaggle.com/shonenkov)
+ Daniil Chesakov: [Github](https://github.com/Danyache)
+ Denis Dimitrov: [Github](https://github.com/denndimitrov)
+ Igor Pavlov: [Github](https://github.com/boomb0om)