---
language: 
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
license: apache-2.0
datasets:
- multi_nli
- xnli
- dbpedia_14
- SetFit/bbc-news
- squad_v2
- race
- knowledgator/events_classification_biotech
- facebook/anli
- SetFit/qnli
metrics:
- accuracy
- f1
pipeline_tag: zero-shot-classification
tags:
- classification
- information-extraction
- zero-shot
---

# comprehend-it-multilang-base

This is an encoder-decoder model based on [mT5-base](https://huggingface.co/google/mt5-base), trained on multilingual natural language inference datasets as well as on multiple text classification datasets.

The model demonstrates better contextual understanding of the text and the verbalized label because the two inputs are encoded by different parts of the model: the text by the encoder and the label by the decoder.

The zero-shot classifier supports nearly 100 languages and works in both directions, meaning the text and the labels can be in different languages.

#### Install the necessary libraries before using it
Because of the different model architecture, we can't use transformers' "zero-shot-classification" pipeline. Instead, we developed a dedicated library called [LiqFit](https://github.com/Knowledgator/LiqFit/tree/main).
If you haven't installed the sentencepiece library, you need to install it as well in order to use T5 tokenizers.

```bash
pip install liqfit sentencepiece
```

#### With the LiqFit pipeline

The model can be loaded and used through LiqFit's `ZeroShotClassificationPipeline` like so:

```python
from liqfit.pipeline import ZeroShotClassificationPipeline
from liqfit.models import T5ForZeroShotClassification
from transformers import T5Tokenizer

# Load the fine-tuned T5 encoder-decoder model and its tokenizer
model = T5ForZeroShotClassification.from_pretrained('knowledgator/comprehend_it-multilingual-t5-base')
tokenizer = T5Tokenizer.from_pretrained('knowledgator/comprehend_it-multilingual-t5-base')

# encoder_decoder=True routes the text through the encoder and the label through
# the decoder; hypothesis_template='{}' passes each candidate label through unchanged
classifier = ZeroShotClassificationPipeline(model=model, tokenizer=tokenizer,
                                            hypothesis_template='{}', encoder_decoder=True)
```
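By default the hypothesis template is `'{}'`, so labels reach the decoder verbatim. If you prefer NLI-style prompts, `hypothesis_template` can wrap each label in a full hypothesis; a minimal sketch (the template wording is illustrative):

```python
# Each candidate label is substituted for the '{}' placeholder before scoring,
# e.g. 'travel' becomes 'This example is about travel.'
nli_classifier = ZeroShotClassificationPipeline(model=model, tokenizer=tokenizer,
                                                hypothesis_template='This example is about {}.',
                                                encoder_decoder=True)
```

The examples below use the default `classifier` defined above.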

You can then use this pipeline to classify sequences into any of the class names you specify.

```python
sequence_to_classify = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(sequence_to_classify, candidate_labels, multi_label=False)
# {'sequence': 'one day I will see the world',
#  'labels': ['travel', 'cooking', 'dancing'],
#  'scores': [0.7350383996963501, 0.1484801471233368, 0.1164814680814743]}
```
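When several labels may apply at once, the same call can be made with `multi_label=True`, which scores each label independently instead of normalizing across labels; a minimal sketch (the example text is illustrative, assuming `multi_label` behaves as in the transformers zero-shot pipeline):

```python
sequence_to_classify = "I love cooking dishes from the countries I travel to"
candidate_labels = ['travel', 'cooking', 'dancing']
# With multi_label=True the scores no longer need to sum to 1,
# so both 'travel' and 'cooking' can score high at the same time.
classifier(sequence_to_classify, candidate_labels, multi_label=True)
```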

Besides English, you can use the model with many other languages, such as Ukrainian:

```python
sequence_to_classify = "Одного дня я побачу цей світ."  # "One day I will see this world."
candidate_labels = ['подорож', 'кулінарія', 'танці']  # 'travel', 'cooking', 'dancing'
classifier(sequence_to_classify, candidate_labels, multi_label=False)
# {'sequence': 'Одного дня я побачу цей світ.',
#  'labels': ['подорож', 'кулінарія', 'танці'],
#  'scores': [0.6393420696258545, 0.2657214105129242, 0.09493650496006012]}
```

The model works even if the labels and the text are in different languages:
```python
sequence_to_classify = "Одного дня я побачу цей світ"  # "One day I will see this world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(sequence_to_classify, candidate_labels, multi_label=False)
# {'sequence': 'Одного дня я побачу цей світ',
#  'labels': ['travel', 'cooking', 'dancing'],
#  'scores': [0.7676175236701965, 0.15484870970249176, 0.07753374427556992]}
```
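The reverse direction works as well, for example English text with Ukrainian labels; a minimal sketch (the output follows the same format as above):

```python
sequence_to_classify = "one day I will see the world"
candidate_labels = ['подорож', 'кулінарія', 'танці']  # 'travel', 'cooking', 'dancing'
classifier(sequence_to_classify, candidate_labels, multi_label=False)
```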

### Benchmarking
Below are F1 scores on several text classification datasets. None of the tested models were fine-tuned on these datasets; all were evaluated in a zero-shot setting.
| Model                       | IMDB | AG_NEWS | Emotions |
|-----------------------------|------|---------|----------|
| [Bart-large-mnli (407M)](https://huggingface.co/facebook/bart-large-mnli) | 0.89 | 0.6887 | 0.3765 |
| [Deberta-base-v3 (184M)](https://huggingface.co/cross-encoder/nli-deberta-v3-base) | 0.85 | 0.6455 | 0.5095 |
| [Comprehendo (184M)](https://huggingface.co/knowledgator/comprehend_it-base) | 0.90 | 0.7982 | 0.5660 |
| [Comprehendo-multi-lang (390M)](https://huggingface.co/knowledgator/comprehend-it-multilang-base) | 0.88 | 0.8372 | - |
| SetFit [BAAI/bge-small-en-v1.5 (33.4M)](https://huggingface.co/BAAI/bge-small-en-v1.5) | 0.86 | 0.5636 | 0.5754 |
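For reference, a minimal sketch of how such a zero-shot F1 evaluation can be run, reusing the `classifier` defined above. The label verbalizations, the macro averaging, and the use of the `datasets` and `scikit-learn` libraries are assumptions; the exact setup behind the table is not published here.

```python
from datasets import load_dataset
from sklearn.metrics import f1_score

dataset = load_dataset('ag_news', split='test')
# AG_NEWS label ids 0..3 map to these class names (assumed verbalizations)
labels = ['World', 'Sports', 'Business', 'Sci/Tech']

predictions = []
for example in dataset:
    result = classifier(example['text'], labels, multi_label=False)
    # Result labels are sorted by score, so the first one is the prediction
    predictions.append(labels.index(result['labels'][0]))

print(f1_score(dataset['label'], predictions, average='macro'))
```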

### Further reading
Check out our blog post, ["The new milestone in zero-shot capabilities (it’s not Generative AI)"](https://medium.com/p/9b5a081fbf27), where we highlight possible use cases of the model and why next-token prediction is not the only way to achieve impressive zero-shot capabilities.
While most of the AI industry is focused on generative AI and decoder-based models, we are committed to developing encoder-based models.
We aim to achieve the same level of generalization for such models as their decoder counterparts. Encoders have several valuable properties, such as bidirectional attention, and they are the best choice for many information extraction tasks in terms of efficiency and controllability.

### Feedback
We value your input! Share your feedback and suggestions to help us improve our models.
Fill out the feedback [form](https://forms.gle/5CPFFuLzNWznjcpL7).

### Join Our Discord
Connect with our community on Discord for news, support, and discussion about our models.
Join our [Discord](https://discord.gg/dkyeAgs9DG).