---
pipeline_tag: text-classification
license: apache-2.0
tags:
- toxicity
- toxic detection
---
This ONNX model is a dynamically quantized version of the original model: [https://huggingface.co/unitary/multilingual-toxic-xlm-roberta](https://huggingface.co/unitary/multilingual-toxic-xlm-roberta)
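
For reference, a dynamically quantized export like this one can be produced with `ORTQuantizer` from `optimum`. The sketch below is an assumption about the recipe, not the exact commands used for this checkpoint; in particular, the `avx512_vnni` CPU target is an illustrative choice.

```python
from optimum.onnxruntime import ORTModelForSequenceClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

# Export the original PyTorch checkpoint to ONNX.
model = ORTModelForSequenceClassification.from_pretrained(
    "unitary/multilingual-toxic-xlm-roberta", export=True
)

# Dynamic quantization: weights are stored as INT8 and activations are
# quantized on the fly at inference time (is_static=False).
quantizer = ORTQuantizer.from_pretrained(model)
dqconfig = AutoQuantizationConfig.avx512_vnni(  # assumed hardware target
    is_static=False, per_channel=False
)

quantizer.quantize(
    save_dir="multilingual-toxic-xlm-roberta-dynamic-quantized",
    quantization_config=dqconfig,
)
```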

### Usage
Load the quantized model with `pipeline` from the `optimum` library:
```python
from optimum.pipelines import pipeline as pipeline_onnx

# accelerator="ort" runs the ONNX model through ONNX Runtime.
# top_k=None returns a score for every label, and sigmoid is applied
# per label because the original model is multi-label.
quantized_pipeline = pipeline_onnx(
    "text-classification",
    model="hoan/multilingual-toxic-xlm-roberta-dynamic-quantized",
    accelerator="ort",
    top_k=None,
    function_to_apply="sigmoid",
)

text = """Artificial intelligence (AI), frequently depicted in mainstream media as a harbinger of both groundbreaking innovation and understandable concern, has seamlessly permeated and embedded itself within a multitude of diverse sectors that constitute the intricate tapestry of our contemporary society. This relentless integration spans a wide spectrum, extending from the realms of healthcare, where AI is catalyzing transformative breakthroughs in disease diagnosis, treatment planning, and medical research, to the intricate domain of finance, where algorithms are reshaping the landscape of investment strategies, risk assessment, and market predictions."""
result = quantized_pipeline(text)
```

The result is:
```
[[{'label': 'LABEL_0', 'score': 0.0004449746338650584},
  {'label': 'LABEL_7', 'score': 0.00035187375033274293},
  {'label': 'LABEL_8', 'score': 0.00024698078050278127},
  {'label': 'LABEL_4', 'score': 0.00019323475135024637},
  {'label': 'LABEL_14', 'score': 0.00015645574603695422},
  {'label': 'LABEL_10', 'score': 0.0001484356907894835},
  {'label': 'LABEL_2', 'score': 0.0001337601279374212},
  {'label': 'LABEL_13', 'score': 0.00011757002357626334},
  {'label': 'LABEL_3', 'score': 9.490883530816063e-05},
  {'label': 'LABEL_12', 'score': 9.136357402894646e-05},
  {'label': 'LABEL_15', 'score': 5.817503551952541e-05},
  {'label': 'LABEL_9', 'score': 5.3772881074110046e-05},
  {'label': 'LABEL_11', 'score': 3.9219678001245484e-05},
  {'label': 'LABEL_5', 'score': 3.468171780696139e-05},
  {'label': 'LABEL_6', 'score': 2.4815808501443826e-05},
  {'label': 'LABEL_1', 'score': 2.0749821487697773e-05}]]
```

The mapping for the labels is:
```
{'LABEL_0': 'toxicity',
 'LABEL_1': 'severe_toxicity',
 'LABEL_2': 'obscene',
 'LABEL_3': 'identity_attack',
 'LABEL_4': 'insult',
 'LABEL_5': 'threat',
 'LABEL_6': 'sexual_explicit',
 'LABEL_7': 'male',
 'LABEL_8': 'female',
 'LABEL_9': 'homosexual_gay_or_lesbian',
 'LABEL_10': 'christian',
 'LABEL_11': 'jewish',
 'LABEL_12': 'muslim',
 'LABEL_13': 'black',
 'LABEL_14': 'white',
 'LABEL_15': 'psychiatric_or_mental_illness'}
```
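
To turn the pipeline output into human-readable scores, you can apply this mapping yourself. A minimal sketch, reusing `result` from the usage example above; the 0.5 threshold is an illustrative choice, not part of the model:

```python
# The same mapping as above, built programmatically.
id2label = {f"LABEL_{i}": name for i, name in enumerate([
    "toxicity", "severe_toxicity", "obscene", "identity_attack",
    "insult", "threat", "sexual_explicit", "male", "female",
    "homosexual_gay_or_lesbian", "christian", "jewish", "muslim",
    "black", "white", "psychiatric_or_mental_illness",
])}

# Rename labels and flag anything above an (illustrative) 0.5 threshold.
scores = {id2label[p["label"]]: p["score"] for p in result[0]}
flagged = [name for name, score in scores.items() if score > 0.5]
```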