Update README.md

README.md

---
license: openrail++
datasets:
- ukr-detect/ukr-toxicity-dataset
language:
- uk
---

## Binary toxicity classifier for Ukrainian

This is an ["xlm-roberta-base"](https://huggingface.co/xlm-roberta-base) model fine-tuned on the [Ukrainian toxicity classification dataset](https://huggingface.co/datasets/ukr-detect/ukr-toxicity-dataset).

The evaluation metrics for binary toxicity classification on the test set are:

**Precision**: 0.9?

**Recall**: 0.9?

**F1**: 0.9?

## How to use:

```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="ukr-detect/ukr-toxicity-classifier")
```
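The pipeline returns a list of dicts with a `label` and a `score` for each input text. A minimal sketch of turning those predictions into toxic/non-toxic decisions is below; the `LABEL_0`/`LABEL_1` names and their mapping to non-toxic/toxic are assumptions based on the default transformers convention, so check the model's `config.json` (`id2label`) for the actual mapping.

```python
# Illustrative pipeline outputs for two texts (made-up values,
# not actual model predictions).
outputs = [
    {"label": "LABEL_1", "score": 0.97},  # assumed: LABEL_1 = toxic
    {"label": "LABEL_0", "score": 0.88},  # assumed: LABEL_0 = non-toxic
]

# Assumed mapping from raw labels to human-readable names;
# verify against the model's id2label config.
ID2NAME = {"LABEL_0": "non-toxic", "LABEL_1": "toxic"}

def is_toxic(pred: dict, threshold: float = 0.5) -> bool:
    """Return True when the predicted label is toxic with enough confidence."""
    return pred["label"] == "LABEL_1" and pred["score"] >= threshold

decisions = [(ID2NAME[p["label"]], is_toxic(p)) for p in outputs]
print(decisions)  # [('toxic', True), ('non-toxic', False)]
```

Raising `threshold` trades recall for precision, which can be useful when false toxicity flags are costly.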