---
inference: false
language:
- bg
license: mit
datasets:
- oscar
- chitanka
- wikipedia
tags:
- torch
---

# ROBERTA BASE (cased) trained on a private Bulgarian sentiment-analysis dataset

This is a multilingual RoBERTa model.

This model is cased: it makes a difference between bulgarian and Bulgarian.

### How to use

Here is how to use this model in PyTorch:

```python
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>>
>>> model_id = "rmihaylov/roberta-base-sentiment-bg"
>>> model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>>
>>> inputs = tokenizer.batch_encode_plus(['Това е умно.', 'Това е тъпо.'], return_tensors='pt')
>>> outputs = model(**inputs)
>>> torch.softmax(outputs, dim=1).tolist()

[[0.0004746630438603461, 0.9995253086090088],
 [0.9986956715583801, 0.0013043134240433574]]
```
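The card does not document the label order. Judging from the example above, where the positive sentence 'Това е умно.' ("That is smart.") scores highest at index 1 and the negative 'Това е тъпо.' ("That is stupid.") at index 0, index 1 appears to correspond to positive sentiment. A minimal post-processing sketch under that assumption (the `LABELS` mapping and `to_labels` helper are illustrative, not part of the model):

```python
import torch

# Assumed label order, inferred from the example outputs above.
LABELS = ['NEGATIVE', 'POSITIVE']

def to_labels(probs: torch.Tensor) -> list:
    """Map each row of class probabilities to a (label, confidence) pair."""
    return [(LABELS[int(row.argmax())], float(row.max())) for row in probs]

# Probabilities as returned by torch.softmax(outputs, dim=1) above.
probs = torch.tensor([[0.0004746630438603461, 0.9995253086090088],
                      [0.9986956715583801, 0.0013043134240433574]])
print(to_labels(probs))
```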