uvegesistvan committed c278f53 (parent: 507e934): Update README.md

README.md CHANGED
@@ -25,7 +25,7 @@ widget:
 
 ## Model description
 
-Cased fine-tuned XLM-RoBERTa-large model for Hungarian, trained
+Cased fine-tuned XLM-RoBERTa-large model for Hungarian, trained on a dataset (~13k sentences) provided by the National Tax and Customs Administration - Hungary (NAV): Public Accessibility Programme.
 
 ## Intended uses & limitations
 
@@ -56,10 +56,14 @@ tokenizer = AutoTokenizer.from_pretrained("uvegesistvan/Hun_RoBERTa_large_Plain"
 model = AutoModelForSequenceClassification.from_pretrained("uvegesistvan/Hun_RoBERTa_large_Plain")
 ```
 
-# Citation
+# Citation
+
+Bibtex:
+```bibtex
 @PhDThesis{ Uveges:2024,
 author = {{\"U}veges, Istv{\'a}n},
 title = {K{\"o}z{\'e}rthet{\"o} és automatiz{\'a}ci{\'o} - k{\'i}s{\'e}rletek a jog, term{\'e}szetesnyelv-feldolgoz{\'a}s {\'e}s informatika hat{\'a}r{\'a}n.},
 year = {2024},
 school = {Szegedi Tudom{\'a}nyegyetem}
-}
+}
+```
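The README snippet in the diff only shows loading the tokenizer and classifier. A minimal inference sketch for a model loaded this way might look as follows (the example sentence and the label names are assumptions; the actual label mapping comes from the model's own config, which the diff does not show):

```python
# Minimal sketch: sequence classification with the model loaded in the
# README snippet. Assumes `torch` and `transformers` are installed and
# the model can be fetched from the Hugging Face Hub.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "uvegesistvan/Hun_RoBERTa_large_Plain"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Example Hungarian input (illustrative only).
text = "A kérelem benyújtásának határideje harminc nap."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Pick the highest-scoring class and map it to its configured label name.
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```

The `argmax` over the logits selects the single most probable class; for calibrated scores one would apply `torch.softmax(logits, dim=-1)` first.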