Model description
Cased fine-tuned BERT model for Hungarian, trained on a dataset provided by the National Tax and Customs Administration of Hungary (NAV) as part of its Public Accessibility Programme.
Intended uses & limitations
The model can be used like any other (cased) BERT model. It has been tested on recognizing "accessible" and "original" sentences, where:
- "accessible" ("Label_0"): a sentence that can be considered comprehensible according to Plain Language guidelines
- "original" ("Label_1"): a sentence that needs to be rephrased in order to follow Plain Language guidelines

For a worked classification example, see the Usage section below.
Training
Fine-tuned version of the original huBERT model (SZTAKI-HLT/hubert-base-cc), trained on information materials provided by NAV linguistic experts.
Eval results
| Class | Precision | Recall | F-Score |
|---|---|---|---|
| Accessible / Label_0 | 0.71 | 0.79 | 0.75 |
| Original / Label_1 | 0.76 | 0.67 | 0.71 |
| accuracy | | | 0.73 |
| macro avg | 0.74 | 0.73 | 0.73 |
| weighted avg | 0.74 | 0.73 | 0.73 |
Usage
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("uvegesistvan/huBERTPlain")
model = AutoModelForSequenceClassification.from_pretrained("uvegesistvan/huBERTPlain")
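A minimal inference sketch, assuming the standard transformers sequence-classification interface; the example sentence is a placeholder, and the label mapping follows the description in "Intended uses & limitations" above:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("uvegesistvan/huBERTPlain")
model = AutoModelForSequenceClassification.from_pretrained("uvegesistvan/huBERTPlain")

# Placeholder Hungarian input; replace with your own sentence.
sentence = "A kérelmet az adóhatósághoz kell benyújtani."

# Tokenize and run a forward pass without gradient tracking.
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index to the labels described above.
labels = {0: "accessible (Label_0)", 1: "original (Label_1)"}
print(labels[logits.argmax(dim=-1).item()])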
BibTeX entry and citation info
If you use the model, please cite the following dissertation (to be submitted for workshop discussion):
Bibtex:
@PhDThesis{Uveges:2024,
  author = {{\"U}veges, Istv{\'a}n},
  title  = {K{\"o}z{\'e}rthet{\H o} {\'e}s automatiz{\'a}ci{\'o} - k{\'i}s{\'e}rletek a jog, term{\'e}szetesnyelv-feldolgoz{\'a}s {\'e}s informatika hat{\'a}r{\'a}n.},
  year   = {2024},
  school = {Szegedi Tudom{\'a}nyegyetem}
}