# HackathonSomosNLP 22: Cool Projects
Model based on the RoBERTa architecture, fine-tuned from BERTIN for readability assessment of Spanish texts.

This version of the model was trained on a mix of datasets, using sentence-level granularity where possible. The model performs binary classification between two readability classes.
It achieves an F1 macro average score of 0.8923, measured on the validation set.
This model is one of several variants:

- readability-es-sentences (this model). Two classes, sentence-based dataset.
- readability-es-paragraphs. Two classes, paragraph-based dataset.
- readability-es-3class-sentences. Three classes, sentence-based dataset.
- readability-es-3class-paragraphs. Three classes, paragraph-based dataset.

The model was trained on the readability-es-hackathon-pln-public dataset, which is composed of several source corpora. Please refer to this training run for full details on hyperparameters and training regime.
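As a rough illustration of the macro-averaged F1 metric reported above, here is a minimal pure-Python sketch. The gold labels, predictions, and the class names "simple" and "complex" are invented for illustration; they are not taken from the model card.

```python
def f1_macro(y_true, y_pred, labels):
    # Macro-averaged F1: compute F1 per class, then take the unweighted mean,
    # so each class counts equally regardless of its frequency.
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical gold labels and predictions for two readability classes.
gold = ["simple", "complex", "simple", "complex", "simple"]
pred = ["simple", "complex", "complex", "complex", "simple"]
print(round(f1_macro(gold, pred, ["simple", "complex"]), 4))
```

A score of 1.0 would mean perfect agreement on both classes; the 0.8923 reported above is the same quantity computed on the validation split.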