RoBERTa Greek small model (Uncased)
Prerequisites
```
transformers==4.19.2
```
Model architecture
This model has approximately half as many parameters as the RoBERTa base model.
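As a sanity check, the parameter count can be inspected directly; a minimal sketch (the exact hidden size and layer count are not stated on this card, so they are read from the loaded config):

```python
from transformers import AutoModelForMaskedLM

# Load this checkpoint and count its trainable parameters.
model = AutoModelForMaskedLM.from_pretrained("ClassCat/roberta-small-greek")
n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params:,}")

# Hidden size, number of layers, etc. come from the saved config.
print(model.config)
```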
Tokenizer
Uses a byte-pair encoding (BPE) tokenizer with a vocabulary size of 50,000.
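A short sketch to load the tokenizer and confirm the vocabulary size (the sample sentence is illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ClassCat/roberta-small-greek")
print(tokenizer.vocab_size)  # expected: 50000

# Tokenize a Greek sentence ("Good morning, how are you?").
print(tokenizer.tokenize("Καλημέρα, τι κάνεις;"))
```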
Training Data
- Subset of CC-100/el (monolingual datasets from web-crawl data)
- Subset of OSCAR
- wiki40b/el (Greek Wikipedia)
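These corpora are all available through the Hugging Face `datasets` library; a hedged sketch of how they might be loaded (the exact subsets used for training are not specified on this card, so the configs below are illustrative):

```python
from datasets import load_dataset

# CC-100 Greek split; streaming avoids downloading the full corpus.
cc100_el = load_dataset("cc100", lang="el", split="train", streaming=True)

# OSCAR deduplicated Greek split.
oscar_el = load_dataset("oscar", "unshuffled_deduplicated_el",
                        split="train", streaming=True)

# wiki40b Greek Wikipedia split.
wiki_el = load_dataset("wiki40b", "el", split="train")

# Peek at one CC-100 record.
print(next(iter(cc100_el))["text"][:200])
```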
Usage

```python
from transformers import pipeline

# Build a fill-mask pipeline backed by this model.
unmasker = pipeline('fill-mask', model='ClassCat/roberta-small-greek')

# "It has been a long time since ... <mask>."
unmasker("Έχει πολύ καιρό που δεν <mask>.")
```