Osiria "Water" Series 💧
Collection
This collection contains adaptable, flexible models that balance robustness and usability.
This is a DistilBERT [1] model for the Italian language. It was obtained by taking the multilingual DistilBERT (distilbert-base-multilingual-cased) as a starting point and focusing it on Italian by modifying the embedding layer (as in [2], computing document-level frequencies over the Wikipedia dataset).

The resulting model has 67M parameters, a vocabulary of 30,785 tokens, and a size of ~270 MB.
```python
from transformers import DistilBertTokenizerFast, DistilBertModel

tokenizer = DistilBertTokenizerFast.from_pretrained("osiria/distilbert-base-italian-cased")
model = DistilBertModel.from_pretrained("osiria/distilbert-base-italian-cased")
```
[1] https://arxiv.org/abs/1910.01108
[2] https://arxiv.org/abs/2010.05609
The model is released under the Apache-2.0 license.