Model: FLARE 🔥
Lang: IT

Introduction

This model is a lightweight and uncased version of MiniLM [1] for the Italian language. Its 17M parameters and 67MB size make it 85% lighter than a typical monolingual BERT model. It is ideal when memory consumption and execution speed are critical while maintaining high-quality results.

AILC CLiC-IT 2023 Proceedings

Flare-IT is part of the publication "Blaze-IT: a lightweight BERT model for the Italian language", which has been accepted at AILC CLiC-IT 2023 and published in the conference proceedings.
You can find the proceedings here: https://clic2023.ilc.cnr.it/proceedings/
And the published paper here: https://ceur-ws.org/Vol-3596/paper43.pdf

Model description

The model builds on mMiniLMv2 [1] (Microsoft's L6xH384 mMiniLMv2) as a starting point, focusing it on the Italian language and at the same time turning it into an uncased model by modifying the embedding layer (as in [2], but computing document-level frequencies over the Wikipedia dataset and setting a frequency threshold of 0.1%), which brings a considerable reduction in the number of parameters.
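
The vocabulary-reduction step is not released as code here, but a minimal sketch of the idea could look like the following, assuming the multilingual base tokenizer and an iterable of Italian Wikipedia documents (the checkpoint name, the corpus iterable, and the helper function are illustrative):

from collections import Counter
from transformers import AutoTokenizer

# illustrative: the multilingual base tokenizer (mMiniLMv2 reuses the XLM-RoBERTa vocabulary)
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

def select_vocabulary(documents, threshold=0.001):
    # keep only the token ids whose document-level frequency reaches the 0.1% threshold
    doc_freq = Counter()
    n_docs = 0
    for doc in documents:
        n_docs += 1
        # lowercase first, since the target model is uncased
        ids = set(tokenizer(doc.lower(), add_special_tokens=False)["input_ids"])
        doc_freq.update(ids)
    return {tok_id for tok_id, df in doc_freq.items() if df / n_docs >= threshold}

The embedding matrix (and the tied output projection) would then be sliced down to the selected rows, as in [2].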

To compensate for the removal of cased tokens, which forces the model to rely on lowercase representations of previously capitalized words, the model has been further pre-trained on the Italian split of the Wikipedia dataset, using the whole word masking [3] technique to make it more robust to the new uncased representations.
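
As an illustration of the masking strategy (not the authors' exact implementation): with a SentencePiece vocabulary, pieces that do not start with "▁" continue the previous word, so whole word masking selects words at random and masks all of their pieces together, here at the standard 15% rate.

import random

def whole_word_mask(tokens, mask_token="<mask>", mask_prob=0.15):
    # group subword pieces into whole words ("▁" marks the start of a new word)
    words, current = [], []
    for i, tok in enumerate(tokens):
        if tok.startswith("▁") and current:
            words.append(current)
            current = []
        current.append(i)
    if current:
        words.append(current)
    # mask entire words until roughly mask_prob of the tokens are covered
    random.shuffle(words)
    budget = round(mask_prob * len(tokens))
    masked, covered = list(tokens), 0
    for word in words:
        if covered >= budget:
            break
        for i in word:
            masked[i] = mask_token
        covered += len(word)
    return masked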

The resulting model has 17M parameters, a vocabulary of 14,610 tokens, and a size of 67MB, which makes it 85% lighter than a typical monolingual BERT model and 75% lighter than a standard monolingual DistilBERT model.
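
For reference, a monolingual BERT-base checkpoint has roughly 110M parameters and a DistilBERT one roughly 66M, so 17M parameters amount to about 15% and 25% of those sizes, consistent with the reduction figures above.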

Training procedure

The model has been trained for masked language modeling on the Italian Wikipedia (~3GB) dataset for 10K steps, using the AdamW optimizer, with a batch size of 512 (obtained through 128 gradient accumulation steps), a sequence length of 512, and a linearly decaying learning rate starting from 5e-5. The training has been performed using dynamic masking between epochs and exploiting the whole word masking technique.
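
A hedged sketch of this setup with the transformers Trainer follows; the Wikipedia snapshot, the per-device batch size, and the collator choice are assumptions rather than the authors' script (DataCollatorForWholeWordMask targets WordPiece-style tokenizers, so a SentencePiece vocabulary may need an adapted grouping rule).

from datasets import load_dataset
from transformers import (AutoTokenizer, XLMRobertaForMaskedLM,
                          DataCollatorForWholeWordMask, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("osiria/flare-it")
model = XLMRobertaForMaskedLM.from_pretrained("osiria/flare-it")

wiki = load_dataset("wikipedia", "20220301.it", split="train")  # assumed snapshot

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = wiki.map(tokenize, batched=True, remove_columns=wiki.column_names)

args = TrainingArguments(
    output_dir="flare-it-mlm",
    max_steps=10_000,                  # 10K steps, as reported
    learning_rate=5e-5,                # linearly decaying from 5e-5 (AdamW by default)
    lr_scheduler_type="linear",
    per_device_train_batch_size=4,     # assumed; 4 x 128 accumulation steps = batch size 512
    gradient_accumulation_steps=128,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    # masks are recomputed on the fly by the collator (dynamic masking)
    data_collator=DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=0.15),
)
trainer.train()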

Performance

The following metrics have been computed on the Part of Speech Tagging and Named Entity Recognition tasks, using the UD Italian ISDT and WikiNER datasets, respectively. The PoS tagging model has been trained for 5 epochs, and the NER model for 3 epochs, both with a constant learning rate fixed at 1e-5. For Part of Speech Tagging, the metrics have been computed on the default test set provided with the dataset, while for Named Entity Recognition the metrics have been computed with 5-fold cross-validation.

Task                     | Recall | Precision | F1
-------------------------|--------|-----------|------
Part of Speech Tagging   | 95.64  | 95.32     | 95.45
Named Entity Recognition | 82.27  | 80.64     | 81.29

The metrics have been computed at the token level and macro-averaged over the classes.
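
A hedged sketch of the token-classification fine-tuning follows; the exact UD Italian ISDT / WikiNER preprocessing is not reproduced here, so WikiANN's Italian split stands in as an illustrative substitute, while the hyperparameters mirror the reported ones.

from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          DataCollatorForTokenClassification, Trainer, TrainingArguments)

dataset = load_dataset("wikiann", "it")  # illustrative substitute for WikiNER
labels = dataset["train"].features["ner_tags"].feature.names

tokenizer = AutoTokenizer.from_pretrained("osiria/flare-it")
model = AutoModelForTokenClassification.from_pretrained("osiria/flare-it", num_labels=len(labels))

def tokenize_and_align(batch):
    enc = tokenizer(batch["tokens"], is_split_into_words=True, truncation=True)
    enc["labels"] = []
    for i, tags in enumerate(batch["ner_tags"]):
        word_ids = enc.word_ids(batch_index=i)
        # label only the first sub-token of each word, ignore the rest (-100)
        previous, aligned = None, []
        for wid in word_ids:
            aligned.append(-100 if wid is None or wid == previous else tags[wid])
            previous = wid
        enc["labels"].append(aligned)
    return enc

tokenized = dataset.map(tokenize_and_align, batched=True,
                        remove_columns=dataset["train"].column_names)

args = TrainingArguments(
    output_dir="flare-it-ner",
    num_train_epochs=3,            # 5 epochs for PoS tagging, 3 for NER
    learning_rate=1e-5,            # constant learning rate, as reported
    lr_scheduler_type="constant",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()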

Demo

You can try the model online (fine-tuned on named entity recognition) using this web app: https://huggingface.co/spaces/osiria/flare-it-demo

Quick usage

from transformers import AutoTokenizer, XLMRobertaForMaskedLM
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("osiria/flare-it")
model = XLMRobertaForMaskedLM.from_pretrained("osiria/flare-it")
pipeline_mlm = pipeline(task="fill-mask", model=model, tokenizer=tokenizer)

# example: fill the mask (XLM-RoBERTa tokenizers use <mask> as the mask token)
pipeline_mlm("roma è la capitale dell'<mask>")

Limitations

This lightweight model has been further pre-trained on Wikipedia, so it is particularly suitable as an agile analyzer for large volumes of natively digital text from the world wide web, written in a correct and fluent form (like wikis, web pages, news, etc.). However, it may show limitations on noisy text containing errors and slang expressions (like social media posts) or on domain-specific text (like medical, financial, or legal content).

References

[1] MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers. https://arxiv.org/abs/2012.15828

[2] Load What You Need: Smaller Versions of Multilingual BERT. https://arxiv.org/abs/2010.05609

[3] Pre-Training with Whole Word Masking for Chinese BERT. https://arxiv.org/abs/1906.08101

License

The model is released under the MIT license.
