
This model predicts the punctuation of Dutch texts. We developed it to restore the punctuation of transcribed spoken language. The model was trained on the Europarl dataset and restores the following punctuation markers: "." "," "?" "-" ":"
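If you prefer to call the model directly rather than through the helper package described below, a minimal sketch using the Hugging Face transformers token-classification pipeline could look like this (the variable names are our own; predictions are made per subword token, which is one reason the helper package below is more convenient for full texts):

from transformers import pipeline

# Load the model as a token-classification pipeline: each token is tagged with
# the punctuation marker that should follow it, or "0" for no punctuation.
punct = pipeline(
    "token-classification",
    model="oliverguhr/fullstop-dutch-punctuation-prediction",
)

text = "hervatting van de zitting ik verklaar de zitting van het europees parlement"
for prediction in punct(text):
    print(prediction["word"], prediction["entity"], prediction["score"])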

Sample Code

We provide a simple Python package that allows you to process texts of any length.

Install

To get started, install the package from PyPI:

pip install deepmultilingualpunctuation

Restore Punctuation

from deepmultilingualpunctuation import PunctuationModel

# Load the Dutch punctuation model and restore punctuation on an unpunctuated input.
model = PunctuationModel(model="oliverguhr/fullstop-dutch-punctuation-prediction")
text = "hervatting van de zitting ik verklaar de zitting van het europees parlement die op vrijdag 17 december werd onderbroken te zijn hervat"
result = model.restore_punctuation(text)
print(result)

output

hervatting van de zitting ik verklaar de zitting van het europees parlement, die op vrijdag 17 december werd onderbroken, te zijn hervat.

Predict Labels

from deepmultilingualpunctuation import PunctuationModel

# Instead of restoring punctuation directly, predict a punctuation label
# (or "0" for none) and a confidence score for every word.
model = PunctuationModel(model="oliverguhr/fullstop-dutch-punctuation-prediction")
text = "hervatting van de zitting ik verklaar de zitting van het europees parlement die op vrijdag 17 december werd onderbroken te zijn hervat"
clean_text = model.preprocess(text)
labeled_words = model.predict(clean_text)
print(labeled_words)

output

[['hervatting', '0', 0.9999777], ['van', '0', 0.99998415], ['de', '0', 0.999987], ['zitting', '0', 0.9992779], ['ik', '0', 0.9999889], ['verklaar', '0', 0.99998295], ['de', '0', 0.99998856], ['zitting', '0', 0.9999895], ['van', '0', 0.9999902], ['het', '0', 0.999992], ['europees', '0', 0.9999924], ['parlement', ',', 0.9915131], ['die', '0', 0.99997807], ['op', '0', 0.9999882], ['vrijdag', '0', 0.9999746], ['17', '0', 0.99998784], ['december', '0', 0.99997866], ['werd', '0', 0.9999888], ['onderbroken', ',', 0.99287957], ['te', '0', 0.9999864], ['zijn', '0', 0.99998176], ['hervat', '.', 0.99762934]]
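Each entry is a [word, label, confidence] triple, where the label "0" means that no punctuation follows the word. As a minimal sketch (not part of the package API; the rebuild_text name and the confidence threshold are our own), the predictions can be turned back into punctuated text like this:

def rebuild_text(labeled_words, threshold=0.5):
    # Append the predicted marker to each word; "0" means no punctuation.
    # Predictions below the confidence threshold are ignored.
    tokens = []
    for word, label, score in labeled_words:
        if label != "0" and score >= threshold:
            tokens.append(word + label)
        else:
            tokens.append(word)
    return " ".join(tokens)

print(rebuild_text(labeled_words))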

Results

The performance differs across the individual punctuation markers, because hyphens and colons are, in many cases, optional and can be substituted by a comma or a full stop. The model achieves the following F1 scores (a small evaluation sketch for computing such scores on your own data follows the table):

Label            F1 (Dutch)
0                0.993588
.                0.961450
?                0.848506
,                0.810883
:                0.655212
-                0.461591
macro average    0.788538
micro average    0.983492
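If you want to compute comparable per-label scores on your own data, a minimal sketch using scikit-learn could look like this (the gold and predicted label lists are placeholders; in practice they would come from your reference annotations and from model.predict):

from sklearn.metrics import classification_report

# Placeholder word-level labels; replace with your own reference and predicted labels.
gold = ["0", "0", "0", ",", "0", "."]
pred = ["0", "0", "0", ",", "0", "0"]

print(classification_report(gold, pred, digits=6))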

How to cite us

@misc{https://doi.org/10.48550/arxiv.2301.03319,
  doi = {10.48550/ARXIV.2301.03319},
  url = {https://arxiv.org/abs/2301.03319},
  author = {Vandeghinste, Vincent and Guhr, Oliver},
  keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, I.2.7},
  title = {FullStop: Punctuation and Segmentation Prediction for Dutch with Transformers},
  publisher = {arXiv},
  year = {2023},
  copyright = {Creative Commons Attribution Share Alike 4.0 International}
}