
Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)

This Flair model was fine-tuned on the German HIPE-2020 NER dataset using hmBERT 64k as the backbone LM.

The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. More information can be found here.

The following NEs were annotated: loc, org, pers, prod, time and comp.
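
The model can be used out of the box with Flair. A minimal usage sketch (the example sentence is only illustrative):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned NER model from the Hugging Face Hub.
tagger = SequenceTagger.load("hmbert-64k/flair-hipe-2022-hipe2020-de")

# Tag an example sentence (illustrative historic German text).
sentence = Sentence("Theodor Fontane wurde in Neuruppin geboren .")
tagger.predict(sentence)

# Print the predicted entity spans with their labels.
for span in sentence.get_spans("ner"):
    print(span)
```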

Results

We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:

  • Batch Sizes: [4, 8]
  • Learning Rates: [3e-05, 5e-05]

We report the micro F1-score on the development set:

| Configuration   | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average         |
|-----------------|--------|--------|--------|--------|--------|-----------------|
| bs8-e10-lr3e-05 | 0.7869 | 0.7909 | 0.7897 | 0.7868 | 0.7836 | 0.7876 ± 0.0028 |
| bs4-e10-lr3e-05 | 0.7814 | 0.7767 | 0.7783 | 0.7747 | 0.7826 | 0.7787 ± 0.0033 |
| bs8-e10-lr5e-05 | 0.7761 | 0.7680 | 0.7910 | 0.7758 | 0.7806 | 0.7783 ± 0.0084 |
| bs4-e10-lr5e-05 | 0.7714 | 0.7733 | 0.7723 | 0.7739 | 0.7746 | 0.7731 ± 0.0013 |
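
The reported averages are the mean with the sample standard deviation over the five seeds, e.g. for the best configuration:

```python
from statistics import mean, stdev

# Development-set F1-scores of bs8-e10-lr3e-05 across the five seeds.
scores = [0.7869, 0.7909, 0.7897, 0.7868, 0.7836]

# Mean ± sample standard deviation, matching the table above.
print(f"{mean(scores):.4f} ± {stdev(scores):.4f}")  # 0.7876 ± 0.0028
```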

The training log and TensorBoard logs (not available for the hmBERT Base model) are also uploaded to the model hub.

More information about fine-tuning can be found here.
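
The exact training script is not part of this card, but a minimal fine-tuning sketch with the Flair API looks as follows. The dataset loader, the backbone checkpoint id (dbmdz/bert-base-historic-multilingual-64k-td-cased) and the architecture settings are assumptions, not the verbatim configuration:

```python
from flair.datasets import NER_HIPE_2022
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Load the German HIPE-2020 split of the HIPE-2022 shared task data.
corpus = NER_HIPE_2022(dataset_name="hipe2020", language="de")
label_dictionary = corpus.make_label_dictionary(label_type="ner")

# hmBERT 64k as backbone LM, fine-tuned end to end.
embeddings = TransformerWordEmbeddings(
    model="dbmdz/bert-base-historic-multilingual-64k-td-cased",
    layers="-1",
    subtoken_pooling="first",
    fine_tune=True,
)

# Plain linear tag head on top of the transformer (no CRF, no RNN).
tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=label_dictionary,
    tag_type="ner",
    use_crf=False,
    use_rnn=False,
    reproject_embeddings=False,
)

# One configuration from the grid above (bs8-e10-lr3e-05); repeat over
# batch sizes, learning rates and seeds for the full search.
trainer = ModelTrainer(tagger, corpus)
trainer.fine_tune(
    "resources/taggers/hipe2020-de",
    learning_rate=3e-5,
    mini_batch_size=8,
    max_epochs=10,
)
```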

Acknowledgements

We thank Luisa März, Katharina Schmid and Erion Çano for their fruitful discussions about Historic Language Models.

Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC). Many thanks for providing access to the TPUs ❤️
