---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι .
---

# Fine-tuned Flair Model on AjMC English NER Dataset (HIPE-2022)

This Flair model was fine-tuned on the [AjMC English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using [hmBERT Tiny](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased) as the backbone language model.

The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project.

The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
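# Usage

The tagger can be loaded like any other Flair `SequenceTagger`. The snippet below is a minimal sketch: the model identifier is a placeholder for the repository ID of this model, the tag type `"ner"` is assumed, and the example sentence is simply the widget text from the metadata above.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Placeholder: replace with the actual repository ID of this model on the Hugging Face Hub.
tagger = SequenceTagger.load("stefan-it/<this-model-repository>")

# Example sentence taken from the widget text in the model card metadata.
sentence = Sentence("Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι .")

# Run NER prediction and print the predicted entity spans.
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):  # "ner" is the assumed tag type
    print(entity)
```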
# Results

We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:

* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`

and report the micro F1-score on the development set (the mean ± standard deviation in each row is taken over the five seeds; a short sketch reproducing these values can be found at the end of this card):

| Configuration     | Seed 1       | Seed 2          | Seed 3       | Seed 4       | Seed 5       | Average         |
|-------------------|--------------|-----------------|--------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.5579][1]  | [**0.5082**][2] | [0.5434][3]  | [0.4949][4]  | [0.4882][5]  | 0.5185 ± 0.0306 |
| `bs8-e10-lr5e-05` | [0.5301][6]  | [0.468][7]      | [0.525][8]   | [0.4989][9]  | [0.4707][10] | 0.4985 ± 0.0292 |
| `bs4-e10-lr3e-05` | [0.4972][11] | [0.427][12]     | [0.4745][13] | [0.4394][14] | [0.4249][15] | 0.4526 ± 0.0319 |
| `bs8-e10-lr3e-05` | [0.4402][16] | [0.3531][17]    | [0.4141][18] | [0.3808][19] | [0.4073][20] | 0.3991 ± 0.0333 |

[1]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5

The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.

More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).

# Acknowledgements

We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.

Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many thanks for providing access to the TPUs ❤️
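As referenced in the Results section, the per-configuration averages appear to be plain means with sample standard deviations over the five seeds. A minimal sketch reproducing the first row of the table (the scores are copied from the table; no Flair code is involved):

```python
from statistics import mean, stdev

# Development-set micro F1-scores of the five seeds for `bs4-e10-lr5e-05` (copied from the table above).
scores = [0.5579, 0.5082, 0.5434, 0.4949, 0.4882]

# Should print "0.5185 ± 0.0306", matching the reported average for that configuration.
print(f"{mean(scores):.4f} ± {stdev(scores):.4f}")
```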