---
language:
- pl
- cs
- ru
tags:
- mT5
- lemmatization
license: apache-2.0
---

# SlavLemma Base

SlavLemma models are intended for lemmatization of named entities and multi-word expressions in Polish, Czech, and Russian. They were fine-tuned from the google/mT5 models; this variant was fine-tuned from [google/mt5-base](https://huggingface.co/google/mt5-base).

## Usage

When using the model, prepend one of the language tokens (`>>pl<<`, `>>cs<<`, `>>ru<<`) to the input, based on the language of the phrase you want to lemmatize.

Sample usage:

```
from transformers import pipeline

pipe = pipeline(task="text2text-generation", model="amu-cai/slavlemma-base", tokenizer="amu-cai/slavlemma-base")

# Prepend the language token (>>pl<<, >>cs<< or >>ru<<) matching the language of the phrase.
hyp = [res['generated_text'] for res in pipe([">>pl<< federalnego urzędu statystycznego"], clean_up_tokenization_spaces=True, num_beams=5)][0]
print(hyp)  # predicted lemma of the input phrase
```

## Evaluation results

Lemmatization exact match (%) was computed on the SlavNER 2021 test sets (COVID-19 and USA 2020 Elections).

COVID-19:

| Model | pl | cs | ru |
| :------ | ------: | ------: | ------: |
| [slavlemma-large](https://huggingface.co/amu-cai/slavlemma-large) | 93.76 | 89.80 | 77.30 |
| [slavlemma-base](https://huggingface.co/amu-cai/slavlemma-base) | 91.00 | 86.29 | 76.10 |
| [slavlemma-small](https://huggingface.co/amu-cai/slavlemma-small) | 86.80 | 80.98 | 73.83 |

USA 2020 Elections:

| Model | pl | cs | ru |
| :------ | ------: | ------: | ------: |
| [slavlemma-large](https://huggingface.co/amu-cai/slavlemma-large) | 89.12 | 87.27 | 82.50 |
| [slavlemma-base](https://huggingface.co/amu-cai/slavlemma-base) | 84.19 | 81.97 | 80.27 |
| [slavlemma-small](https://huggingface.co/amu-cai/slavlemma-small) | 78.85 | 75.86 | 76.18 |

## Citation

If you use the model, please cite the following paper:

```
@inproceedings{palka-nowakowski-2023-exploring,
    title = "Exploring the Use of Foundation Models for Named Entity Recognition and Lemmatization Tasks in {S}lavic Languages",
    author = "Pa{\l}ka, Gabriela and Nowakowski, Artur",
    booktitle = "Proceedings of the 9th Workshop on Slavic Natural Language Processing 2023 (SlavicNLP 2023)",
    month = may,
    year = "2023",
    address = "Dubrovnik, Croatia",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.bsnlp-1.19",
    pages = "165--171",
    abstract = "This paper describes Adam Mickiewicz University{'}s (AMU) solution for the 4th Shared Task on SlavNER. The task involves the identification, categorization, and lemmatization of named entities in Slavic languages. Our approach involved exploring the use of foundation models for these tasks. In particular, we used models based on the popular BERT and T5 model architectures. Additionally, we used external datasets to further improve the quality of our models. Our solution obtained promising results, achieving high metrics scores in both tasks. We describe our approach and the results of our experiments in detail, showing that the method is effective for NER and lemmatization in Slavic languages. Additionally, our models for lemmatization will be available at: https://huggingface.co/amu-cai.",
}
```

### Framework versions

- Transformers 4.26.0
- PyTorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
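
### Computing exact match (sketch)

For reference, the sketch below shows one way to compute the lemmatization exact-match score reported above from lists of predicted and gold lemmas. It is an illustrative sketch only: the normalization applied (whitespace stripping, lowercasing) is an assumption and may differ from the official SlavNER 2021 scoring procedure.

```
# Illustrative sketch only: exact-match percentage between predicted and gold lemmas.
# The normalization below (strip + lowercase) is an assumption, not necessarily the
# official SlavNER 2021 scoring procedure.
def exact_match(predictions, references):
    assert len(predictions) == len(references) and references
    hits = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return 100.0 * hits / len(references)

print(exact_match(["federalny urząd statystyczny"], ["federalny urząd statystyczny"]))  # 100.0
```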