|
--- |
|
license: apache-2.0 |
|
task_categories: |
|
- text-generation |
|
language: |
|
- ru |
|
size_categories: |
|
- 1K<n<10K |
|
--- |
|
|
|
# Dataset Card for RuSpellGold |
|
|
|
## Dataset Description |
|
|
|
- **Paper:** # TODO |
|
- **ArXiv:** # TODO |
|
- **Point of Contact:** [email protected] |
|
- **Language:** Russian |
|
|
|
### Dataset Summary |
|
|
|
RuSpellGold is a benchmark of 1711 sentence pairs dedicated to the problem of automatic spelling correction in the Russian language. The dataset is gathered from five different domains: news, Russian classic literature, social media texts, the open web, and strategic documents. It has passed through a two-stage manual labeling process in which native-speaker annotators corrected spelling violations while preserving the original style of the text.
|
|
|
## Dataset Structure |
|
|
|
### Supported Tasks and Leaderboards |
|
- **Task:** automatic spelling correction. |
|
- **Metrics:** the metrics described in https://www.dialog-21.ru/media/3427/sorokinaaetal.pdf.
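The linked paper defines alignment-based correction metrics. As a rough illustration only (not the official evaluation script), the sketch below computes word-level precision, recall, and F1 under the simplifying assumption that source, gold correction, and model hypothesis have aligned tokens:

```python
# Simplified word-level precision/recall/F1 for spelling correction.
# NOTE: this is an illustrative sketch, not the official metric script;
# it assumes the three sentences tokenize to the same number of words.

def word_level_prf(source: str, correction: str, hypothesis: str):
    src, gold, hyp = source.split(), correction.split(), hypothesis.split()
    assert len(src) == len(gold) == len(hyp), "sketch assumes aligned tokens"
    # True positive: the model changed a word and matched the gold correction.
    tp = sum(1 for s, g, h in zip(src, gold, hyp) if h == g and g != s)
    # False positive: the model changed a word but did not match the gold.
    fp = sum(1 for s, g, h in zip(src, gold, hyp) if h != s and h != g)
    # False negative: the gold changed a word, but the model's word differs from it.
    fn = sum(1 for s, g, h in zip(src, gold, hyp) if g != s and h != g)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For a perfect hypothesis (identical to the gold correction), all three scores are 1.0; leaving the source unchanged yields zero recall.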
|
|
|
|
|
### Languages |
|
Russian. |
|
|
|
### Data Instances |
|
``` |
|
{ |
|
"sources": "Видела в городе афиши, анонсрующие ее концерт.", |
|
"corrections": "Видела в городе афиши, анонсирующие её концерт", |
|
"domain": "aranea" |
|
} |
|
``` |
|
|
|
|
|
### Data Fields |
|
|
|
- ```sources (str)```: original sentence. |
|
- ```corrections (str)```: corrected sentence. |
|
- ```domain (str)```: the domain from which the sentence is taken.
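The fields above can be read with the standard ```csv``` module. A minimal sketch, assuming the CSV header row matches the field names listed above (the inline sample stands in for ```"data/test.csv"```):

```python
import csv
import io

# Inline sample mirroring the assumed layout of "data/test.csv":
# a header row with "sources,corrections,domain", comma-separated,
# with quoted fields where sentences contain commas.
sample_csv = io.StringIO(
    "sources,corrections,domain\n"
    '"Видела в городе афиши, анонсрующие ее концерт.",'
    '"Видела в городе афиши, анонсирующие её концерт",aranea\n'
)

rows = list(csv.DictReader(sample_csv))
for row in rows:
    print(row["domain"], "|", row["sources"], "->", row["corrections"])
```

To load the real files, replace the in-memory sample with an open file handle for the split you need.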
|
|
|
### Data Splits |
|
|
|
The current version of the benchmark contains only a test split:
|
|
|
- ```test```: 1711 sentence pairs (```"data/test.csv"```). |
|
|
|
which is further split into the following domain-related shards:
|
|
|
- ```aranea```: 756 sentence pairs (```"data/aranea/split.csv"```); |
|
- ```literature```: 260 sentence pairs (```"data/literature/split.csv"```); |
|
- ```news```: 245 sentence pairs (```"data/news/split.csv"```); |
|
- ```social_media```: 200 sentence pairs (```"data/social_media/split.csv"```); |
|
- ```strategic_documents```: 250 sentence pairs (```"data/strategic_documents/split.csv"```).
|
|
|
|
|
|
|
## Dataset Creation |
|
|
|
### Source Data |
|
|
|
|Source |Strategy |Domain | |
|
|---|---|---| |
|
|Vladimír Benko. 2014. Aranea: Yet another family of (comparable) web corpora. // Text, Speech and Dialogue: 17th International Conference, TSD 2014, Brno, Czech Republic, September 8-12, 2014. Proceedings 17, P 247–256. Springer| Random sentences from Araneum Russicum|Open web (aranea) | |
|
| Russian classic literature aggregated in this [corpus](https://www.kaggle.com/datasets/d0rj3228/russian-literature) | Random sentences | Literature | |
|
|Ilya Gusev. 2020. Dataset for automatic summarization of Russian news. // Artificial Intelligence and Natural Language: 9th Conference, AINL 2020, Helsinki, Finland, October 7–9, 2020, Proceedings 9, P 122–134. Springer | Random sentences | News |
|
|Social media platforms | Posts from social media platforms marked with specific hashtags | Social Media | |
|
|Vitaly Ivanin, Ekaterina Artemova, Tatiana Batura, Vladimir Ivanov, Veronika Sarkisyan, Elena Tutubalina, and Ivan Smurov. 2020. Rurebus-2020 shared task: Russian relation extraction for business. // Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference “Dialog” [Komp’iuternaia Lingvistika i Intellektual’nye Tehnologii: Trudy Mezhdunarodnoj Konferentsii “Dialog”], Moscow, Russia. | Random sentences | Strategic documents | |
|
|
|
|
|
### Annotations |
|
|
|
#### Annotation process |
|
All of the sentences undergo a two-stage annotation procedure on [Toloka](https://toloka.ai), a crowd-sourcing platform for data labeling. |
|
|
|
Each stage includes an unpaid training phase with explanations, control tasks for tracking annotation quality, and the main annotation task. Before starting, a worker is given detailed instructions describing the task, explaining the labels, and showing plenty of examples. |
|
The instructions are available at any time during both the training and main annotation phases. To gain access to the main phase, the worker must first complete the training phase by labeling more than 70% of its examples correctly. To ensure high-quality expertise in spelling, we set up an additional test phase on a small portion of the data, manually revised the results, and approved only those annotators who managed to avoid any mistakes.
|
|
|
- **Stage 1: Data gathering** |
|
We provide annotators with texts containing possible mistakes and ask them to rewrite each sentence correctly while preserving the original style markers of the text.
|
|
|
- **Stage 2: Validation** |
|
We provide annotators with a pair of sentences (the original and its corresponding correction from the previous stage) and ask them to check whether the correction is right.
|
|
|
|
|
### Personal and Sensitive Information |
|
|
|
Each annotator is warned about potentially sensitive topics in data (e.g., politics, societal minorities, and religion). |
|
|
|
|
|
## Additional Information |
|
|
|
### Dataset Curators |
|
|
|
Correspondence: ```[email protected]``` |
|
|
|
### Licensing Information |
|
|
|
The corpus is available under the Apache 2.0 license. The copyright (where applicable) of texts from the linguistic publications and resources remains with the original authors or publishers. |
|
|
|
|
|
### Other |
|
|
|
Please refer to our paper # TODO for more details. |