---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- ru
size_categories:
- 10K<n<100K
license:
- unknown
multilinguality:
- monolingual
source_datasets:
- original
---
# Dataset Card for RuTextSegWiki
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Usage](#usage)
- [Other datasets](#other-datasets)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
### Dataset Summary
A dataset for automatic text segmentation of Russian Wikipedia. The text corpus is based on the May 2023 Wikipedia dump. Markup was generated automatically using two methods: taking texts with an existing division into paragraphs (`base`) and randomly joining parts of different texts (`random_joining`).
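As a rough illustration, the `random_joining` method can be sketched as follows. This is a minimal sketch only; the function name, field names, and exact sampling procedure are assumptions, not the authors' actual generation code.

```python
import random

def random_joining(texts, k=2):
    """Join sentence lists from k different texts and label topic starts.

    texts: list of texts, each given as a list of sentences (list of str).
    Returns a dict shaped like a dataset instance (field names assumed).
    """
    segments = random.sample(texts, k)
    sentences, labels = [], []
    for segment in segments:
        for i, sentence in enumerate(segment):
            sentences.append(sentence)
            labels.append(1 if i == 0 else 0)  # 1 = a new topic starts here
    return {"sentences": sentences, "labels": labels, "method": "random_joining"}
```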
### Supported Tasks and Leaderboards
The dataset is designed for the text segmentation task.
### Languages
The dataset is in Russian.
### Usage
```python
from datasets import load_dataset
dataset = load_dataset('mlenjoyneer/RuTextSegWiki')
```
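After loading, a single example can be inspected like this (the column names follow the description in [Data Instances](#data-instances) below and are assumptions; verify them with `dataset.column_names`):

```python
sample = dataset["train"][0]
print(sample["sentences"][:3])  # first sentences of the text (assumed field name)
print(sample["labels"][:3])     # 1 = new topic start, 0 = continuation
print(sample["method"])         # "base" or "random_joining" (assumed field name)
```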
### Other datasets
[mlenjoyneer/RuTextSegNews](https://huggingface.co/datasets/mlenjoyneer/RuTextSegNews) is a similar dataset based on a news corpus.
## Dataset Structure
### Data Instances
Each instance contains a list of strings with the text's sentences, a list of ints with labels (1 marks the start of a new topic, 0 the continuation of the previous one), and a string with the sample generation method (`base` or `random_joining`).
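A schematic instance might look like the following (the sentence strings are placeholders and the field names are assumptions based on the description above):

```python
{
    "sentences": ["<first sentence of topic A>",
                  "<second sentence of topic A>",
                  "<first sentence of topic B>"],
    "labels": [1, 0, 1],
    "method": "random_joining"
}
```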
### Data Splits
| Dataset Split | Number of Instances in Split |
|:---------|:---------|
| Train | 20000 |
| Test | 4000 |
## Additional Information
### Licensing Information
In progress
### Citation Information
In progress