---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- ru
size_categories:
- 10K<n<100K
license:
- unknown
multilinguality:
- monolingual
source_datasets:
- original
---

# Dataset Card for RuTextSegWiki

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
  - [Usage](#usage)
  - [Other datasets](#other-datasets)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Splits](#data-splits)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

### Dataset Summary

A dataset for automatic text segmentation of the Russian Wikipedia. The text corpus is based on the May 2023 Wikipedia dump. The markup was generated automatically using two methods: taking texts with an existing division into paragraphs, and randomly joining parts of different texts.
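
To make the second method concrete, here is a minimal sketch of how random joining could produce segmentation labels. The helper name `random_join`, the field names, and the exact joining strategy are illustrative assumptions, not the authors' actual generation pipeline.

```python
import random

def random_join(texts, k=2):
    """Illustrative sketch (not the authors' pipeline): concatenate the
    sentences of k different texts and label the first sentence of each
    source text with 1 (a new topic starts there)."""
    parts = random.sample(texts, k)  # texts: list of lists of sentences
    sentences, labels = [], []
    for part in parts:
        for i, sent in enumerate(part):
            sentences.append(sent)
            labels.append(1 if i == 0 else 0)  # 1 = new topic, 0 = continuation
    return {"sentences": sentences, "labels": labels, "method": "random_joining"}
```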

### Supported Tasks and Leaderboards

The dataset is designed for the text segmentation task.

### Languages

The dataset is in Russian.

### Usage

```python
from datasets import load_dataset
dataset = load_dataset('mlenjoyneer/RuTextSegWiki')
```
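
To see which splits and fields are actually available, the loaded object can be inspected directly (a quick check, since the exact column names are not listed on this card):

```python
from datasets import load_dataset

dataset = load_dataset('mlenjoyneer/RuTextSegWiki')
print(dataset)              # available splits and their sizes
sample = dataset['train'][0]
print(sample.keys())        # the actual field names
```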

### Other datasets

[mlenjoyneer/RuTextSegNews](https://huggingface.co/datasets/mlenjoyneer/RuTextSegNews) - a similar dataset based on a news corpus

## Dataset Structure

### Data Instances

Each instance contains a list of strings with the text's sentences, a list of ints with the labels (1 marks the start of a new topic, 0 marks the continuation of the previous topic), and a string with the sample generation method (`base` or `random_joining`).
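
As an illustration of how the labels can be consumed, the sentences can be regrouped into topic segments by opening a new segment at every label of 1. The field names `sentences` and `labels` below are assumptions and should be checked against the actual column names.

```python
def labels_to_segments(sentences, labels):
    """Group sentences into topic segments: a label of 1 opens a new segment."""
    segments = []
    for sent, label in zip(sentences, labels):
        if label == 1 or not segments:
            segments.append([sent])
        else:
            segments[-1].append(sent)
    return segments

# Usage (field names assumed):
# sample = dataset['train'][0]
# segments = labels_to_segments(sample['sentences'], sample['labels'])
```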

### Data Splits

| Dataset Split | Number of Instances in Split |
|:---------|:---------|
| Train | 20000 |
| Test | 4000 |

## Additional Information

### Licensing Information

In progress

### Citation Information

```bibtex
In progress
```