---
annotations_creators:
  - machine-generated
language_creators:
  - found
language:
  - ru
size_categories:
  - 10K<n<100K
license:
  - unknown
multilinguality:
  - monolingual
source_datasets:
  - original
---

# Dataset Card for RuTextSegWiki

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
  - [Usage](#usage)
  - [Other datasets](#other-datasets)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Splits](#data-splits)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

### Dataset Summary

A dataset for automatic text segmentation of Russian Wikipedia articles. The text corpus is based on the May 2023 Wikipedia dump. The markup was generated automatically using two methods: taking texts that already have a division into paragraphs, and randomly joining parts of different texts.
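As a rough illustration of the second method, here is a minimal sketch of random joining. The `random_join` helper, the number of fragments, and the sentence-level input format are assumptions for illustration only, not the dataset's actual generation code:

```python
import random

def random_join(texts, k=3):
    """Illustrative sketch (not the dataset's real pipeline): join random
    contiguous fragments of k different texts into one training sample.

    `texts` is a list of documents, each given as a list of sentences.
    Returns the joined sentences and a parallel list of labels, where 1
    marks the first sentence of a fragment (a new topic) and 0 marks a
    continuation of the previous topic.
    """
    sentences, labels = [], []
    for doc in random.sample(texts, k):
        # Take a random contiguous slice of the document.
        start = random.randrange(len(doc))
        end = random.randrange(start + 1, len(doc) + 1)
        fragment = doc[start:end]
        sentences.extend(fragment)
        labels.extend([1] + [0] * (len(fragment) - 1))
    return sentences, labels
```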

### Supported Tasks and Leaderboards

The dataset is designed for the text segmentation task: given a text split into sentences, predict which sentences start a new topic.

### Languages

The dataset is in Russian.

### Usage

```python
from datasets import load_dataset

dataset = load_dataset('mlenjoyneer/RuTextSegWiki')
```

### Other datasets

- [mlenjoyneer/RuTextSegNews](https://huggingface.co/datasets/mlenjoyneer/RuTextSegNews) - a similar dataset based on a news corpus

## Dataset Structure

### Data Instances

Each instance contains a list of strings with the text's sentences, a list of ints with the corresponding labels (1 marks a sentence that starts a new topic, 0 a sentence that continues the previous one), and a string naming the sample generation method (base or random_joining).
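For example, a single instance can be inspected as below. Note that the exact column names are not listed in this card, so `sentences`, `labels`, and `method` are assumptions:

```python
from datasets import load_dataset

dataset = load_dataset('mlenjoyneer/RuTextSegWiki')
sample = dataset['train'][0]

# Column names below are assumed, not confirmed by the card.
print(sample['sentences'][:3])  # first three sentences of the text
print(sample['labels'][:3])     # 1 = new topic starts here, 0 = continuation
print(sample['method'])         # 'base' or 'random_joining'
```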

### Data Splits

| Dataset Split | Number of Instances in Split |
| ------------- | ---------------------------- |
| Train         | 20000                        |
| Test          | 4000                         |

## Additional Information

### Licensing Information

In progress

### Citation Information

In progress