---
license: cc-by-sa-4.0
task_categories:
- text-generation
- text2text-generation
- summarization
language:
- en
---
# Dataset Card for AbLit

## Dataset Description

- **Homepage:** https://github.com/roemmele/AbLit
- **Repository:** https://github.com/roemmele/AbLit
- **Paper:** https://arxiv.org/pdf/2302.06579.pdf
- **Point of Contact:** [email protected]

### Dataset Summary

The AbLit dataset contains **ab**ridged versions of 10 classic English **lit**erature books, aligned with their original versions on various passage levels.
The abridgements were written and made publicly available by Emma Laybourn [here](http://www.englishliteratureebooks.com/classicnovelsabridged.html).
This is the first known dataset for NLP research that focuses on the abridgement task. 

See the paper for a detailed description of the dataset, as well as the results of several modeling experiments. The GitHub repo also provides more extensive ways to interact with the data beyond what is provided here. 

### Languages

English

## Dataset Structure

Each passage in the original version of a book chapter is aligned with its corresponding passage in the abridged version. These aligned pairs are available at several passage sizes: sentences, paragraphs, multi-paragraph "chunks", and whole chapters. The passage size is specified when loading the dataset, and there are train/dev/test splits for items of each size.

| Passage Size | Description | # Train | # Dev | # Test |
| ------------ | ----------- | ------- | ----- | ------ |
| chapters | Each passage is a single chapter | 808 | 10 | 50 |
| sentences | Each passage is a sentence, delimited by the NLTK sentence tokenizer | 122,219 | 1,143 | 10,431 |
| paragraphs | Each passage is a paragraph, delimited by a line break | 37,227 | 313 | 3,125 |
| chunks-10-sentences | Each passage consists of up to 10 sentences, which may span more than one paragraph; to derive chunks of other lengths, see the GitHub repo above | 14,857 | 141 | 1,264 |

#### Example Usage

To load aligned passages, pass the desired passage size as the configuration name, e.g. for chunks of up to 10 sentences:
```python
from datasets import load_dataset
data = load_dataset("ablit", "chunks-10-sentences")
```

### Data Fields

- Original: passage text in the original version
- Abridged: passage text in the abridged version
- Book: title of the book containing the passage
- Chapter: title of the chapter containing the passage

## Dataset Creation

### Curation Rationale

Abridgement is the task of making a text easier to understand while preserving its linguistic qualities. Abridgements are different from typical summaries: whereas summaries abstractively describe the original text, abridgements simplify the original primarily through a process of extraction. We present this dataset to promote further research on modeling the abridgement process. 

### Source Data

The author Emma Laybourn wrote abridged versions of classic English literature books available through Project Gutenberg. She has also provided her abridgements for free on her [website](http://www.englishliteratureebooks.com/classicnovelsabridged.html). This is how she describes her work: “This is a collection of famous novels which have been shortened and slightly simplified for the general reader. These are not summaries; each is half to two-thirds of the original length. I’ve selected works that people often find daunting because of their density or complexity: the aim is to make them easier to read, while keeping the style intact.”
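Laybourn's "half to two-thirds of the original length" description suggests a simple sanity check one might run over aligned pairs: the ratio of abridged to original length. A minimal sketch, where the abridged sentence is an invented illustration rather than Laybourn's actual text:

```python
def length_ratio(original: str, abridged: str) -> float:
    """Ratio of abridged to original passage length, in words."""
    return len(abridged.split()) / len(original.split())

original = ("It is a truth universally acknowledged, that a single man "
            "in possession of a good fortune, must be in want of a wife.")
abridged = ("It is well known that a single man with a good fortune "
            "must want a wife.")
print(round(length_ratio(original, abridged), 2))  # → 0.7
```

Aggregating this ratio over a split gives a quick picture of how much compression the abridgements apply at each passage size.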

#### Initial Data Collection and Normalization

We obtained the original and abridged versions of the books from the respective websites.

#### Who are the source language producers?

Emma Laybourn

### Annotations

#### Annotation process

We designed a procedure for automatically aligning passages between the original and abridged version of each chapter. We conducted a human evaluation to verify these alignments had high accuracy. The training split of the dataset has ~99% accuracy. The dev and test splits of the dataset were fully human-validated to ensure 100% accuracy. See the paper for further explanation.
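For intuition only, here is a toy version of passage alignment based on lexical overlap; the actual procedure described in the paper is more sophisticated, so treat this as an illustrative sketch, not the dataset's method:

```python
def overlap(a: str, b: str) -> float:
    """Jaccard overlap between the word sets of two passages."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

# Align an abridged passage to the original passage with highest overlap.
originals = [
    "The carriage rolled slowly through the misty lanes.",
    "She had never before felt such apprehension.",
]
abridged = "She had never felt such apprehension."
best = max(originals, key=lambda o: overlap(o, abridged))
print(best)  # → "She had never before felt such apprehension."
```

Because abridgements are largely extractive, even a crude lexical measure like this recovers many alignments; the human evaluation described above is what guarantees the reported accuracy.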

#### Who are the annotators?

The alignment accuracy evaluation was conducted by the authors of the paper, who have expertise in linguistics and NLP.

### Personal and Sensitive Information

None

## Considerations for Using the Data

### Social Impact of Dataset

We hope this dataset will promote more research on the authoring process for producing abridgements, including models for automatically generating them. Because abridgement is a labor-intensive writing task, there are relatively few abridged versions of books. Systems that automatically produce abridgements could vastly expand the number of abridged books and thus increase their readership.

### Discussion of Biases

We present this dataset to introduce abridgement as an NLP task, but these abridgements are scoped to one small set of texts associated with a specific domain and author. There are significant practical reasons for this limited scope. In particular, in contrast to the books in AbLit, most recently published books are not included in publicly accessible datasets due to copyright restrictions, and the same restrictions typically apply to any abridgements of those books. For this reason, AbLit consists of British English literature from the 18th and 19th centuries. Some of the linguistic properties of these original books do not generalize to other types of English texts that would be beneficial to abridge. Moreover, the narrow cultural perspective reflected in these books is certainly not representative of the diverse modern population, and readers may find some content at odds with their own values.

### Dataset Curators

The curators are the authors of the paper.

### Licensing Information

cc-by-sa-4.0

### Citation Information

Roemmele, Melissa, Kyle Shaffer, Katrina Olsen, Yiyi Wang, and Steve DeNeefe. "AbLit: A Resource for Analyzing and Generating Abridged Versions of English Literature." Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume (2023).