Update README.md
README.md
CHANGED
@@ -20,7 +20,9 @@ language:
 
 The AbLit dataset contains **ab**ridged versions of 10 classic English **lit**erature books, aligned with their original versions on various passage levels.
 The abridgements were written and made publicly available by Emma Laybourn [here](http://www.englishliteratureebooks.com/classicnovelsabridged.html).
-This is the first known dataset for NLP research that focuses on the abridgement task.
+This is the first known dataset for NLP research that focuses on the abridgement task.
+
+See the paper for a detailed description of the dataset, as well as the results of several modeling experiments. The GitHub repo also provides more extensive ways to interact with the data beyond what is provided here.
 
 ### Languages
 
@@ -31,7 +33,7 @@ English
 Each passage in the original version of a book chapter is aligned with its corresponding passage in the abridged version. These aligned pairs are available for various passage sizes: sentences, paragraphs, and multi-paragraph "chunks". The passage size is specified when loading the dataset. There are train/dev/test splits for items of each size.
 
 | Passage Size | Description | # Train | # Dev | # Test |
-
+| --------------------- | ------------- | ------- | ------- | ------- |
 | chapters | Each passage is a single chapter | 808 | 10 | 50 |
 | sentences | Each passage is a sentence delimited by the NLTK sentence tokenizer | 122,219 | 1,143 | 10,431 |
 | paragraphs | Each passage is a paragraph delimited by a line break | 37,227 | 313 | 3,125 |
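Since the passage size is selected when loading the dataset, a minimal loading sketch may be useful. This is an illustration only: the Hub dataset ID (`roemmele/ablit`) and the use of the passage size as the config name are assumptions not stated in this diff, so check the dataset card and GitHub repo for the exact identifiers.

```python
# Minimal sketch of loading AbLit with the Hugging Face `datasets` library.
# ASSUMPTIONS: the dataset ID "roemmele/ablit" and the config name "paragraphs"
# are illustrative; verify both against the dataset card before use.
from datasets import load_dataset

# One config per passage size described above (e.g. "sentences", "paragraphs", "chapters").
ablit = load_dataset("roemmele/ablit", "paragraphs")

# Train/dev/test splits are provided for each passage size.
print(ablit)
print(ablit["train"][0])  # one aligned original/abridged passage pair
```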