---
license: mit
task_categories:
- summarization
- text2text-generation
language:
- en
size_categories:
- 10K<n<100K
source_datasets: tomasg25/scientific_lay_summarisation
---
# scientific_lay_summarisation - PLOS - normalized
This dataset is a modified version of [tomasg25/scientific_lay_summarisation](https://huggingface.co/datasets/tomasg25/scientific_lay_summarisation) and contains scientific lay summaries that have been preprocessed [with this code](https://gist.github.com/pszemraj/bd344637af7c0c10ecf4ab62c4d0ce91). The preprocessing fixes punctuation and whitespace problems and computes the token length of each text sample with a T5 tokenizer.
Original dataset details:
- **Repository:** https://github.com/TGoldsack1/Corpora_for_Lay_Summarisation
- **Paper:** [Making Science Simple: Corpora for the Lay Summarisation of Scientific Literature](https://arxiv.org/abs/2210.09932)
## Data Cleaning
The text in both the `article` and `summary` columns was processed so that punctuation and whitespace are consistent. The `fix_punct_whitespace` function (sketched after the list below) was applied to each text sample to:
- Remove spaces before punctuation marks (except for parentheses)
- Add a space after punctuation marks (except for parentheses) if missing
- Handle spaces around parentheses
- Add a space after a closing parenthesis if followed by a word or opening parenthesis
- Handle spaces around quotation marks
- Handle spaces around single quotes
- Handle commas in numbers
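The exact implementation is in the linked gist; the snippet below is only a minimal, regex-based sketch of rules of this kind (quote handling is omitted for brevity, and the patterns are illustrative rather than the original code).
```python
import re

def fix_punct_whitespace(text: str) -> str:
    """Illustrative approximation of the cleaning rules listed above."""
    # Remove spaces before punctuation marks (but not before parentheses)
    text = re.sub(r"\s+([.,;:!?])", r"\1", text)
    # Add a missing space after punctuation marks when a word follows
    text = re.sub(r"([.,;:!?])([A-Za-z])", r"\1 \2", text)
    # Handle spaces around parentheses: "( word )" -> "(word)"
    text = re.sub(r"\(\s+", "(", text)
    text = re.sub(r"\s+\)", ")", text)
    # Add a space after a closing parenthesis followed by a word or "("
    text = re.sub(r"\)([A-Za-z(])", r") \1", text)
    # Handle commas in numbers: "3, 349" -> "3,349"
    text = re.sub(r"(\d),\s+(\d)", r"\1,\2", text)
    # Collapse any remaining runs of whitespace
    text = re.sub(r"\s{2,}", " ", text)
    return text.strip()
```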
## Tokenization
The length of each text sample was calculated in terms of tokens using the T5 tokenizer. The `calculate_token_length` function encodes each text sample with the tokenizer and returns the number of resulting tokens. The resulting token lengths were added to the dataframes as the `article_length` and `summary_length` columns.
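A minimal sketch of such a helper using the `transformers` library; the specific T5 checkpoint is an assumption here, as the card only states that a T5 tokenizer was used:
```python
from transformers import AutoTokenizer

# "t5-base" is an assumption; any T5 tokenizer exposes the same interface
tokenizer = AutoTokenizer.from_pretrained("t5-base")

def calculate_token_length(text: str) -> int:
    """Return the number of T5 tokens produced for `text`."""
    return len(tokenizer.encode(text, truncation=False))

print(calculate_token_length("Scientific lay summarisation makes papers accessible."))
```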
## Data Format
The processed data files are stored in Apache Parquet format and can be loaded with the `pandas` library or the Hugging Face `datasets` library. The relevant columns for summarization are `article` (input) and `summary` (target); the full feature list and split sizes are:
```python
DatasetDict({
    train: Dataset({
        features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
        num_rows: 24773
    })
    test: Dataset({
        features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
        num_rows: 1376
    })
    validation: Dataset({
        features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
        num_rows: 1376
    })
})
```
## Usage
Load the desired parquet file(s) using `pandas` or `datasets`. Here is an example using `pandas`:
```python
# first download the dataset files (click 'use in datasets' on the Hub page and clone)
import pandas as pd
# Load train set
df = pd.read_parquet("scientific_lay_summarisation-plos-norm/train.parquet")
print(df.info())
```
And here is an example using `datasets`:
```python
from datasets import load_dataset
dataset = load_dataset("pszemraj/scientific_lay_summarisation-plos-norm")
train_set = dataset['train']
# Print the first few samples
for i in range(5):
    print(train_set[i])
```
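Because the token lengths are precomputed, they can be used to filter samples without re-tokenizing. Continuing from the snippet above (the 16384-token threshold is only an example):
```python
# keep only articles that fit a hypothetical 16384-token budget
filtered = dataset["train"].filter(lambda x: x["article_length"] <= 16384)
print(f"{len(filtered)} of {len(dataset['train'])} articles kept")
```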
## Token Lengths
For the train split:
![train-lengths](https://i.imgur.com/EXfC9kz.png)
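A similar histogram can be produced from the precomputed length columns, for example with `pandas` and `matplotlib` (this is not necessarily how the figure above was generated, and the bin count is arbitrary):
```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_parquet("scientific_lay_summarisation-plos-norm/train.parquet")
df[["article_length", "summary_length"]].hist(bins=50)
plt.tight_layout()
plt.show()
```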