---
language:
- en
size_categories:
- 10K<n<100K
source_datasets: tomasg25/scientific_lay_summarisation
---

# scientific_lay_summarisation - PLOS - normalized

This dataset is a modified version of [tomasg25/scientific_lay_summarization](https://huggingface.co/datasets/tomasg25/scientific_lay_summarisation) and contains scientific lay summaries that have been preprocessed [with this code](https://gist.github.com/pszemraj/bd344637af7c0c10ecf4ab62c4d0ce91). The preprocessing includes fixing punctuation and whitespace problems, and calculating the token length of each text sample using a tokenizer from the T5 model.

Original dataset details:

- **Repository:** https://github.com/TGoldsack1/Corpora_for_Lay_Summarisation
- **Paper:** [Making Science Simple: Corpora for the Lay Summarisation of Scientific Literature](https://arxiv.org/abs/2210.09932)
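
The token lengths mentioned above were computed with a T5 tokenizer. As a minimal sketch of that computation (assuming the `transformers` library and the `t5-base` checkpoint, neither of which this card confirms; the exact setup is in the linked gist):

```python
from transformers import AutoTokenizer

# assumed checkpoint; the tokenizer actually used for preprocessing is in the linked gist
tokenizer = AutoTokenizer.from_pretrained("t5-base")

def token_length(text: str) -> int:
    # count T5 tokens without truncating long articles
    return len(tokenizer(text, truncation=False).input_ids)

print(token_length("A lay summary explains research for a general audience."))
```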

## Data Cleaning

The text in both the "article" and "summary" columns was processed to ensure that punctuation and whitespace were consistent. The `fix_punct_whitespace` function was applied to each text sample to enforce this.
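
The actual implementation of `fix_punct_whitespace` lives in the preprocessing gist linked above; purely as an illustration of the kind of normalization involved (not the author's code), a minimal sketch might look like:

```python
import re

def fix_punct_whitespace(text: str) -> str:
    # illustrative only; the real function is in the linked gist
    text = re.sub(r"\s+([,.;:!?])", r"\1", text)           # drop spaces before punctuation
    text = re.sub(r"([,.;:!?])(?=[^\s\d])", r"\1 ", text)  # add a space after punctuation (skips decimals like 3.14)
    text = re.sub(r"\s{2,}", " ", text)                    # collapse repeated whitespace
    return text.strip()

print(fix_punct_whitespace("Hello ,world .  This  is a test ."))
# -> "Hello, world. This is a test."
```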

To use the processed datasets, load the desired parquet file(s) using `pandas` or `datasets`. Here is an example using `pandas`:

```python
# download the dataset files by clicking on 'use in datasets' and cloning
import pandas as pd

# Load train set
df = pd.read_parquet("scientific_lay_summarisation-plos-norm/train.parquet")
print(df.info())
```
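
Because token lengths are precomputed, rows can be filtered by length without re-tokenizing. Continuing from the `df` loaded above, a sketch that assumes the length columns are named `article_length` and `summary_length` (hypothetical names; check `df.columns` to confirm):

```python
# hypothetical column names; verify with df.columns before relying on them
short_enough = df[(df["article_length"] <= 4096) & (df["summary_length"] <= 512)]
print(f"{len(short_enough)} of {len(df)} rows fit this length budget")
```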

And here is an example using `datasets`:

```python
from datasets import load_dataset

dataset = load_dataset("pszemraj/scientific_lay_summarisation-plos-norm")
train_set = dataset["train"]

# Print the first few samples
for i in range(5):
    print(train_set[i])
```