Update README.md

README.md CHANGED

@@ -2,6 +2,7 @@
 license: mit
 task_categories:
 - summarization
+- text2text-generation
 language:
 - en
 size_categories:
@@ -32,16 +33,15 @@ The text in both the "article" and "summary" columns was processed to ensure tha
 The length of each text sample was calculated in terms of tokens using the T5 tokenizer. The `calculate_token_length` function was used to encode each text sample using the tokenizer and return the number of resulting tokens. The resulting token lengths were added as new columns to the dataframes.
 ## Data Format
 
-The resulting processed
+The resulting processed data files are stored in Apache Parquet format and can be loaded using the `pandas` library or the `datasets` library from Hugging Face. The relevant column names and data types for summarization are:
 
-The datasets can be loaded using the `pandas` library or using the `datasets` library from the Hugging Face transformers package. The column names and data types are as follows:
 - `article`: the scientific article text (string)
 - `summary`: the lay summary text (string)
-- `article_length`: the length of the article in
-- `summary_length`: the length of the summary in
+- `article_length`: the length of the article in tokens (int)
+- `summary_length`: the length of the summary in tokens (int)
 ## Usage
 
-
+Load the desired parquet file(s) using `pandas` or `datasets`. Here is an example using `pandas`:
 
 ```python
 # download the dataset files by clicking on 'use in datasets' and cloning
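
The Data Format section in the diff above describes a four-column schema. As a minimal sketch of working with that schema in `pandas` (the rows below are made up for illustration; real data would come from one of the cloned parquet files via `pd.read_parquet`, and the 512-token cutoff is an arbitrary example, not part of the dataset card):

```python
import pandas as pd

# Hypothetical rows that mimic the dataset's schema; the real data would come
# from a cloned parquet file, e.g. pd.read_parquet("<file>.parquet").
df = pd.DataFrame(
    {
        "article": ["A long scientific article ...", "A shorter article ..."],
        "summary": ["A lay summary of the first.", "A lay summary of the second."],
        "article_length": [4096, 350],  # token counts (T5 tokenizer)
        "summary_length": [120, 80],
    }
)

# Typical use of the length columns: keep article/summary pairs whose
# article fits a model's input budget (512 tokens here, as an example).
fits = df[df["article_length"] <= 512]
print(len(fits))                # 1
print(fits["summary"].iloc[0])  # A lay summary of the second.
```

Because the token lengths are precomputed columns, this kind of filtering needs no tokenizer at load time.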