pszemraj committed on
Commit
0a8377c
1 Parent(s): 19ce822

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -2,6 +2,7 @@
license: mit
task_categories:
- summarization
+ - text2text-generation
language:
- en
size_categories:
@@ -32,16 +33,15 @@ The text in both the "article" and "summary" columns was processed to ensure tha
The length of each text sample was calculated in terms of tokens using the T5 tokenizer. The `calculate_token_length` function was used to encode each text sample using the tokenizer and return the number of resulting tokens. The resulting token lengths were added as new columns to the dataframes.
## Data Format

- The resulting processed datasets are saved in separate directories as parquet files. The directories are named according to the dataset and split name, and each directory contains three parquet files for the train, test, and validation splits.
+ The resulting processed data files are stored in Apache Parquet and can be loaded using the `pandas` library or the `datasets` library from Hugging Face. The relevant column names and data types for summarization are:

- The datasets can be loaded using the `pandas` library or using the `datasets` library from the Hugging Face transformers package. The column names and data types are as follows:
- `article`: the scientific article text (string)
- `summary`: the lay summary text (string)
- - `article_length`: the length of the article in terms of tokens (int)
- - `summary_length`: the length of the summary in terms of tokens (int)
+ - `article_length`: the length of the article in tokens (int)
+ - `summary_length`: the length of the summary in tokens (int)
## Usage

- To use the processed datasets, load the desired parquet file(s) using `pandas` or `datasets`. Here is an example using `pandas`:
+ Load the desired parquet file(s) using `pandas` or `datasets`. Here is an example using `pandas`:

```python
# download the dataset files by clicking on 'use in datasets' and cloning
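
# a minimal sketch of the loading step, assuming the cloned repo contains
# per-split parquet files named train/validation/test.parquet; the actual
# file names and layout may differ
import pandas as pd

train_df = pd.read_parquet("train.parquet")  # hypothetical path to the train split

# the columns documented above
print(train_df.columns.tolist())  # ['article', 'summary', 'article_length', 'summary_length']
print(train_df["summary"].iloc[0])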
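
# a hedged sketch of the token-length computation described above: the README
# only states that `calculate_token_length` encoded each text with the T5
# tokenizer and returned the token count, so the checkpoint ("t5-base") and
# the exact signature here are assumptions
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")

def calculate_token_length(text: str) -> int:
    # encode the text and count the resulting tokens
    return len(tokenizer.encode(text, truncation=False))

# e.g. recompute the stored lengths as a quick sanity check
train_df["article_length_check"] = train_df["article"].apply(calculate_token_length)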