dmahata committed
Commit
ea51f31
1 Parent(s): 9f52c40

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -15,7 +15,7 @@ Original source of the data - [https://github.com/ygorg/KPTimes](https://github.com/ygorg/KPTimes)
 
 KPTimes is a large-scale dataset comprising 279,923 news articles from the NY Times and 10K from JPTimes. It is one of the datasets whose keyphrase annotations were curated by editors, who can be considered experts. The authors' motivation for producing this dataset was to provide a large dataset for training neural keyphrase generation models in a domain other than the scientific one, and to understand the differences between keyphrases annotated by experts and non-experts. The authors show that editors tend to assign generic keyphrases that are not present in the news article's text, with 55% of them being abstractive keyphrases. The keyphrases in the news domain, as presented in this work, are also shorter on average (1.4 words) than those in scientific datasets (2.4 words).
 
-The dataset is randomly divided into train (92.8%), validation (3.6%), and test (3.6%) splits. To help models trained on this dataset generalize well, the authors did not want all of the data to come from a single source (the NY Times), and therefore added 10K more articles from the JPTimes dataset. The authors collected free-to-read article URLs from the NY Times spanning 2006 to 2017 and obtained the corresponding HTML pages from the Internet Archive. They cleaned the HTML tags and extracted the title and main content of the articles using heuristics. The gold keyphrases were obtained from the metadata fields *news_keywords* and *keywords*. The documents in the dataset are full-length news articles, which also makes it a suitable dataset for developing models that can identify keyphrases from long documents.
+The dataset is randomly divided into train (92.8%), validation (3.6%), and test (3.6%) splits. To help models trained on this dataset generalize well, the authors did not want all of the data to come from a single source (the NY Times), and therefore added 10K more articles from the JPTimes dataset. The authors collected free-to-read article URLs from the NY Times spanning 2006 to 2017 and obtained the corresponding HTML pages from the Internet Archive. They cleaned the HTML tags and extracted the title and main content of the articles using heuristics. The gold keyphrases were obtained from the metadata fields *news_keywords* and *keywords*. The documents in the dataset are full-length news articles, which also makes it a suitable dataset for developing models for identifying keyphrases from long documents.
 
 <br>
 <p align="center">
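For reference, a minimal sketch of loading the splits described in this README, assuming the dataset is published on the Hugging Face Hub under a repo id like `midas/kptimes` (the id is an assumption; adjust it to the actual repository):

```python
# Minimal sketch, not part of the commit above: load the KPTimes splits
# described in the README. The repo id "midas/kptimes" is an assumption.
from datasets import load_dataset

dataset = load_dataset("midas/kptimes")

# The README's split ratios: train ~92.8%, validation ~3.6%, test ~3.6%.
total = sum(len(dataset[split]) for split in dataset)
for split in dataset:
    print(f"{split}: {len(dataset[split])} examples ({len(dataset[split]) / total:.1%})")
```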