KPTimes is a large-scale dataset comprising 279,923 news articles from the New York Times. The dataset is randomly divided into train (92.8%), validation (3.6%), and test (3.6%) splits. To help models trained on this dataset generalize beyond a single source (NY Times), the authors added 10K more articles from the JPTimes dataset. They collected free-to-read article URLs from NY Times spanning 2006 to 2017 and obtained the corresponding HTML pages from the Internet Archive. They cleaned the HTML tags and extracted the title and main content of each article using heuristics. The gold keyphrases were obtained from the metadata fields *news_keywords* and *keywords*.
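The keyphrase-extraction step described above can be sketched as follows. This is an illustrative reconstruction only, not the authors' actual code: the class name, the separator heuristics, and the sample HTML are assumptions; the real pipeline used its own cleaning heuristics.

```python
from html.parser import HTMLParser

class KeyphraseMetaParser(HTMLParser):
    """Collects the content of <meta name="news_keywords"> and
    <meta name="keywords"> tags -- the metadata fields the authors
    used as gold keyphrases. Illustrative sketch, not the original code."""

    def __init__(self):
        super().__init__()
        self.keyphrases = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        content = attrs.get("content")
        if attrs.get("name") in ("news_keywords", "keywords") and content:
            # Assumed heuristic: keyphrases are semicolon- or comma-separated.
            for sep in (";", ","):
                if sep in content:
                    self.keyphrases += [k.strip() for k in content.split(sep)]
                    break
            else:
                self.keyphrases.append(content.strip())

# Hypothetical snippet of a cleaned article page:
html_doc = (
    '<html><head>'
    '<meta name="news_keywords" content="economy;inflation;federal reserve"/>'
    '</head></html>'
)
parser = KeyphraseMetaParser()
parser.feed(html_doc)
print(parser.keyphrases)  # → ['economy', 'inflation', 'federal reserve']
```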
<br>
<p align="center">
<img src="https://huggingface.co/datasets/midas/kptimes/resolve/main/KPTimesExample.png" alt="KPTimes sample" width="90%"/>
<br>
</p>
<br>
## Dataset Structure