Update README.md
README.md CHANGED
@@ -39,12 +39,14 @@ The fields are post id (`id`), the content of the haiku (`processed_title`), upv

## Usage

-
+This dataset is intended for evaluation, hence there is only one split, which is `test`.

```python
from datasets import load_dataset

-d = load_dataset('huanggab/reddit_haiku')
+d = load_dataset('huanggab/reddit_haiku', data_files={'test': 'merged_with_keywords.csv'})  # use data_files, or loading will result in an error
>>> print(d['test'][0])
#{'Unnamed: 0': 0, 'id': '1020ac', 'processed_title': "There's nothing inside/There is nothing outside me/I search on in hope.", 'ups': 5, 'keywords': "[('inside', 0.5268), ('outside', 0.3751), ('search', 0.3367), ('hope', 0.272)]"}
```
+
+There is code for scraping and processing in `processing_code`, and a subset of the data with more fields, such as author karma, downvotes, and posting time, at `processing_code/reddit-2022-10-20-dump.csv`.
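The `keywords` field in the sample row above is stored as a stringified list of `(word, score)` tuples, and the haiku's three lines are joined with `/` in `processed_title`. A minimal sketch of turning one row back into usable Python values (the row literal below is copied from the sample output rather than fetched from the Hub):

```python
import ast

# Row copied from the sample output above (assumption: all rows share this shape).
row = {
    'id': '1020ac',
    'processed_title': "There's nothing inside/There is nothing outside me/I search on in hope.",
    'ups': 5,
    'keywords': "[('inside', 0.5268), ('outside', 0.3751), ('search', 0.3367), ('hope', 0.272)]",
}

# Split the haiku back into its three lines.
lines = row['processed_title'].split('/')

# Parse the stringified keyword list into (word, score) tuples,
# then pick the highest-scoring keyword.
keywords = ast.literal_eval(row['keywords'])
top_word, top_score = max(keywords, key=lambda kw: kw[1])
```

`ast.literal_eval` is safer than `eval` here, since it only accepts Python literals.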