numbers1234567 committed on
Commit
9f0eba0
1 Parent(s): b2c2c96

Create README.md

Files changed (1)
  1. README.md +81 -0
README.md ADDED

---
task_categories:
- feature-extraction
- text-generation
language:
- en
size_categories:
- 10K<n<100K
---

# Reddit Popular Dataset

A dataset of 10,000 posts that appeared on /r/popular on Reddit.

## Dataset Details

The Reddit API limits how many posts one can retrieve from a specific subreddit to 1,000. This dataset contains data for almost all posts that appeared on /r/popular from *Saturday, July 27, 2024 9:23:51 PM GMT* to *Saturday, August 24, 2024 9:48:19 PM GMT*.

Additional data such as comments, scores, and media were obtained by *Friday, November 15, 2024 5:00:00 AM GMT*.

### The Media Directory

This directory is a dump of all media in the dataset. It contains only PNGs.
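
Since the directory contains only PNGs, one quick sanity check is to enumerate them with a glob. A minimal sketch, assuming the dump sits in a local directory named media/ (the actual directory name is not pinned down in this README):

```python
from pathlib import Path

# "media" is an assumed directory name; point this at the actual dump location.
png_paths = sorted(Path("media").glob("*.png"))
print(len(png_paths), "images found")
```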

### ID Files

This dataset contains two files for identification: **main.csv** and **media.csv**.

### main.csv Fields

*main.csv* includes metadata and text data about each post:

- post_id: int - A unique, dataset-specific identifier for each post.
- create_utc: int - The time the post was created, in seconds since the Unix epoch (see the example below).
- post_url: string - The URL of the post. This can be used to collect further data depending on your purposes.
- title: string - The title of the post.
- comment[1-3]: string|nan - The text of the i-th top-scoring comment.
- comment[1-3]_score: int|nan - The score of the i-th top-scoring comment.
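
For example, the epoch-second create_utc field can be converted to readable timestamps with pandas. A minimal sketch (the created_at column name is illustrative; loading follows the Usage Guide below):

```python
import csv

import pandas as pd

df_main = pd.read_csv("main.csv", sep="\t", quoting=csv.QUOTE_NONE)

# unit="s" interprets create_utc as seconds since the Unix epoch.
df_main["created_at"] = pd.to_datetime(df_main["create_utc"], unit="s", utc=True)
print(df_main[["post_id", "created_at", "title"]].head())
```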

### media.csv Fields

*media.csv* includes identifiers for media:

- post_id: int - Identifies the post the media is associated with. Refers to post_id in *main.csv*.
- media_path: str - Locates the file containing the media. This path is relative to *media.csv*'s directory, as sketched below.
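
Because media_path is relative to media.csv's directory, paths should be resolved against that directory whenever the working directory differs. A minimal sketch (dataset_root is a hypothetical location; adjust it to wherever media.csv actually lives):

```python
from pathlib import Path

# Hypothetical root; set this to the directory containing media.csv.
dataset_root = Path("dataset_root")

def resolve_media(media_path: str) -> Path:
    # Join the relative media_path against media.csv's directory.
    return dataset_root / media_path
```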

## Data Collection

From about *July 27, 2024* to *August 24, 2024*, a routine ran every 2 hours, scraping 200 posts from /r/popular through the Reddit API and saving the URL of every post to a database.
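
The collection script itself is not part of this dataset, but such a routine could look roughly like the following sketch using PRAW. The credentials, the save_url helper, and the choice of the hot listing are all assumptions, not the authors' actual code:

```python
import time

import praw  # third-party Reddit API wrapper

# Placeholder credentials; real values come from a registered Reddit app.
reddit = praw.Reddit(
    client_id="...",
    client_secret="...",
    user_agent="popular-scraper",
)

def save_url(url: str) -> None:
    # Hypothetical stand-in for the database write in the real routine.
    print(url)

while True:
    # /r/popular aggregates trending posts across subreddits.
    for submission in reddit.subreddit("popular").hot(limit=200):
        save_url("https://www.reddit.com" + submission.permalink)
    time.sleep(2 * 60 * 60)  # wait 2 hours between scrapes
```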

The script *collect_all_reddit.py* then created the dataset on *November 15, 2024*.

## Usage Guide

This guide uses pandas and PIL to load the data:

```python
import csv

import pandas as pd
from PIL import Image
```

Load the main and media data using

```python
# Files are tab-separated; QUOTE_NONE treats quote characters as ordinary text.
df_main = pd.read_csv("main.csv", sep="\t", quoting=csv.QUOTE_NONE)
df_media = pd.read_csv("media.csv", sep="\t", quoting=csv.QUOTE_NONE)
```

To create a combined language-image dataset, use an SQL-like left join:

```python
# A left join keeps every post, including those without media.
df_lang_img = pd.merge(df_main, df_media, how="left", on="post_id")
```

This creates a new dataframe with all the columns from *main.csv* and *media.csv*. In it, each post is repeated once for each associated image. If a post does not have an image, its *media_path* is NaN.

Let's consider one row:

```python
row = df_lang_img.iloc[0]
```

If that row has media, the image can be loaded with

```python
with Image.open(row["media_path"]) as im:
    im.show()
```
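
Since rows without media carry NaN in media_path, it is worth filtering them out before opening images in bulk. A minimal sketch:

```python
# Keep only rows that actually reference an image file.
df_with_media = df_lang_img.dropna(subset=["media_path"])

for _, row in df_with_media.head(3).iterrows():
    with Image.open(row["media_path"]) as im:
        print(row["post_id"], im.size)
```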