---
task_categories:
  - feature-extraction
  - text-generation
language:
  - en
size_categories:
  - 10K<n<100K
---

# Reddit Popular Dataset

A dataset of 10,000 posts that appeared on /r/popular on Reddit.

## Dataset Details

The Reddit API limits how many posts can be retrieved from a specific subreddit to 1,000. This dataset contains data for almost all posts that appeared on /r/popular from Saturday, July 27, 2024 9:23:51 PM GMT to Saturday, August 24, 2024 9:48:19 PM GMT.

Additional data such as comments, scores, and media were obtained by Friday, November 15, 2024 5:00:00 AM GMT.

## The Media Directory

The media directory is a dump of all media in the dataset. It contains only PNG files.

## ID Files

This dataset contains two files for identification: `main.csv` and `media.csv`.

### main.csv Fields

`main.csv` includes metadata and text data about each post:

- `post_id`: int - A unique, dataset-specific identifier for each post.
- `create_utc`: int - The time the post was created, in epoch seconds (see the example after this list).
- `post_url`: string - The URL of the post. This can be used to collect further data depending on your purposes.
- `title`: string - Title of the post.
- `comment[1-3]`: string|nan - The text of the i-th top-scoring comment.
- `comment[1-3]_score`: int|nan - The score of the i-th top-scoring comment.
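
For example, the epoch-time `create_utc` field can be converted to a readable timestamp with pandas. This is a minimal sketch that loads `main.csv` the same way the Usage Guide below does; the `created_at` column name is purely illustrative:

```python
import csv

import pandas as pd

# Load main.csv (tab-separated, no quoting), as in the Usage Guide below
df_main = pd.read_csv("main.csv", sep="\t", quoting=csv.QUOTE_NONE)

# create_utc holds epoch seconds; convert it to timezone-aware timestamps
df_main["created_at"] = pd.to_datetime(df_main["create_utc"], unit="s", utc=True)
print(df_main[["post_id", "created_at", "title"]].head())
```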

### media.csv Fields

`media.csv` includes identifiers for media:

- `post_id`: int - Identifies the post the media is associated with. Refers to `post_id` in `main.csv`.
- `media_path`: str - Locates the file containing the media. This path is relative to `media.csv`'s directory (see the sketch after this list).
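
Because `media_path` is relative to `media.csv`'s directory, it may need to be resolved against the dataset directory before opening files. A minimal sketch, where `dataset_dir` is an illustrative variable pointing at the directory that contains `media.csv`:

```python
import csv
import os

import pandas as pd

dataset_dir = "."  # adjust to wherever media.csv lives
df_media = pd.read_csv(
    os.path.join(dataset_dir, "media.csv"), sep="\t", quoting=csv.QUOTE_NONE
)

# Resolve each relative media_path against the dataset directory
df_media["abs_path"] = df_media["media_path"].apply(
    lambda p: os.path.join(dataset_dir, p)
)
```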

## Data Collection

From about July 27, 2024 to August 24, 2024, a routine ran every 2 hours, scraping 200 posts from /r/popular through the Reddit API and saving the URL of every post to a database.

The script `collect_all_reddit.py` then created the dataset on November 15, 2024.
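
The collection code itself is not distributed with the dataset, so the routine below is only a rough sketch of what such a scraper might look like. It assumes the PRAW library and Reddit API credentials; the function name, SQLite schema, and `posts.db` path are all illustrative, not the actual implementation:

```python
import sqlite3

import praw  # third-party Reddit API wrapper; credentials are required


def save_popular_urls(db_path: str = "posts.db") -> None:
    """Illustrative sketch: record the current /r/popular posts in SQLite."""
    reddit = praw.Reddit(
        client_id="YOUR_CLIENT_ID",
        client_secret="YOUR_CLIENT_SECRET",
        user_agent="reddit-popular-scraper",
    )
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS posts (id TEXT PRIMARY KEY, url TEXT, created_utc REAL)"
    )
    # /r/popular's hot listing roughly corresponds to what the routine scraped
    for submission in reddit.subreddit("popular").hot(limit=200):
        con.execute(
            "INSERT OR IGNORE INTO posts VALUES (?, ?, ?)",
            (submission.id, submission.url, submission.created_utc),
        )
    con.commit()
    con.close()
```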

## Usage Guide

This guide uses pandas and PIL to load data:

```python
import pandas as pd
import csv

from PIL import Image
```

Load the main and media data using:

```python
df_main = pd.read_csv("main.csv", sep="\t", quoting=csv.QUOTE_NONE)
df_media = pd.read_csv("media.csv", sep="\t", quoting=csv.QUOTE_NONE)
```
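
A quick sanity check that both files loaded with the expected columns (names as listed under ID Files):

```python
print(df_main.shape, df_media.shape)
print(df_main.columns.tolist())
print(df_media.columns.tolist())
```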

To create a combined language-image dataset, use an SQL-like join:

```python
df_lang_img = pd.merge(df_main, df_media, how="left", on="post_id")
```

This creates a new dataframe with all the columns from `main.csv` and `media.csv`. In this new dataframe, each post is repeated once for each associated image. If a post does not have an image, its `media_path` is NaN.
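
If you only want posts that actually have an image, one option is to drop the rows whose `media_path` is NaN (`df_img_only` is an illustrative name):

```python
# Keep only the rows that have an associated image
df_img_only = df_lang_img.dropna(subset=["media_path"])
print(len(df_img_only), "rows with images out of", len(df_lang_img))
```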

Let's consider one row:

```python
row = df_lang_img.iloc[0]
```

Then, provided the row has an associated image (`media_path` is not NaN), the image can be loaded with

```python
with Image.open(row["media_path"]) as im:
    im.show()
```
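
To work through several images, for example the first few rows that have media, something along these lines should work, again assuming the relative `media_path` values resolve from the current working directory:

```python
# Iterate over a handful of rows that have an associated image
for _, r in df_lang_img.dropna(subset=["media_path"]).head(5).iterrows():
    with Image.open(r["media_path"]) as im:
        print(r["post_id"], im.size, r["title"])
```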