# Reddit Randomness Dataset

A dataset I created because I was curious about how "random" r/random really is.

This data was collected by sending GET requests to https://www.reddit.com/r/random for a few hours on September 19th, 2021. I also scraped a bit of metadata about the subreddits. `randomness_12k_clean.csv` records the random subreddits in the order they occurred, and `summary.csv` lists some metadata about each subreddit.
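The collection approach can be sketched as follows. This is a hypothetical reconstruction, not the author's actual scraper: r/random answers a GET request with a 302 redirect whose `Location` header names the randomly chosen subreddit, so polling amounts to reading that header without following the redirect.

```python
from urllib.parse import urlparse

def subreddit_from_location(location: str) -> str:
    """Extract the subreddit name from a redirect target such as
    'https://www.reddit.com/r/formula1/'."""
    path = urlparse(location).path           # e.g. '/r/formula1/'
    parts = [p for p in path.split("/") if p]  # ['r', 'formula1']
    return parts[1] if len(parts) >= 2 and parts[0] == "r" else ""

# One polling step (requires network access, so left as a comment):
# import requests
# resp = requests.get("https://www.reddit.com/r/random",
#                     headers={"User-Agent": "randomness-survey"},
#                     allow_redirects=False)
# if resp.status_code == 302:
#     name = subreddit_from_location(resp.headers["Location"])

print(subreddit_from_location("https://www.reddit.com/r/formula1/"))
```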
## The Data

### randomness_12k_clean.csv

This file is a record of the 12,055 successful results I got from r/random. Each row represents one result.

#### Fields

- `subreddit`: The name of the subreddit that the scraper received from r/random (string)
- `response_code`: The HTTP response code the scraper received when it sent a GET request to r/random (int, always `302`)
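Loading the file and checking the always-302 invariant is a one-liner with pandas. The sketch below uses a small in-memory stand-in for the CSV so it runs anywhere; in practice you would pass the real file path to `pd.read_csv`.

```python
import io
import pandas as pd

# Stand-in for randomness_12k_clean.csv (same two columns; not real data).
sample = io.StringIO(
    "subreddit,response_code\n"
    "changemyview,302\n"
    "Terraform,302\n"
    "formula1,302\n"
)
df = pd.read_csv(sample)  # in practice: pd.read_csv("randomness_12k_clean.csv")

# Every successful result should carry a 302 redirect code.
assert (df["response_code"] == 302).all()
print(len(df))
```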
### summary.csv

As the name suggests, this file summarizes `randomness_12k_clean.csv` into the information I cared about when I analyzed this data. Each row represents one of the 3,679 unique subreddits and includes some stats about the subreddit as well as the number of times it appears in the results.

#### Fields

- `subreddit`: The name of the subreddit (string, unique)
- `subscribers`: How many subscribers the subreddit had (int, max of `99_886`)
- `current_users`: How many users accessed the subreddit in the past 15 minutes (int, max of `999`)
- `creation_date`: Date that the subreddit was created (`YYYY-MM-DD`, or `Error:PrivateSub`, or `Error:Banned`)
- `date_accessed`: Date that I collected the values in `subscribers` and `current_users` (`YYYY-MM-DD`)
- `time_accessed_UTC`: Time that I collected the values in `subscribers` and `current_users`, reported in UTC+0 (`HH:MM:SS`)
- `appearances`: How many times the subreddit shows up in `randomness_12k_clean.csv` (int, max of `9`)
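The `appearances` column can be reproduced from the raw file with a `value_counts` over the `subreddit` column. A sketch with a stand-in frame (in practice, read the real `randomness_12k_clean.csv`):

```python
import pandas as pd

# Stand-in rows; in practice: raw = pd.read_csv("randomness_12k_clean.csv")
raw = pd.DataFrame({"subreddit": ["kitchener", "formula1", "kitchener"]})

# Count how often each subreddit appears in the results.
appearances = (raw["subreddit"].value_counts()
               .rename_axis("subreddit")
               .reset_index(name="appearances"))
print(appearances)
```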
## Missing Values and Quirks

In the `summary.csv` file, there are three missing values. After I collected the number of subscribers and the number of current users, I went back about a week later to collect the creation date of each subreddit. In that week, three subreddits had been banned or taken private. I filled in the values with a descriptive string:

- SomethingWasWrong (`Error:PrivateSub`)
- HannahowoOnlyfans (`Error:Banned`)
- JanetGuzman (`Error:Banned`)
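Because of these sentinel strings, `creation_date` cannot be parsed as a date column directly. One hedged approach: `pd.to_datetime` with `errors="coerce"` turns the two `Error:*` strings into `NaT` while parsing the valid dates.

```python
import pandas as pd

# Stand-in for summary.csv's creation_date column (not real data).
dates = pd.Series(["2010-01-25", "Error:PrivateSub", "Error:Banned"])

# Valid YYYY-MM-DD strings parse; the sentinel strings become NaT.
parsed = pd.to_datetime(dates, format="%Y-%m-%d", errors="coerce")
print(int(parsed.isna().sum()))
```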
I think there are a few NSFW subreddits in the results, even though I only queried r/random and not r/randnsfw. As a simple example, searching the data for "nsfw" shows that I got the subreddit r/nsfwanimegifs twice.
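That spot-check can be done with a case-insensitive substring search over the subreddit names (sketch with a stand-in series; in practice, use the `subreddit` column of `summary.csv`):

```python
import pandas as pd

# Stand-in for summary.csv's subreddit column (not real data).
names = pd.Series(["nsfwanimegifs", "formula1", "Chonkers"])

# Case-insensitive match for "nsfw" anywhere in the name.
nsfw_hits = names[names.str.contains("nsfw", case=False)]
print(list(nsfw_hits))
```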
## License

This dataset is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0/. Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/.