type (class label, 1 class)
| id (string, 7 chars) | subreddit.id (string, 1 value) | subreddit.name (string, 1 value) | subreddit.nsfw (bool) | created_utc (timestamp) | permalink (string, 61-109 chars) | body (string, 0-9.98k chars) | sentiment (float32, -1 to 1, nullable) | score (int32, -65 to 195) |
---|---|---|---|---|---|---|---|---|---|
comment
| hx4p4z7 | 2r97t | datasets | false | "2022-02-16T03:56:40Z" | https://old.reddit.com/r/datasets/comments/stjw48/help_finding_data_set_that_has_a_normal/hx4p4z7/ | Of people. Or animal breeds. Probably not rocket ships, or high-rise buildings. | 0 | 3 |
comment
| hx4nv4i | 2r97t | datasets | false | "2022-02-16T03:46:26Z" | https://old.reddit.com/r/datasets/comments/stjw48/help_finding_data_set_that_has_a_normal/hx4nv4i/ | Nope that’s definitely not what the CLT states. For example: If your dataset is compiled from the number of tails before the first head, then that’s definitely not normal. That’s negative binomial. It doesn’t matter how many records you have or how many times you do the experiment, it will always be negative binomial. CLT says if you take a random selection of those data points and take an average- then those averages will be normally distributed. | 0.0361 | 2 |
comment
| hx4nm6a | 2r97t | datasets | false | "2022-02-16T03:44:28Z" | https://old.reddit.com/r/datasets/comments/stjw48/help_finding_data_set_that_has_a_normal/hx4nm6a/ | May not satisfy the intent of the exercise.
Seems OP is tasked to find examples of normally-distributed data in real life.
I recall from my work in manufacturing variation simulation (so very many years ago!) quite a few machining operations have normally-distributed dimensional variations.
And no sorry I can’t remember which machining operations! If you can identify one or more operations that tend to have normally-distributed variations, you can probably find data sets with measurements from machined parts taken on an assembly line.
FWIW this is the product that I worked on in the 1980s and still in use….
https://www.geoplm.com/knowledge-base-resources/GEOPLM-Siemens-PLM-Tecnomatix-Variation-Analysis-fs_tcm1023-120264.pdf
Here is a short list of the most common real-life examples of normal distributions IRL:
https://galtonboard.com/probabilityexamplesinlife | -0.4295 | 3 |
comment
| hx4m2by | 2r97t | datasets | false | "2022-02-16T03:32:19Z" | https://old.reddit.com/r/datasets/comments/stjw48/help_finding_data_set_that_has_a_normal/hx4m2by/ | [deleted] | null | -1 |
comment
| hx4lrxs | 2r97t | datasets | false | "2022-02-16T03:30:04Z" | https://old.reddit.com/r/datasets/comments/stjw48/help_finding_data_set_that_has_a_normal/hx4lrxs/ | [deleted] | null | 1 |
comment
| hx4k433 | 2r97t | datasets | false | "2022-02-16T03:16:57Z" | https://old.reddit.com/r/datasets/comments/stjw48/help_finding_data_set_that_has_a_normal/hx4k433/ | This is the way. | 0 | 3 |
comment
| hx4jjo5 | 2r97t | datasets | false | "2022-02-16T03:12:28Z" | https://old.reddit.com/r/datasets/comments/stjw48/help_finding_data_set_that_has_a_normal/hx4jjo5/ | You actually could create your own normally distributed data using almost any dataset. The Central limit theorem says if you average a random selection of data from any set that has a defined Mean and Variance then that average would be normally distributed. So just create a bunch of averages and it should follow a normal distribution. | 0.539 | 8 |
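A minimal sketch of the "create a bunch of averages" approach described in the comment above (illustrative only; the exponential source distribution, sample size, and number of averages are arbitrary choices for the example):

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)

# A clearly non-normal source: right-skewed exponential "waiting times".
population = rng.exponential(scale=2.0, size=100_000)

# Central limit theorem in practice: average many random samples.
sample_size = 50      # points averaged per sample (arbitrary choice)
n_samples = 10_000    # how many averages to generate
means = np.array([
    rng.choice(population, size=sample_size, replace=False).mean()
    for _ in range(n_samples)
])

# The raw data are heavily skewed, but the sample means cluster
# symmetrically around the population mean (spread ~ sigma / sqrt(n)).
print("skew of raw data:     ", round(float(skew(population)), 3))  # ~2 for an exponential
print("skew of sample means: ", round(float(skew(means)), 3))       # close to 0
print("population mean:      ", round(float(population.mean()), 3))
print("mean of sample means: ", round(float(means.mean()), 3))
```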
comment
| hx4gug4 | 2r97t | datasets | false | "2022-02-16T02:51:59Z" | https://old.reddit.com/r/datasets/comments/stjw48/help_finding_data_set_that_has_a_normal/hx4gug4/ | Height and weight data tends to be normally distributed.
[Check this dataset out.](https://www.kaggle.com/mustafaali96/weight-height) | 0 | 6 |
comment
| hx3smul | 2r97t | datasets | false | "2022-02-15T23:51:34Z" | https://old.reddit.com/r/datasets/comments/stga82/what_zillow_dataset_backs_their_home_values_page/hx3smul/ | Try r/Zillow | null | 3 |
comment
| hx33vs9 | 2r97t | datasets | false | "2022-02-15T21:09:05Z" | https://old.reddit.com/r/datasets/comments/st5n7i/need_a_dataset_with_at_least_20_pairs_of_data_for/hx33vs9/ | [deleted] | null | 1 |
comment
| hx2yk7d | 2r97t | datasets | false | "2022-02-15T20:36:17Z" | https://old.reddit.com/r/datasets/comments/o0doy6/looking_for_a_person_able_to_scrape_crunchbase_vc/hx2yk7d/ | How much are they charging for the whole dataset? (Feel free to dm if you prefer :-)) | 0.7964 | 1 |
comment
| hx2evi8 | 2r97t | datasets | false | "2022-02-15T18:30:32Z" | https://old.reddit.com/r/datasets/comments/st39oo/dataset_for_yearly_global_cases_of_covid/hx2evi8/ | checkout this [repo](https://github.com/CSSEGISandData/COVID-19) that is maintained by the John Hopkins University Center for Systems Science and Engineering | 0 | 1 |
comment
| hx2deww | 2r97t | datasets | false | "2022-02-15T18:21:19Z" | https://old.reddit.com/r/datasets/comments/st4jtw/how_to_scrape_twitter_tweets_of_the_accounts_i_am/hx2deww/ | Think the easiest solution would be pulling names of accounts you're following into list and then just using that as the user names they are searching in the tutorial, otherwise there is a lot of applications out there to scrapes follower list, and you could pull your own follower list with that.
Can read more in depth tonight when I'm done with classes, but the main points with this would work on any public Twitter account | 0.3716 | 2 |
comment
| hx2asg2 | 2r97t | datasets | false | "2022-02-15T18:04:47Z" | https://old.reddit.com/r/datasets/comments/st4jtw/how_to_scrape_twitter_tweets_of_the_accounts_i_am/hx2asg2/ | Wow nice tutorial, however, would it work on the people I follow? Like can I adjust it so that it pulls tweets from all the people I follow on my account for 2/15/2022? | 0.8577 | 1 |
comment
| hx28mni | 2r97t | datasets | false | "2022-02-15T17:51:09Z" | https://old.reddit.com/r/datasets/comments/st67ox/computer_science_programs_graduate_university/hx28mni/ | Thanks for your reply! I indeed meant graduate programs. I think what you're saying is exactly what I figured; I just hoped somebody would have gone through the effort of doing this painful work of standardizing and putting everything together.
I guess I will try to stick with Times Higher Education then (although this misses some important features) | 0.2714 | 1 |
comment
| hx27wrt | 2r97t | datasets | false | "2022-02-15T17:46:39Z" | https://old.reddit.com/r/datasets/comments/st67ox/computer_science_programs_graduate_university/hx27wrt/ | Graduate courses or programs? As in actual classes or actual Masters program information.
But the dataset will pretty much be impossible to find because 1)if classes, change every year, especially in CS. Depending on how in-demand the course is, it'll get moved around every year.
2)Each university releases information about their grad programs on PDFs normally with no standardization. Plus you didn't mention a country so that is another factor. Acceptance rates and tuition changes literally every year and it also depends if you do labs, co-op, honours, etc. | 0.8957 | 2 |
comment
| hx1zb4p | 2r97t | datasets | false | "2022-02-15T16:52:10Z" | https://old.reddit.com/r/datasets/comments/st4jtw/how_to_scrape_twitter_tweets_of_the_accounts_i_am/hx1zb4p/ | Here is a basic tutorial using tweepy library in python. I think that it would be easy even as a new programmer to use it to do what you want. Hope it helps:
https://towardsdatascience.com/how-to-scrape-tweets-from-twitter-59287e20f0f1 | 0.8271 | 1 |
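For the follow-up question about pulling tweets from the accounts you follow, the tutorial's tweepy approach can be pointed at your own following list. A minimal sketch, assuming tweepy 4.x against the v1.1 endpoints, placeholder credentials, and the 2022-02-15 date mentioned in the thread; endpoint access depends on your Twitter API tier, so treat this as an outline rather than a guaranteed-working script:

```python
import tweepy

# Placeholder credentials -- substitute your own app keys and tokens.
auth = tweepy.OAuth1UserHandler(
    "CONSUMER_KEY", "CONSUMER_SECRET",
    "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",
)
api = tweepy.API(auth, wait_on_rate_limit=True)

# Accounts the authenticated user follows (v1.1 calls these "friends").
following = api.get_friends(count=200)

# Recent tweets from each followed account, filtered to a single day.
target_date = "2022-02-15"
rows = []
for user in following:
    for tweet in api.user_timeline(
        screen_name=user.screen_name, count=200, tweet_mode="extended"
    ):
        if tweet.created_at.strftime("%Y-%m-%d") == target_date:
            rows.append((user.screen_name, tweet.created_at, tweet.full_text))

print(f"collected {len(rows)} tweets from {len(following)} accounts")
```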
comment
| hx1u7yw | 2r97t | datasets | false | "2022-02-15T16:19:17Z" | https://old.reddit.com/r/datasets/comments/st5n7i/need_a_dataset_with_at_least_20_pairs_of_data_for/hx1u7yw/ | thank u so much | 0.3612 | 0 |
comment
| hx1sx1h | 2r97t | datasets | false | "2022-02-15T16:10:28Z" | https://old.reddit.com/r/datasets/comments/st5n7i/need_a_dataset_with_at_least_20_pairs_of_data_for/hx1sx1h/ | Here is a relatively simple one: https://www.kaggle.com/yasserh/student-marks-dataset
This is a set of students grades. the columns are Number of courses, Time spent studying per day, and overall marks for the student.
The easiest way to use Kaggle is to go to their datasets page (https://www.kaggle.com/datasets) and scroll around until you see an interesting one. Then once you click on it the page does have a lot of information but the important part is the "Data Explorer" shown in bold on the left side. That lets you peak at the data to get an idea of what it looks like. It will look like a spreadsheet with weird graphs at the top. you can ignore those, they just provide some information about how the data is distributed. | 0.8979 | 3 |
comment
| hx1qjp5 | 2r97t | datasets | false | "2022-02-15T15:54:33Z" | https://old.reddit.com/r/datasets/comments/st5n7i/need_a_dataset_with_at_least_20_pairs_of_data_for/hx1qjp5/ | https://data.cdc.gov/NCHS/Provisional-COVID-19-Death-Counts-by-Age-in-Years-/3apk-4u4f/data | null | 2 |
comment
| hx1pycq | 2r97t | datasets | false | "2022-02-15T15:50:38Z" | https://old.reddit.com/r/datasets/comments/st39oo/dataset_for_yearly_global_cases_of_covid/hx1pycq/ | Check out our world in data | 0 | 1 |
comment
| hx1nqsx | 2r97t | datasets | false | "2022-02-15T15:35:52Z" | https://old.reddit.com/r/datasets/comments/st5n7i/need_a_dataset_with_at_least_20_pairs_of_data_for/hx1nqsx/ | Hey gogibea,
I believe a `request` flair might be more appropriate for such post. Please re-consider and change the post flair if needed.
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/datasets) if you have any questions or concerns.* | 0.5574 | 1 |
comment
| hx18h66 | 2r97t | datasets | false | "2022-02-15T13:44:37Z" | https://old.reddit.com/r/datasets/comments/st39oo/dataset_for_yearly_global_cases_of_covid/hx18h66/ | Hey itslolen,
I believe a `request` flair might be more appropriate for such post. Please re-consider and change the post flair if needed.
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/datasets) if you have any questions or concerns.* | 0.5574 | 1 |
comment
| hx180x6 | 2r97t | datasets | false | "2022-02-15T13:40:57Z" | https://old.reddit.com/r/datasets/comments/sgbkzf/global_air_pollution_for_years_20052020/hx180x6/ | Hi, greatly appreciated but I've already found what I was looking for
Thank you! | 0.7039 | 1 |
comment
| hx0z3ot | 2r97t | datasets | false | "2022-02-15T12:17:53Z" | https://old.reddit.com/r/datasets/comments/ssq731/classification_dataset_for_assignment/hx0z3ot/ | Open data NYC should have what you're looking for, crime statistics in particular: https://opendata.cityofnewyork.us/ | -0.5423 | 1 |
comment
| hx0pvog | 2r97t | datasets | false | "2022-02-15T10:22:48Z" | https://old.reddit.com/r/datasets/comments/qapr0n/trees_image_dataset_needed_for_model_training/hx0pvog/ | [removed] | null | 1 |
comment
| hx0idrr | 2r97t | datasets | false | "2022-02-15T08:39:32Z" | https://old.reddit.com/r/datasets/comments/ssye4i/dataset_release_optimizing_an_optimal_diet/hx0idrr/ | Hey yamqwe,
I believe a `request` flair might be more appropriate for such post. Please re-consider and change the post flair if needed.
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/datasets) if you have any questions or concerns.* | 0.5574 | 1 |
comment
| hx07t8c | 2r97t | datasets | false | "2022-02-15T06:24:51Z" | https://old.reddit.com/r/datasets/comments/ss5vj8/fun_dataset_tiktok_trending_tracks/hx07t8c/ | Hey thanks for sharing! I hve. uestion about the track metadata- Are attributes like "danceability" and "trackID" features from spotify? | 0.8172 | 2 |
comment
| hwzlf33 | 2r97t | datasets | false | "2022-02-15T02:57:25Z" | https://old.reddit.com/r/datasets/comments/ssslrm/im_looking_for_a_marketing_data_set_of_a_globally/hwzlf33/ | Hey theanalyst24,
I believe a `request` flair might be more appropriate for such post. Please re-consider and change the post flair if needed.
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/datasets) if you have any questions or concerns.* | 0.5574 | 1 |
comment
| hwyj4xv | 2r97t | datasets | false | "2022-02-14T21:34:50Z" | https://old.reddit.com/r/datasets/comments/ss5vj8/fun_dataset_tiktok_trending_tracks/hwyj4xv/ | thanks for the share! | 0.6588 | 1 |
comment
| hwyedgv | 2r97t | datasets | false | "2022-02-14T21:02:36Z" | https://old.reddit.com/r/datasets/comments/ssfy1q/statistics_on_lottery_numbers_picked_by_hand/hwyedgv/ | You could try sending a FOIA request to your State's lottery gaming commission, to obtain datasets from previously played lotteries. Then they will freak out and have their general counsel serve you with paperwork about how the lottery is exempt from FOIA reporting etc. It's the PRNG they are using in these gaming terminals to rip off everyone, when was the last time you saw someone bubble in a lottery card at WaWa. Everything's a quick pick now, let the computer choose your numbers for you. Makes sense. | -0.0258 | 0 |
comment
| hwyag25 | 2r97t | datasets | false | "2022-02-14T20:36:09Z" | https://old.reddit.com/r/datasets/comments/somgid/number_of_ikea_stores_per_country_by_year/hwyag25/ | It wouldn't be fully accurate because it would not account for stores that have closed, but you could build a decent dataset by scraping all locations for each country from the Ikea country page, e.g. [https://www.ikea.com/us/en/stores/](https://www.ikea.com/us/en/stores/). Then for each store, scraping google results for "ikea <store location> opening date". Once you have all store opening dates, you could then loop over country by year and count rows with a date less than that year to get the format you specified above. | 0 | 1 |
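The counting step described above is simple once the scraped openings are in a table. A minimal pandas sketch follows; the stores.csv file and its country/store_name/opening_date columns are hypothetical placeholders for whatever the scraping produces, and, as the comment notes, closed stores are not accounted for.

```python
import pandas as pd

# Hypothetical input produced by the scraping described above:
# one row per store, with the country and the opening date found via search.
stores = pd.read_csv("stores.csv", parse_dates=["opening_date"])
# expected columns: country, store_name, opening_date

first_year = int(stores["opening_date"].dt.year.min())
last_year = 2022  # arbitrary cutoff for the example

# For each country and year, count stores whose opening date falls
# on or before that year (i.e. stores open in that year).
counts = pd.DataFrame([
    {
        "country": country,
        "year": year,
        "stores_open": int((group["opening_date"].dt.year <= year).sum()),
    }
    for country, group in stores.groupby("country")
    for year in range(first_year, last_year + 1)
])
print(counts.head())
```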
comment
| hwy5nww | 2r97t | datasets | false | "2022-02-14T20:03:43Z" | https://old.reddit.com/r/datasets/comments/ssfy1q/statistics_on_lottery_numbers_picked_by_hand/hwy5nww/ | I ran a calculation on this using maximum entropy to model the numbers people shun.
Theres papers on this Using Maximum Entropy to Double One's Expected Winnings in the UK National Lottery is one I remember[https://www.jstor.org/stable/2988365](https://www.jstor.org/stable/2988365)
plus.maths John Haigh had calculations on this too [https://plus.maths.org/content/uk-national-lottery-guide-beginners](https://plus.maths.org/content/uk-national-lottery-guide-beginners)
The basic stuff that i remember jumping out was
people pick numbers lower than 32 as they are birthdays and perhaps house numbers. They dont pick two or three numbers in a row as these dont look random.
&#x200B;
\*edit at the time euromillions and I think powerball had websites of numbers and amounts won by different levels of prze winners each draw. | 0.8442 | 3 |
comment
| hwy1kk5 | 2r97t | datasets | false | "2022-02-14T19:36:03Z" | https://old.reddit.com/r/datasets/comments/ssfy1q/statistics_on_lottery_numbers_picked_by_hand/hwy1kk5/ | [Skip Garibaldi](https://en.wikipedia.org/wiki/Skip_Garibaldi) has done work on the lottery, but I think the upshot is people pick dates as numbers, so while those numbers are no more likely to hit (assuming fair draws), conditional on hitting the pots are more likely to be split | 0.0387 | 3 |
comment
| hwxw8kj | 2r97t | datasets | false | "2022-02-14T19:00:17Z" | https://old.reddit.com/r/datasets/comments/ss5vj8/fun_dataset_tiktok_trending_tracks/hwxw8kj/ | Where did you pull this list of trending TikTok tracks? | 0 | 4 |
comment
| hwxurdh | 2r97t | datasets | false | "2022-02-14T18:50:33Z" | https://old.reddit.com/r/datasets/comments/ssi3ea/data_exploration_dashboard_and_survey_vitamin_d/hwxurdh/ | All survey's have to be verified by the moderators for compliance with the rules. Your survey must include a publicly accessible resource to view responses and must NOT collect personal information. We verify each survey posted for compliance which may take a few days. Once approved you will be free to re-post your survey if desired. If this post is NOT a survey and removed in error we apologize. The Automod config looks for the word "survey" and auto removes the post for moderation, we will get to approving your post as soon as possible.
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/datasets) if you have any questions or concerns.* | 0.802 | 1 |
comment
| hwxm1ue | 2r97t | datasets | false | "2022-02-14T17:50:56Z" | https://old.reddit.com/r/datasets/comments/sq7jw2/recommendation_for_venture_capital_datasets/hwxm1ue/ | The dataset you want is pretty hard to get. [Pitchbook](https://pitchbook.com/) makes a ton of money gatekeeping it haha | 0.7269 | 2 |
comment
| hwxksj0 | 2r97t | datasets | false | "2022-02-14T17:42:13Z" | https://old.reddit.com/r/datasets/comments/sq7jw2/recommendation_for_venture_capital_datasets/hwxksj0/ | My dumbass doesn't understand that sentence 😣 | -0.5889 | 1 |
comment
| hwxkpp2 | 2r97t | datasets | false | "2022-02-14T17:41:40Z" | https://old.reddit.com/r/datasets/comments/sq7jw2/recommendation_for_venture_capital_datasets/hwxkpp2/ | Thank you for your advice.
My lecturer required me to have substantial amount of data for the ML project and I am not skilled enough to build a scraper in attaining that data from crunch base. | 0.5106 | 1 |
comment
| hwxbiru | 2r97t | datasets | false | "2022-02-14T16:39:17Z" | https://old.reddit.com/r/datasets/comments/ssbs7k/dataset_of_historical_youtube_trending_videos/hwxbiru/ | you would not | 0 | 1 |
comment
| hwx912b | 2r97t | datasets | false | "2022-02-14T16:22:37Z" | https://old.reddit.com/r/datasets/comments/ssbs7k/dataset_of_historical_youtube_trending_videos/hwx912b/ | [deleted] | null | 1 |
comment
| hwx6e63 | 2r97t | datasets | false | "2022-02-14T16:04:33Z" | https://old.reddit.com/r/datasets/comments/ssbs7k/dataset_of_historical_youtube_trending_videos/hwx6e63/ | You can create a scrapper yourself, there is a tutorial on YouTube | 0.2732 | 0 |
comment
| hwwvt0l | 2r97t | datasets | false | "2022-02-14T14:47:48Z" | https://old.reddit.com/r/datasets/comments/sgbkzf/global_air_pollution_for_years_20052020/hwwvt0l/ | Hi,
are you still searching?
We have a dataset that covers the years from 2005-2019 for multiple pollutants which looks like this:
[https://app.fusionbase.com/share/41180153](https://app.fusionbase.com/share/41180153) (just a preview)
If this dataset would help you, let me know then I'll give you full access to it (for free, no sales etc. involved).
Best | 0.8885 | 2 |
comment
| hwwo7og | 2r97t | datasets | false | "2022-02-14T13:41:57Z" | https://old.reddit.com/r/datasets/comments/ss9dif/linkedin_big_five_personality_traits_dataset/hwwo7og/ | Just want to provide my opinion as psychologist. The Big Five test bases on a standardized questionnaire. For predicting traits of LinkedIn Users you would need a dataset containing LinkedIn users with already answered questionnaires.
If you‘re going to make predictions of user profiles without questionnaire answers, you‘ll be predicting anything - but not Big Five personality traits.
Keep in mind that this is a test that the person has to fill in himself. Answering the questionnaire in his name or trying to answer it with predictions is not what the test is made for. | 0.0387 | 7 |
comment
| hwwlo1r | 2r97t | datasets | false | "2022-02-14T13:18:34Z" | https://old.reddit.com/r/datasets/comments/ss5vj8/fun_dataset_tiktok_trending_tracks/hwwlo1r/ | On one side any data sharing is welcomed. On the other side, TikTok is a cancer, we shouldn't promote it anywhere. | -0.3365 | -2 |
comment
| hww4z3n | 2r97t | datasets | false | "2022-02-14T09:53:49Z" | https://old.reddit.com/r/datasets/comments/ss5vj8/fun_dataset_tiktok_trending_tracks/hww4z3n/ | Nice work! | null | 1 |
comment
| hwvznjw | 2r97t | datasets | false | "2022-02-14T08:37:34Z" | https://old.reddit.com/r/datasets/comments/ss5vj8/fun_dataset_tiktok_trending_tracks/hwvznjw/ | Hey yamqwe,
I believe a `request` flair might be more appropriate for such post. Please re-consider and change the post flair if needed.
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/datasets) if you have any questions or concerns.* | 0.5574 | 1 |
comment
| hwu25t0 | 2r97t | datasets | false | "2022-02-13T22:27:13Z" | https://old.reddit.com/r/datasets/comments/srogmq/sentiment_analysis_dataset_for_linkedin_posts_and/hwu25t0/ | Totally fine with scraping, given that the site doesn't give me too much trouble. The main thing is that I need good sentiment labels, and forum style text written by people in a professional setting
Ex:
"""
Person A: "Your performance this year was okay. You can improve in a few areas."
LABEL: Negative
"""
The issue is that negatively classified tweets are a bit too dramatic (i.e, "GUNSHOTS OUTSIDE, SO SCARED D;") for the data I need my models to generalize to.
Maybe I'll try my luck with Mturk lol | 0.1579 | 2 |
comment
| hwtvv18 | 2r97t | datasets | false | "2022-02-13T21:43:30Z" | https://old.reddit.com/r/datasets/comments/srogmq/sentiment_analysis_dataset_for_linkedin_posts_and/hwtvv18/ | I’m not sure about this one… they may have an API to connect to but their data is likely not going to be available unless you scrape it.
What’s a sample you’re looking for? I’d recommend sourcing on other forums or sites like Glassdoor or indeed or levels.fyi which may have more available - LinkedIn can be annoying to deal with | 0.3546 | 2 |
comment
| hwt9x4w | 2r97t | datasets | false | "2022-02-13T19:17:49Z" | https://old.reddit.com/r/datasets/comments/r9kghy/could_someone_with_baidu_account_please_download/hwt9x4w/ | Pm | null | 1 |
comment
| hwt9vxw | 2r97t | datasets | false | "2022-02-13T19:17:36Z" | https://old.reddit.com/r/datasets/comments/qvx0x9/need_a_chinese_resident_or_someone_possessing_a/hwt9vxw/ | Pm | null | 1 |
comment
| hwt0j8d | 2r97t | datasets | false | "2022-02-13T18:16:21Z" | https://old.reddit.com/r/datasets/comments/srogmq/sentiment_analysis_dataset_for_linkedin_posts_and/hwt0j8d/ | I've seen this one floating around. Is there a version that has human labeled sentiment scores? | 0 | 1 |
comment
| hwsxlt0 | 2r97t | datasets | false | "2022-02-13T17:57:09Z" | https://old.reddit.com/r/datasets/comments/srok2q/pinned_insect_digitisation_from_the_natural/hwsxlt0/ | They did an AMa here a while ago if digitising nature is of interest to you https://www.reddit.com/r/datasets/comments/m0hr81/we\_are\_digitisers\_at\_the\_natural\_history\_museum/ | 0.4588 | 1 |
comment
| hwsxboj | 2r97t | datasets | false | "2022-02-13T17:55:18Z" | https://old.reddit.com/r/datasets/comments/srogmq/sentiment_analysis_dataset_for_linkedin_posts_and/hwsxboj/ | Behold at the enron emails dataset, t might beest what thou art looking f'r
***
^(I am a bot and I swapp'd some of thy words with Shakespeare words.)
Commands: `!ShakespeareInsult`, `!fordo`, `!optout` | 0 | -1 |
comment
| hwsxaez | 2r97t | datasets | false | "2022-02-13T17:55:04Z" | https://old.reddit.com/r/datasets/comments/srogmq/sentiment_analysis_dataset_for_linkedin_posts_and/hwsxaez/ | Look at the Enron emails dataset, it might be what you are looking for. | 0 | 2 |
comment
| hwsry31 | 2r97t | datasets | false | "2022-02-13T17:19:59Z" | https://old.reddit.com/r/datasets/comments/rtxpp1/fuel_consumption_datasets_for_analysis/hwsry31/ | thanks a ton this woks for now | 0.4404 | 1 |
comment
| hws9769 | 2r97t | datasets | false | "2022-02-13T15:10:55Z" | https://old.reddit.com/r/datasets/comments/srkp9m/dataset_predicting_student_performance/hws9769/ | Hey yamqwe,
I believe a `request` flair might be more appropriate for such post. Please re-consider and change the post flair if needed.
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/datasets) if you have any questions or concerns.* | 0.5574 | 1 |
comment
| hwr78lc | 2r97t | datasets | false | "2022-02-13T08:15:17Z" | https://old.reddit.com/r/datasets/comments/srcdi3/dataset_india_power_infrastructure_data/hwr78lc/ | Yes, It's public data. The data is taken from the RBI Annual Publication "HANDBOOK OF STATISTICS ON INDIAN STATES". I manually downloaded 4 files for the 4 metrics/columns in the data. Then used some python to transform the data and melted it to a long format rather than a wide format. | 0.4019 | 2 |
comment
| hwr3sen | 2r97t | datasets | false | "2022-02-13T07:32:40Z" | https://old.reddit.com/r/datasets/comments/srcdi3/dataset_india_power_infrastructure_data/hwr3sen/ | Is this public data? Did u scrape it? | 0 | 1 |
comment
| hwp4a9i | 2r97t | datasets | false | "2022-02-12T21:43:18Z" | https://old.reddit.com/r/datasets/comments/sqx538/nfl_playerteam_statistics_datasheets_galore/hwp4a9i/ | Thanks! | null | 2 |
comment
| hwp49eu | 2r97t | datasets | false | "2022-02-12T21:43:08Z" | https://old.reddit.com/r/datasets/comments/sqx538/nfl_playerteam_statistics_datasheets_galore/hwp49eu/ | NFL.com | null | 2 |
comment
| hwp00my | 2r97t | datasets | false | "2022-02-12T21:14:31Z" | https://old.reddit.com/r/datasets/comments/sqx538/nfl_playerteam_statistics_datasheets_galore/hwp00my/ | [removed] | null | 1 |
comment
| hwomwyt | 2r97t | datasets | false | "2022-02-12T19:46:05Z" | https://old.reddit.com/r/datasets/comments/sqx538/nfl_playerteam_statistics_datasheets_galore/hwomwyt/ | Whoa, cool! Thanks for sharing! What was your source for stats? | 0.8217 | 1 |
comment
| hwo6sky | 2r97t | datasets | false | "2022-02-12T17:55:48Z" | https://old.reddit.com/r/datasets/comments/sqx538/nfl_playerteam_statistics_datasheets_galore/hwo6sky/ | Hey GucciButtcheeks,
I believe a `request` flair might be more appropriate for such post. Please re-consider and change the post flair if needed.
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/datasets) if you have any questions or concerns.* | 0.5574 | 1 |
comment
| hwmjktj | 2r97t | datasets | false | "2022-02-12T09:29:45Z" | https://old.reddit.com/r/datasets/comments/sqn19n/new_course_on_tensorflow_and_keras_by_opencv/hwmjktj/ | Could you give a bit more details about how this might be of interest to /r/datasets people? | 0.4588 | 1 |
comment
| hwmb7qu | 2r97t | datasets | false | "2022-02-12T07:51:51Z" | https://old.reddit.com/r/datasets/comments/sqma7g/dataset_the_generic_conspiracist_beliefs_scale/hwmb7qu/ | Hey yamqwe,
I believe a `request` flair might be more appropriate for such post. Please re-consider and change the post flair if needed.
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/datasets) if you have any questions or concerns.* | 0.5574 | 1 |
comment
| hwm8bjx | 2r97t | datasets | false | "2022-02-12T07:20:26Z" | https://old.reddit.com/r/datasets/comments/sq7jw2/recommendation_for_venture_capital_datasets/hwm8bjx/ | Crunchbase? | null | 2 |
comment
| hwl03de | 2r97t | datasets | false | "2022-02-12T00:57:11Z" | https://old.reddit.com/r/datasets/comments/spw8jb/skin_types_normal_dry_oily_dataset/hwl03de/ | If you found an entry with positive dry and oily, that's me. | 0.5574 | 1 |
comment
| hwkesxy | 2r97t | datasets | false | "2022-02-11T22:28:04Z" | https://old.reddit.com/r/datasets/comments/sq35cw/dataset_on_viewership_numbers_nfl_request/hwkesxy/ | [deleted] | null | 1 |
comment
| hwkeejt | 2r97t | datasets | false | "2022-02-11T22:25:30Z" | https://old.reddit.com/r/datasets/comments/sq35cw/dataset_on_viewership_numbers_nfl_request/hwkeejt/ | I'd doubt there's anything worth looking at here. Football isn't played in a spreadsheet /s | -0.3891 | 1 |
comment
| hwkdnum | 2r97t | datasets | false | "2022-02-11T22:20:46Z" | https://old.reddit.com/r/datasets/comments/sq7jw2/recommendation_for_venture_capital_datasets/hwkdnum/ | so begins the movement to open source pitchbook | 0 | 3 |
comment
| hwk63dj | 2r97t | datasets | false | "2022-02-11T21:32:35Z" | https://old.reddit.com/r/datasets/comments/sq2cnt/any_dataset_related_to_valorant_game/hwk63dj/ | the [link](https://www.reddit.com/r/VALORANT/comments/k1nii6/valorant_dataset/?utm_source=share&utm_medium=web2x&context=3) has the same issue on valorant subreddit. there are some recommendations | 0 | 1 |
comment
| hwjmwv9 | 2r97t | datasets | false | "2022-02-11T19:33:41Z" | https://old.reddit.com/r/datasets/comments/sq7jw2/recommendation_for_venture_capital_datasets/hwjmwv9/ | Hey LightSithLord,
I believe a `request` flair might be more appropriate for such post. Please re-consider and change the post flair if needed.
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/datasets) if you have any questions or concerns.* | 0.5574 | 1 |
comment
| hwhqf39 | 2r97t | datasets | false | "2022-02-11T11:27:51Z" | https://old.reddit.com/r/datasets/comments/osjecd/spss_data_analysis_help_for_masters_dissertation/hwhqf39/ | Hey there,
I don't know much about [SPSS data analysis](https://silverlakeconsult.com/spss-data-analysis/). But I heard about this tool
that this is the best tool for data analysis and experts will advise
others to use this tool. In my opinion, if you really wanna help with
these tools I suggest you consult with professionals like **Silver Lake Consulting** and **SPSS-tutor**. They will assist you there best related to these issues. | 0.9675 | 1 |
comment
| hwglcby | 2r97t | datasets | false | "2022-02-11T03:44:42Z" | https://old.reddit.com/r/datasets/comments/spfxir/what_are_some_nice_survival_analysis_datasets_for/hwglcby/ | Also i think Case Western University has some failure data https://engineering.case.edu/bearingdatacenter | -0.5106 | 2 |
comment
| hwgl0m5 | 2r97t | datasets | false | "2022-02-11T03:42:11Z" | https://old.reddit.com/r/datasets/comments/spfxir/what_are_some_nice_survival_analysis_datasets_for/hwgl0m5/ | I have used the predictive maintenance dataset for my phd. It’s a well crafted dataset | 0.2732 | 2 |
comment
| hwgf6rf | 2r97t | datasets | false | "2022-02-11T02:57:52Z" | https://old.reddit.com/r/datasets/comments/spfxir/what_are_some_nice_survival_analysis_datasets_for/hwgf6rf/ | Thanks. These links are very useful. Sorry - i dont know any such dataset and can't contribute. | 0.672 | 2 |
comment
| hwg8x3f | 2r97t | datasets | false | "2022-02-11T02:10:44Z" | https://old.reddit.com/r/datasets/comments/siukmm/precision_health_data_set_for_rstudio/hwg8x3f/ | Check out this package: https://cran.r-project.org/web/packages/MLDataR/vignettes/MLDataR.html | 0 | 1 |
comment
| hwfsmml | 2r97t | datasets | false | "2022-02-11T00:09:02Z" | https://old.reddit.com/r/datasets/comments/spfxir/what_are_some_nice_survival_analysis_datasets_for/hwfsmml/ | I've actually contributed to the pcoe datasets but it was more along the lines of IoT data. Not a useful comment, just glad to see the pcoe reference | 0.2242 | 4 |
comment
| hwfijjr | 2r97t | datasets | false | "2022-02-10T22:57:55Z" | https://old.reddit.com/r/datasets/comments/somgid/number_of_ikea_stores_per_country_by_year/hwfijjr/ | It's just a request. I have an interesting idea that I need it for. | 0.4019 | 1 |
comment
| hwfhzy5 | 2r97t | datasets | false | "2022-02-10T22:54:06Z" | https://old.reddit.com/r/datasets/comments/hmsk2w/shared_crunchbase_pro_or_group_buy/hwfhzy5/ | i want to join in this | 0.3612 | 1 |
comment
| hwetumb | 2r97t | datasets | false | "2022-02-10T20:01:52Z" | https://old.reddit.com/r/datasets/comments/somgid/number_of_ikea_stores_per_country_by_year/hwetumb/ | Bro why would anyone have this | 0 | 1 |
comment
| hwejpuc | 2r97t | datasets | false | "2022-02-10T19:01:13Z" | https://old.reddit.com/r/datasets/comments/8qj7ej/bureau_of_labor_statistics_real_earnings_summary/hwejpuc/ | This is the best tl;dr I could make, [original](https://www.bls.gov/news.release/realer.nr0.htm) reduced by 80%. (I'm a bot)
*****
> Real average weekly earnings decreased 0.5 percent over the month due to the change in real average hourly earnings combined with a decrease of 0.6 percent in the average workweek.
> The change in real average hourly earnings combined with a decrease of 1.4 percent in the average workweek resulted in a 3.1-percent decrease in real average weekly earnings over this period.
> Real average weekly earnings decreased 0.6 percent over the month due to the unchanged real average hourly earnings being combined with a decrease of 0.6 percent in average weekly hours.
*****
[**Extended Summary**](http://np.reddit.com/r/autotldr/comments/spe4ar/bls_reports_that_real_average_hourly_earnings/) | [FAQ](http://np.reddit.com/r/autotldr/comments/31b9fm/faq_autotldr_bot/ "Version 2.02, ~622662 tl;drs so far.") | [Feedback](http://np.reddit.com/message/compose?to=%23autotldr "PM's and comments are monitored, constructive feedback is welcome.") | *Top* *keywords*: **Earners**^#1 **average**^#2 **REAL**^#3 **percent**^#4 **hourly**^#5 | 0.8402 | 1 |
comment
| hwee6ze | 2r97t | datasets | false | "2022-02-10T18:28:20Z" | https://old.reddit.com/r/datasets/comments/spd1f6/automotive_repair_dataset_for_consumption/hwee6ze/ | Bad bot.
> report me for spam, I submit unsolicited comments
sure, no problem. | -0.3489 | 1 |
comment
| hweccwe | 2r97t | datasets | false | "2022-02-10T18:16:56Z" | https://old.reddit.com/r/datasets/comments/spd1f6/automotive_repair_dataset_for_consumption/hweccwe/ | Thank you, roadwaywarrior, for voting on AutoModerator.
This bot wants to find the best and worst bots on Reddit. [You can view results here](https://botrank.pastimes.eu/).
***
^(Even if I don't reply to your comment, I'm still listening for votes. Check the webpage to see if your vote registered!) | 0.4389 | 1 |
comment
| hwecblx | 2r97t | datasets | false | "2022-02-10T18:16:43Z" | https://old.reddit.com/r/datasets/comments/spd1f6/automotive_repair_dataset_for_consumption/hwecblx/ | Good bot | null | 1 |
comment
| hwec412 | 2r97t | datasets | false | "2022-02-10T18:15:25Z" | https://old.reddit.com/r/datasets/comments/spd1f6/automotive_repair_dataset_for_consumption/hwec412/ | Hey roadwaywarrior,
I believe a `request` flair might be more appropriate for such post. Please re-consider and change the post flair if needed.
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/datasets) if you have any questions or concerns.* | 0.5574 | 1 |
comment
| hwe68lg | 2r97t | datasets | false | "2022-02-10T17:39:49Z" | https://old.reddit.com/r/datasets/comments/soyeab/q_anyone_have_access_to_statista_and_help_out_a/hwe68lg/ | I can't dear
I need someone to send | -0.2924 | -3 |
comment
| hwe6661 | 2r97t | datasets | false | "2022-02-10T17:39:23Z" | https://old.reddit.com/r/datasets/comments/soyeab/q_anyone_have_access_to_statista_and_help_out_a/hwe6661/ | They couldn't I need someone to send | 0 | -2 |
comment
| hwdjl45 | 2r97t | datasets | false | "2022-02-10T15:20:45Z" | https://old.reddit.com/r/datasets/comments/soyeab/q_anyone_have_access_to_statista_and_help_out_a/hwdjl45/ | Contact you library. They might be able to help you. | 0.4019 | 4 |
comment
| hwdha4i | 2r97t | datasets | false | "2022-02-10T15:05:47Z" | https://old.reddit.com/r/datasets/comments/soyeab/q_anyone_have_access_to_statista_and_help_out_a/hwdha4i/ | You should be able to access them with your university account. Have you tried that? | 0 | 3 |
comment
| hwc9pq8 | 2r97t | datasets | false | "2022-02-10T07:32:01Z" | https://old.reddit.com/r/datasets/comments/soej64/dataset_of_fun_or_interesting_facts/hwc9pq8/ | Pretty much verbatim. The boss wanted me to add fun facts etc to a current chat bot | 0.7579 | 1 |
comment
| hw9ypt7 | 2r97t | datasets | false | "2022-02-09T21:04:10Z" | https://old.reddit.com/r/datasets/comments/soej64/dataset_of_fun_or_interesting_facts/hw9ypt7/ | Are you going to spit out the facts verbatim, or are you using a GPT-like text generator to generate "fun fact"-style nonsense? | 0.1531 | 0 |
comment
| hw9faas | 2r97t | datasets | false | "2022-02-09T19:04:05Z" | https://old.reddit.com/r/datasets/comments/sogrbk/predicting_student_performance_dataset/hw9faas/ | Checkk out this student performanc dataset
https://archive.ics.uci.edu/ml/datasets/student+performance | 0 | 1 |
comment
| hw8ul1s | 2r97t | datasets | false | "2022-02-09T16:57:50Z" | https://old.reddit.com/r/datasets/comments/sogrbk/predicting_student_performance_dataset/hw8ul1s/ | You could try rate my professor, there are some datasets circling with that and students sometimes post their grade as part of it | 0 | 1 |
comment
| hw8ohav | 2r97t | datasets | false | "2022-02-09T16:20:10Z" | https://old.reddit.com/r/datasets/comments/sogrbk/predicting_student_performance_dataset/hw8ohav/ | Thanks for the advice... would you know of any website where students voluntarily post their academic profiles? Something like LinkedIn but with GPA and grades maybe? | 0.4696 | 1 |
comment
| hw8neew | 2r97t | datasets | false | "2022-02-09T16:13:16Z" | https://old.reddit.com/r/datasets/comments/snfr2c/lets_create_a_data_sharing_community/hw8neew/ | I will definitely consider it. How do you feel about existing portals such as data.world? | 0.4019 | 1 |
comment
| hw8l6dn | 2r97t | datasets | false | "2022-02-09T15:59:01Z" | https://old.reddit.com/r/datasets/comments/sogrbk/predicting_student_performance_dataset/hw8l6dn/ | Unfortunately at least in the US this data is protected by FERPA laws and you are unlikely to find a public dataset with it.
If you are in a school you may be able to work with the school to do the project but it’s not easy, I tried with my university and was never able to get the data | -0.4327 | 2 |
comment
| hw8b9mv | 2r97t | datasets | false | "2022-02-09T14:53:20Z" | https://old.reddit.com/r/datasets/comments/soej64/dataset_of_fun_or_interesting_facts/hw8b9mv/ | Thank you! I'll check it out. I mainly need it as a fact generator for a chat bot. A fact a day kinda thing. | 0.4199 | 2 |
comment
| hw87iol | 2r97t | datasets | false | "2022-02-09T14:26:58Z" | https://old.reddit.com/r/datasets/comments/soej64/dataset_of_fun_or_interesting_facts/hw87iol/ | There's a bunch of interesting fact-type lists and datasets at [rlist.io](https://rlist.io) , you might wanna check out. All kinds of stuff there, suggest you do a search for what you need for your work. | 0.4019 | 4 |