---
license: cc-by-sa-4.0
task_categories:
- reinforcement-learning
- robotics
language:
- en
annotations_creators:
- experts-generated
tags:
- self-driving
- robotics navigation
pretty_name: FrodoBots 2K Dataset
---
## Dataset Description
- Homepage: https://www.frodobots.ai/
- Hours of tele-operation: ~2,000 Hrs
- Dataset Size: 1TB
- Point of Contact: [email protected]
# FrodoBots 2K Dataset
The FrodoBots 2K Dataset is a diverse collection of camera footage, GPS, IMU, audio recordings & human control data collected from ~2,000 hours of tele-operated sidewalk robots driving in 10+ cities.
This dataset is collected from Earth Rovers, a global scavenger hunt "Drive to Earn" game developed by FrodoBots Lab.
Please join our Discord for discussions with fellow researchers/makers!
If you're interested in contributing driving data, you can buy your own unit(s) from our online shop (US$299 per unit) and start driving around your neighborhood (& earn in-game points in the process)!
If you're interested in testing out your AI models on our existing fleet of Earth Rovers in various cities or your own Earth Rover, feel free to DM Michael Cho on Twitter/X to gain access to our Remote Access SDK.
If you're interested in playing the game (ie. remotely driving an Earth Rover), you may join as a gamer at Earth Rovers School.
## Dataset Summary
There are 7 types of data that are associated with a typical Earth Rovers drive, as follows:
Control data: Gamer's control inputs captured at a frequency of 10Hz (ideal), as well as RPM (revolutions per minute) readings for each of the robot's 4 wheels.
GPS data: Latitude, longitude, and timestamp info collected during the robot drives at a frequency of 1Hz.
IMU (Inertial Measurement Unit) data: 9-DOF sensor data, including acceleration (captured at 100Hz), gyroscope (captured at 1Hz), and magnetometer info (captured at 1Hz), along with timestamp data.
Rear camera video: Video footage captured by the robot's rear-facing camera at a typical frame rate of 20 FPS with a resolution of 540x360.
Front camera video: Video footage captured by the robot's front-facing camera at a typical frame rate of 20 FPS with a resolution of 1024x576.
Microphone: Audio recordings captured by the robot's microphone, with a sample rate of 16000Hz, channel 1.
Speaker: Audio recordings of the robot's speaker output (ie. gamer's microphone), also with a sample rate of 16000Hz, channel 1.
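The streams above are captured at different rates (10Hz control, 1Hz GPS, 100Hz acceleration), so most analyses start by aligning them on timestamps. Below is a minimal sketch of nearest-timestamp alignment using synthetic data; the function and variable names are illustrative, not the dataset's actual column names.

```python
import bisect

def align_nearest(base_ts, other_ts, other_vals):
    """For each timestamp in base_ts, pick the sample from
    (other_ts, other_vals) whose timestamp is closest.
    Assumes other_ts is sorted ascending."""
    aligned = []
    for t in base_ts:
        i = bisect.bisect_left(other_ts, t)
        # Compare the neighbours on either side of the insertion point.
        candidates = []
        if i > 0:
            candidates.append(i - 1)
        if i < len(other_ts):
            candidates.append(i)
        best = min(candidates, key=lambda j: abs(other_ts[j] - t))
        aligned.append(other_vals[best])
    return aligned

# Synthetic example: 10Hz control timestamps, 1Hz GPS fixes.
control_ts = [x / 10 for x in range(30)]   # 0.0, 0.1, ... 2.9 s
gps_ts = [0.0, 1.0, 2.0]
gps_fix = [(22.280, 114.160), (22.281, 114.161), (22.282, 114.162)]

gps_per_control = align_nearest(control_ts, gps_ts, gps_fix)
```

The same helper works for pairing the 100Hz accelerometer stream with the 10Hz control stream, just with the roles swapped.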
Note: As of 12 May 2024, ~1,300 hrs are ready for download. The remaining ~700 hours are still undergoing data cleaning and will be available for download by end May or early June.
## Video Walkthrough
Our cofounder, Michael Cho, walks through the core components of the dataset and discusses latency issues surrounding the data collection.
In total, there were 9,000+ individual driving sessions recorded. The chart below shows the distribution of individual driving session duration.
These drives were done with Earth Rovers in 10+ cities. The chart below shows the distribution of recorded driving duration in the various cities.
## About FrodoBots
FrodoBots is a project aiming to crowdsource the world's largest real-world teleoperation datasets with robotic gaming.
We have 3 core theses:
- Robotic gaming can be a thing: It is possible to create fun gaming experiences where gamers control robots remotely to complete missions in real life.
- Affordable robots are just as useful in collecting data for Embodied AI research: We design our robots to be like "toys", so that as many people as possible can afford to buy one and play with it.
- DePIN can scale this project: We can create a global community of robot hardware owners/operators by incentivizing them with well-designed tokenomics, taking best practices from other DePIN (Decentralized Physical Infrastructure Network) projects.
## Motivations for open-sourcing the dataset
The team behind FrodoBots is focused on building a real-world video gaming experience using real-life robots (we call it "robotic gaming"). A by-product of gamers playing the game is the accompanying dataset that's generated.
By sharing this dataset with the research community, we hope to see new innovations that can (1) take advantage of this dataset & (2) leverage our existing fleet of community-sourced robots (via our Remote Access SDK) as a platform for testing SOTA Embodied AI models in the real world.
## Help needed!
We are a very small team with limited experience in downstream data pipelines and AI research. One thing we do have is lots of real-world data.
Please reach out or join our Discord if you have any feedback or would like to contribute to our efforts, especially on the following:
- Data cleaning: We have way more data than what we've open-sourced in this dataset, primarily because we struggle with various data cleaning tasks.
- Data analytics: We have done a couple of charts, but that's about it.
- Data annotations: We have open-sourced the raw files, but it'll be great to work with teams with data annotation know-how to further augment the current dataset.
- Data visualization: A lot more can be done to visualize some of these raw inputs (eg. layering timestamped data on top of the video footage).
- Data anonymization: We'd like to build various data anonymization features (eg. face blurring) into future releases. We attempted this but struggled with downstream data manipulation issues (eg. dropped frames, lower video resolution, etc).
- Data streaming & hosting: If this project continues to scale, we'll have millions of hours of such data in the future. We'll need help with storage/streaming.
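On the visualization item above, a natural first step is mapping each video frame to the sensor sample recorded closest in time, so the sample can be drawn onto that frame. A small sketch, assuming a known recording start time and the nominal 20 FPS frame rate (actual frame timing may drift due to the latency issues discussed in the walkthrough; the function name is ours, not from the dataset tooling):

```python
def frame_to_sample_index(frame_idx, fps, start_ts, sample_ts):
    """Return the index of the sensor sample nearest this frame's timestamp."""
    frame_ts = start_ts + frame_idx / fps
    # min() breaks ties toward the earlier sample.
    return min(range(len(sample_ts)), key=lambda i: abs(sample_ts[i] - frame_ts))

# 20 FPS video starting at t = 100.0 s; 1Hz GPS samples.
gps_ts = [100.0, 101.0, 102.0, 103.0]
idx = frame_to_sample_index(30, fps=20, start_ts=100.0, sample_ts=gps_ts)
# frame 30 lands at t = 101.5 s, equidistant from 101.0 and 102.0
```

With this mapping in hand, overlaying the data is a matter of drawing text or markers on each decoded frame with any video library.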
## Download
Download the FrodoBots 2K Dataset using the link in this csv file.
## Helper code
We've provided a helpercode.ipynb file that will hopefully serve as a quick-start for researchers to play around with the dataset.
## Contributions
The team at FrodoBots Lab created this dataset, including Michael Cho, Sam Cho, Aaron Tung, Niresh Dravin & Santiago Pravisani.