---
license: cc-by-nc-4.0
task_categories:
- object-detection
tags:
- Defect Detection
- Anomaly Detection
- Instance Segmentation
pretty_name: VISION Datasets
size_categories:
- 1K
---

# Dataset Card for VISION Datasets

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Information](#dataset-information)
  - [Datasets Overview](#datasets-overview)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Building Dataset Splits](#building-dataset-splits)
- [Additional Information](#additional-information)
  - [License](#license)
  - [Disclaimer](#disclaimer)
  - [Citation](#citation)

## Dataset Description

- **Homepage:** [VISION homepage](https://vision-based-industrial-inspection.github.io/cvpr-2023/)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [VISION email](mailto:vision.based.inspection+datasets@gmail.com)

### Dataset Summary

The **VISION Datasets** are a collection of 14 industrial inspection datasets, designed to explore the unique challenges of vision-based industrial inspection. These datasets are carefully curated from [Roboflow](https://roboflow.com) and cover a wide range of manufacturing processes, materials, and industries. To further enable precise defect segmentation, we annotate each dataset with polygon labels based on the provided bounding box labels.

### Supported Tasks and Leaderboards

We currently host two prized challenges on the VISION Datasets:

- The VISION [Track 1 Challenge](https://bit.ly/VISION_Track_1) aims to evaluate solutions that can effectively learn with limited labeled data in combination with unlabeled data, across diverse images from different industries and contexts.
- The VISION [Track 2 Challenge](https://bit.ly/VISION_Track_2) aims to challenge algorithmic solutions to generate synthetic data that will help improve model performance given only limited labeled data.

Please check out our [workshop website](https://vision-based-industrial-inspection.github.io/cvpr-2023/) and competition pages for further details.

## Dataset Information

### Datasets Overview

The VISION Datasets consist of the following 14 individual datasets:

- Cable
- Capacitor
- Casting
- Console
- Cylinder
- Electronics
- Groove
- Hemisphere
- Lens
- PCB_1
- PCB_2
- Ring
- Screw
- Wood

### Data Splits

Each dataset contains three folders: `train`, `val`, and `inference`. The `train` and `val` folders contain the training and validation data, respectively. The `inference` folder contains both the testing data and the unused data for generating submissions to our evaluation platform. The `_annotations.coco.json` files contain the [COCO format](https://cocodataset.org/#format-data) annotations for each dataset. We will release more information on the testing data as the competitions conclude.

Each dataset has the following structure:

```yaml
├── dataset_name/
│   ├── train/
│   │   ├── _annotations.coco.json  # COCO format annotation
│   │   ├── 000001.png              # Images
│   │   ├── 000002.png
│   │   ├── ...
│   ├── val/
│   │   ├── _annotations.coco.json  # COCO format annotation
│   │   ├── xxxxxx.png              # Images
│   │   ├── ...
│   ├── inference/
│   │   ├── _annotations.coco.json  # COCO format annotation with unlabeled image list only
│   │   ├── xxxxxx.png              # Images
│   │   ├── ...
```
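As an illustration of working with these files, the sketch below parses a COCO-style annotation payload and groups defect annotations by image. The JSON literal here is made up for demonstration; in practice you would load a real file such as `Cable/train/_annotations.coco.json`, whose exact category names and image sizes differ per dataset.

```python
import json
from collections import defaultdict

# Minimal made-up payload mirroring the layout of the _annotations.coco.json
# files; a real file would be opened with json.load() instead.
coco = json.loads("""
{
  "images": [
    {"id": 1, "file_name": "000001.png", "width": 1024, "height": 1024},
    {"id": 2, "file_name": "000002.png", "width": 1024, "height": 1024}
  ],
  "categories": [{"id": 1, "name": "defect"}],
  "annotations": [
    {"id": 10, "image_id": 1, "category_id": 1,
     "bbox": [100, 120, 30, 40],
     "segmentation": [[100, 120, 130, 120, 130, 160, 100, 160]]}
  ]
}
""")

# Map image ids to file names.
images = {img["id"]: img["file_name"] for img in coco["images"]}

# Group annotations by image; each annotation carries both a bounding box
# ("bbox", [x, y, w, h]) and a polygon mask ("segmentation").
anns_per_image = defaultdict(list)
for ann in coco["annotations"]:
    anns_per_image[ann["image_id"]].append(ann)

for image_id, file_name in images.items():
    print(f"{file_name}: {len(anns_per_image[image_id])} annotated defect(s)")
```

Note that in the `inference` folders the annotation file lists only the unlabeled images, so `anns_per_image` would remain empty there.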
## Dataset Creation

### Curation Rationale

Our primary goal is to encourage further alignment between academic research and production practices in vision-based industrial inspection.

Because we aim to remain faithful to naturally occurring label challenges, and because it is difficult to distinguish unintentional labeling oversights from domain-specific judgments without the manufacturers' specification sheets, we refrain from modifying the original defect decisions. To enable precise defect detection even with these label limitations, we provide refined segmentation masks for each defect indicated by the original bounding boxes.

### Building Dataset Splits

To ensure the benchmark faithfully reflects the performance of algorithms, we need to minimize leakage across the train, validation, and testing data. Due to their crowd-sourced nature, the original dataset splits are not always guaranteed to be free of leakage. We therefore designed a process to re-split the datasets with specific considerations for industrial defect detection.

Defect detection datasets have distinct characteristics, including but not limited to:

- A stark contrast between large image sizes and small defect sizes.
- Highly aligned non-defective images that may look like duplicates, but are necessary to represent the natural distribution and variation needed to properly assess the false detection rate.

Naively deduplicating with image-level embeddings or hashes would easily drown out small defects and treat distinct non-defective images as duplicates. Therefore, we first deduplicate only images with identical byte contents and set the images without defect annotations aside. For images with defect annotations, we want to reduce leakage at the defect level: we train a self-supervised similarity model on the defect regions and model the similarity between two images as the maximum pairwise similarity between the defects on each image. Finally, we perform connected component analysis on the image similarity graph and randomly assign connected components to dataset splits in a stratified manner.
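The final two steps above (connected components over the similarity graph, then component-level split assignment) can be sketched as follows. This is a simplified illustration, not the released pipeline: the similarity edges are assumed to be precomputed by the defect-similarity model, and the greedy size-balancing stands in for the paper's stratified assignment.

```python
import random
from collections import defaultdict


def connected_components(n_images, similar_pairs):
    """Union-find over images; an edge links two images whose defects the
    similarity model considers near-duplicates (edges precomputed upstream)."""
    parent = list(range(n_images))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in similar_pairs:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    groups = defaultdict(list)
    for i in range(n_images):
        groups[find(i)].append(i)
    return list(groups.values())


def assign_splits(components, ratios=(0.7, 0.15, 0.15), seed=0):
    """Assign whole components to train/val/test so that similar defects
    never straddle two splits. Greedy balancing toward the target ratios;
    hypothetical simplification of stratified assignment."""
    rng = random.Random(seed)
    components = components[:]
    rng.shuffle(components)
    splits = {"train": [], "val": [], "test": []}
    names = list(splits)
    total = sum(len(c) for c in components)
    targets = [r * total for r in ratios]
    for comp in components:
        # Place each component in the split furthest below its target size.
        deficits = [targets[i] - len(splits[n]) for i, n in enumerate(names)]
        splits[names[deficits.index(max(deficits))]].extend(comp)
    return splits
```

For example, with five images and similarity edges `(0, 1)` and `(1, 2)`, images 0-2 form one component and must land in the same split, while images 3 and 4 can be assigned independently.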
To discourage manual exploitation during the data competition, the discarded images are provided alongside the test split data as the inference data from which participants generate their submissions. However, testing performance is evaluated exclusively on the test split data. Further details will be provided in a paper to be released soon.

## Additional Information

### License

The provided polygon annotations are licensed under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) license. All original dataset assets remain under their original dataset licenses.

### Disclaimer

While we believe the terms of the original datasets permit our use and publication herein, we do not make any representations as to the license terms of the original datasets. Please follow the license terms of those datasets if you would like to use them.

### Citation

If you apply this dataset to any project or research, please cite our paper:

```
@article{vision-datasets,
  title   = {VISION Datasets: A Benchmark for Vision-based InduStrial InspectiON},
  author  = {Haoping Bai and Shancong Mou and Tatiana Likhomanenko and Ramazan Gokberk Cinbis and Oncel Tuzel and Ping Huang and Jiulong Shan and Jianjun Shi and Meng Cao},
  journal = {arXiv preprint arXiv:2306.07890},
  year    = {2023},
}
```