---
annotations_creators: []
language: en
size_categories:
- 10K<n<100K
task_categories:
- image-classification
task_ids: []
pretty_name: StanfordDogsImbalanced
tags:
- fiftyone
- image
- image-classification
dataset_summary: '
![image/png](dataset_preview.jpg)
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 19060 samples.
## Installation
If you haven''t already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh
# Load the dataset
# Note: other available arguments include ''max_samples'', etc
dataset = fouh.load_from_hub("Voxel51/Stanford-Dogs-Imbalanced")
# Launch the App
session = fo.launch_app(dataset)
```
'
---
# Dataset Card for StanfordDogsImbalanced
<!-- Provide a quick summary of the dataset. -->
![image/png](dataset_preview.jpg)
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 19060 samples.
## Installation
If you haven't already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh
# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = fouh.load_from_hub("Voxel51/Stanford-Dogs-Imbalanced")
# Launch the App
session = fo.launch_app(dataset)
```
## Dataset Details
### Dataset Description
An imbalanced version of the [Stanford Dogs dataset](http://vision.stanford.edu/aditya86/ImageNetDogs/) designed for testing class imbalance mitigation techniques, including but not limited to synthetic data generation.
This version of the dataset was constructed by randomly splitting the original dataset into train, val, and test sets with a 60/20/20 split. For 15 randomly chosen classes, we then removed all but 10 of the training examples.
```python
import random

import fiftyone.utils.random as four
from fiftyone import ViewField as F

# Split the dataset into train, val, and test sets (60/20/20) via sample tags
four.random_split(dataset, {"train": 0.6, "val": 0.2, "test": 0.2})
train = dataset.match_tags("train")

# Randomly choose the classes to limit
classes = train.distinct("ground_truth.label")
classes_to_limit = random.sample(classes, 15)

# Keep only 10 training samples for each selected class
for class_name in classes_to_limit:
    class_samples = train.match(F("ground_truth.label") == class_name)
    keep_ids = class_samples.take(10).values("id")
    samples_to_remove = class_samples.exclude(keep_ids)
    dataset.delete_samples(samples_to_remove)
```
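Note that the pruning is applied only to the train split, so the val and test splits retain their original class coverage for the affected breeds.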
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Paper:** [More Information Needed]
- **Homepage:** [More Information Needed]
## Uses
- Fine-grained visual classification
- Class imbalance mitigation strategies
<!-- Address questions around how the dataset is intended to be used. -->
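As a quick sanity check before experimenting with imbalance mitigation, the per-class sample counts can be inspected with FiftyOne's `count_values()` aggregation. This is a minimal sketch, assuming the dataset has been loaded as shown in the Usage section above:
```python
import fiftyone.utils.huggingface as fouh

dataset = fouh.load_from_hub("Voxel51/Stanford-Dogs-Imbalanced")

# Count samples per class and print the rarest classes first
counts = dataset.count_values("ground_truth.label")
for label, count in sorted(counts.items(), key=lambda kv: kv[1])[:20]:
    print(f"{label}: {count}")
```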
## Dataset Structure
The following classes only have 10 samples in the train split:
- Australian_terrier
- Saluki
- Cardigan
- standard_schnauzer
- Eskimo_dog
- American_Staffordshire_terrier
- Lakeland_terrier
- Lhasa
- cocker_spaniel
- Greater_Swiss_Mountain_dog
- basenji
- toy_terrier
- Chihuahua
- Walker_hound
- Shih-Tzu
- Newfoundland
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
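The limited classes can be verified directly from the loaded dataset. The sketch below assumes the splits are stored as sample tags named `train`, `val`, and `test`:
```python
# Count training samples per class; the limited classes should report 10
train = dataset.match_tags("train")
counts = train.count_values("ground_truth.label")
limited = {label: n for label, n in counts.items() if n <= 10}
print(limited)
```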
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@inproceedings{KhoslaYaoJayadevaprakashFeiFei_FGVC2011,
author = "Aditya Khosla and Nityananda Jayadevaprakash and Bangpeng Yao and Li Fei-Fei",
title = "Novel Dataset for Fine-Grained Image Categorization",
booktitle = "First Workshop on Fine-Grained Visual Categorization, IEEE Conference on Computer Vision and Pattern Recognition",
  year = "2011",
month = "June",
address = "Colorado Springs, CO",
}
```
## Dataset Card Author
[Jacob Marks](https://huggingface.co/jamarks)
## Dataset Contacts
[email protected] and [email protected]