Upload README.md with huggingface_hub
README.md CHANGED
@@ -1,98 +1,95 @@
---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: smooth-or-featured-candels_smooth
    dtype: int32
  - name: smooth-or-featured-candels_features
    dtype: int32
  - name: smooth-or-featured-candels_artifact
    dtype: int32
  - name: how-rounded-candels_completely
    dtype: int32
  - name: how-rounded-candels_in-between
    dtype: int32
  - name: how-rounded-candels_cigar-shaped
    dtype: int32
  - name: clumpy-appearance-candels_yes
    dtype: int32
  - name: clumpy-appearance-candels_no
    dtype: int32
  - name: clump-count-candels_1
    dtype: int32
  - name: clump-count-candels_2
    dtype: int32
  - name: clump-count-candels_3
    dtype: int32
  - name: clump-count-candels_4
    dtype: int32
  - name: clump-count-candels_5-plus
    dtype: int32
  - name: clump-count-candels_cant-tell
    dtype: int32
  - name: disk-edge-on-candels_yes
    dtype: int32
  - name: disk-edge-on-candels_no
    dtype: int32
  - name: edge-on-bulge-candels_yes
    dtype: int32
  - name: edge-on-bulge-candels_no
    dtype: int32
  - name: bar-candels_yes
    dtype: int32
  - name: bar-candels_no
    dtype: int32
  - name: has-spiral-arms-candels_yes
    dtype: int32
  - name: has-spiral-arms-candels_no
    dtype: int32
  - name: spiral-winding-candels_tight
    dtype: int32
  - name: spiral-winding-candels_medium
    dtype: int32
  - name: spiral-winding-candels_loose
    dtype: int32
  - name: spiral-arm-count-candels_1
    dtype: int32
  - name: spiral-arm-count-candels_2
    dtype: int32
  - name: spiral-arm-count-candels_3
    dtype: int32
  - name: spiral-arm-count-candels_4
    dtype: int32
  - name: spiral-arm-count-candels_5-plus
    dtype: int32
  - name: spiral-arm-count-candels_cant-tell
    dtype: int32
  - name: bulge-size-candels_none
    dtype: int32
  - name: bulge-size-candels_obvious
    dtype: int32
  - name: bulge-size-candels_dominant
    dtype: int32
  - name: merging-candels_merger
    dtype: int32
  - name: merging-candels_tidal-debris
    dtype: int32
  - name: merging-candels_both
    dtype: int32
  - name: merging-candels_neither
    dtype: int32
  splits:
  - name: train
    num_bytes: 5046191834.354
    num_examples: 38478
  - name: test
    num_bytes: 1254244849.2
    num_examples: 9620
  download_size: 6262278970
  dataset_size: 6300436683.554
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

---
{}
---

# GZ Campaign Datasets

## Dataset Summary

[Galaxy Zoo](https://www.galaxyzoo.org) volunteers label telescope images of galaxies according to their visible features: spiral arms, galaxy-galaxy collisions, and so on.
These datasets share the galaxy images and volunteer labels in a machine-learning-friendly format.

- **Curated by:** [Mike Walmsley](https://walmsley.dev/)
- **License:** [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en). We specifically require **all models trained on these datasets to be released as source code by publication**.

## Downloading

Install the Datasets library

    pip install datasets

and then log in to your HuggingFace account

    huggingface-cli login
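
If you prefer to authenticate from Python (for example, inside a notebook) rather than the command line, the `huggingface_hub` `login()` helper is an alternative. This is only a sketch; it assumes you have already created an access token under your HuggingFace account settings.

```python
from huggingface_hub import login

# Prompts for a HuggingFace access token and caches it locally, so that
# gated datasets you have been approved for can be downloaded.
login()  # or pass the token directly, e.g. login(token="hf_...") (placeholder value)
```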

All unpublished* datasets are temporarily "gated", i.e. you must have requested and been approved for access. Galaxy Zoo team members should go to https://huggingface.co/mwalmsley, click the dataset, click "Request access", and then wait for approval.
Gating will be removed on publication.

*Currently: the `gz_h2o` and `gz_ukidss` datasets

## Usage

```python
from datasets import load_dataset

dataset = load_dataset(
    'mwalmsley/gz_candels',  # each dataset has a random fixed train/test split
    split='train'  # split='train' picks which split to load
    # some datasets also allow name=subset (e.g. name="tiny" for gz_evo); see the viewer for subset options
)
dataset.set_format('torch')  # your framework of choice, e.g. numpy, tensorflow, jax, etc.
print(dataset[0]['image'].shape)
```
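
The subset option mentioned in the comment works like this (a sketch; `gz_evo` and its `tiny` subset are the example named in the comment above, and other datasets may offer different subsets):

```python
from datasets import load_dataset

# 'name' selects a subset/config where a dataset provides one (see the dataset viewer for options)
tiny_evo = load_dataset('mwalmsley/gz_evo', name='tiny', split='train')
```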

Then use the `dataset` object as you would any other HuggingFace dataset, e.g.,

```python
from torch.utils.data import DataLoader

dataloader = DataLoader(dataset, batch_size=4, num_workers=1)
for batch in dataloader:
    # batch has the 'image' key, plus a key counting the volunteer votes for each answer
    # (e.g. smooth-or-featured-gz2_smooth)
    print(batch.keys())
    print(batch['image'].shape)
    break
```

You may find these HuggingFace docs useful:
- [PyTorch loading options](https://huggingface.co/docs/datasets/en/use_with_pytorch#data-loading).
- [Applying transforms/augmentations](https://huggingface.co/docs/datasets/en/image_process#apply-transforms).
- [Frameworks supported](https://huggingface.co/docs/datasets/v2.19.0/en/package_reference/main_classes#datasets.Dataset.set_format) by `set_format`.

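For example, augmentations can be attached with `set_transform`, which applies a function lazily whenever examples are accessed (note that it replaces the `set_format` choice above). A minimal sketch, assuming `torchvision` is installed and `dataset` has been loaded as in the Usage section; the particular augmentations are placeholders to adapt to your own pipeline.

```python
from torchvision import transforms

# Placeholder augmentations; galaxies have no preferred orientation, so flips and rotations are natural choices
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=180),
    transforms.ToTensor(),
])

def apply_augmentations(batch):
    # batch['image'] is a list of PIL images; replace it with augmented CHW tensors
    batch['image'] = [augment(image) for image in batch['image']]
    return batch

dataset.set_transform(apply_augmentations)
print(dataset[0]['image'].shape)  # e.g. torch.Size([3, 424, 424])
```
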

## Dataset Structure

Each dataset is structured like:

```python
{
    'image': ...,  # image of a galaxy
    'smooth-or-featured-[campaign]_smooth': 4,
    'smooth-or-featured-[campaign]_featured-or-disk': 12,
    ...  # and so on for many questions and answers
}
```

Images are loaded according to your `set_format` choice above. For example, `set_format("torch")` gives a (3, 424, 424) CHW `torch.Tensor`.

The other keys are formatted like `[question]_[answer]`, where `question` is what the volunteers were asked (e.g. "smooth or featured?") and `answer` is the choice selected (e.g. "smooth"). **The values are the count of volunteers who selected each answer.**

`question` is appended with a string noting the Galaxy Zoo campaign in which that question was asked, e.g. `smooth-or-featured-gz2`. For most datasets, all questions were asked during the same campaign. For GZ DESI, there are three campaigns (`dr12`, `dr5`, and `dr8`) with very similar questions.
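
Because the values are raw vote counts, you will often want per-question vote fractions instead. A small illustrative helper (our own sketch, not part of the dataset) for the first question in GZ CANDELS, using the column names from this dataset's schema:

```python
def smooth_vote_fraction(example):
    """Illustrative helper: fraction of volunteers answering 'smooth' to the first GZ CANDELS question."""
    counts = [
        example['smooth-or-featured-candels_smooth'],
        example['smooth-or-featured-candels_features'],
        example['smooth-or-featured-candels_artifact'],
    ]
    total = sum(counts)
    # deeper questions can have zero votes because of the decision tree (see Key Limitations below)
    return counts[0] / total if total > 0 else float('nan')

print(smooth_vote_fraction(dataset[0]))
```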

GZ Evo

(We will shortly add keys for the astronomical identifiers, i.e. the sky coordinates and unique telescope source IDs.)

## Key Limitations

Because the volunteers are answering a decision tree, the questions asked depend on the previous answers, and so each galaxy and each question can have very different total numbers of votes. This interferes with typical metrics that use aggregated labels (e.g. classifying the most-voted answer, or regressing on the mean vote fraction) because we have different levels of confidence in the aggregated labels for each galaxy. We suggest a custom loss to handle this. Please see the Datasets and Benchmarks paper for more details (under review, sorry).
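
For illustration only (the loss suggested in the paper may differ), one common way to respect the varying vote totals is to treat the per-question counts as draws from a multinomial and minimise their negative log-likelihood, so galaxies with more votes constrain the model more strongly. A minimal PyTorch sketch, assuming `predicted_probs` are your model's per-answer probabilities for a single question:

```python
import torch

def multinomial_nll(vote_counts: torch.Tensor, predicted_probs: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood of observed vote counts under predicted answer probabilities.

    vote_counts: (batch, n_answers) integer counts for one question
    predicted_probs: (batch, n_answers) probabilities summing to 1 along the last axis
    Galaxies with zero votes for this question contribute zero loss.
    """
    log_probs = torch.log(predicted_probs.clamp(min=1e-8))
    return -(vote_counts * log_probs).sum(dim=-1).mean()
```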

All labels are imperfect. The vote counts may not always reflect the true appearance of each galaxy. Additionally,
the true appearance of each galaxy may be uncertain, even to expert astronomers.
We therefore caution against over-interpreting small changes in performance as indicating that a method is "superior". **These datasets should not be used as a precise performance benchmark.**