---
annotations_creators:
  - crowdsourced
license: cc-by-nc-sa-4.0
size_categories: []
task_categories:
  - image-classification
  - image-feature-extraction
pretty_name: Galaxy Zoo CANDELS
arxiv: 2404.02973
tags:
  - galaxy zoo
  - physics
  - astronomy
  - galaxies
  - citizen science
---

# GZ Campaign Datasets

## Dataset Summary

Galaxy Zoo volunteers label telescope images of galaxies according to their visible features: spiral arms, galaxy-galaxy collisions, and so on. These datasets share the galaxy images and volunteer labels in a machine-learning-friendly format. We use these datasets to train our foundation models. We hope they'll help you too.

- **Curated by:** Mike Walmsley
- **License:** cc-by-nc-sa-4.0. We specifically require all models trained on these datasets to be released as source code by publication.

## Downloading

Install the Datasets library

```bash
pip install datasets
```

and then log in to your HuggingFace account:

```bash
huggingface-cli login
```

All unpublished* datasets are temporarily "gated", i.e., you must have requested and been approved for access. Galaxy Zoo team members should go to https://huggingface.co/mwalmsley, click the dataset, and "request access", then wait for approval. Gating will be removed on publication.

*Currently: the gz_h2o and gz_ukidss datasets

## Usage

```python
from datasets import load_dataset

dataset = load_dataset(
    'mwalmsley/gz_candels',  # each dataset has a random fixed train/test split
    split='train'  # split='train' picks which split to load
    # some datasets also allow name=subset (e.g. name="tiny" for gz_evo); see the viewer for subset options
)
dataset.set_format('torch')  # or your framework of choice, e.g. numpy, tensorflow, jax
print(dataset[0]['image'].shape)
```

Then use the `dataset` object as with any other HuggingFace dataset, e.g.,

```python
from torch.utils.data import DataLoader

dataloader = DataLoader(dataset, batch_size=4, num_workers=1)
for batch in dataloader:
    print(batch.keys())
    # the image key, plus a key counting the volunteer votes for each answer
    # (e.g. smooth-or-featured-gz2_smooth)
    print(batch['image'].shape)
    break
```

You may find the HuggingFace Datasets documentation useful.

## Dataset Structure

Each dataset is structured like:

```python
{
  'image': ...,  # image of a galaxy
  'smooth-or-featured-[campaign]_smooth': 4,
  'smooth-or-featured-[campaign]_featured-or-disk': 12,
  ...  # and so on, for many questions and answers
}
```

Images are loaded according to your `set_format` choice above. For example, `set_format("torch")` gives a (3, 424, 424) CHW `torch.Tensor`.
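Plotting libraries like matplotlib expect channels-last (HWC) ordering, so a transpose is needed before display. A minimal sketch with numpy, where the zero array stands in for a loaded galaxy image:

```python
import numpy as np

# stand-in for a loaded galaxy image in CHW order, as returned with set_format('torch')
chw_image = np.zeros((3, 424, 424), dtype=np.uint8)

# transpose channels-first (CHW) to channels-last (HWC), e.g. for plt.imshow
hwc_image = np.transpose(chw_image, (1, 2, 0))
print(hwc_image.shape)  # (424, 424, 3)
```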

The other keys are formatted like `[question]_[answer]`, where `question` is what the volunteers were asked (e.g. "smooth or featured?") and `answer` is the choice selected (e.g. "smooth"). The values are the counts of volunteers who selected each answer.

`question` is appended with a string noting the Galaxy Zoo campaign in which it was asked, e.g. `smooth-or-featured-gz2`. For most datasets, all questions were asked during the same campaign. For GZ DESI, there are three campaigns (dr12, dr5, and dr8) with very similar questions.
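Since the labels are raw vote counts, a common first step is to convert the counts for one question into vote fractions. A hedged sketch, using an illustrative row with the gz2 campaign suffix (the counts here are made up):

```python
# illustrative vote counts for one galaxy, one question (gz2 campaign suffix)
row = {
    'smooth-or-featured-gz2_smooth': 4,
    'smooth-or-featured-gz2_featured-or-disk': 12,
    'smooth-or-featured-gz2_artifact': 0,
}

# total volunteers who answered this question for this galaxy
total_votes = sum(row.values())

# vote fraction per answer; guard against questions nobody answered
fractions = {k: v / total_votes if total_votes > 0 else 0. for k, v in row.items()}
print(fractions['smooth-or-featured-gz2_featured-or-disk'])  # 0.75
```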

### GZ Evo

(we will shortly add keys for the astronomical identifiers i.e. the sky coordinates and telescope source unique ids)

## Key Limitations

Because the volunteers are answering a decision tree, the questions asked depend on the previous answers, and so each galaxy and each question can have very different total numbers of votes. This interferes with typical metrics that use aggregated labels (e.g. classification on the most-voted answer, regression on the mean vote fraction, etc.) because we have different levels of confidence in the aggregated labels for each galaxy. We suggest a custom loss to handle this. Please see the Datasets and Benchmarks paper for more details (under review, sorry).
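One simple way to keep the varying vote totals in the loss is to score predicted answer probabilities against the raw counts with a multinomial negative log-likelihood, so galaxies with more votes contribute a sharper signal. A minimal pure-Python sketch for illustration only; this is not the exact loss from the paper:

```python
import math

def multinomial_nll(probs, counts):
    """Negative log-likelihood of observed vote counts under predicted answer probabilities.

    Drops the multinomial coefficient, which is constant w.r.t. the model.
    Galaxies with more total votes produce a larger-magnitude loss, reflecting
    our greater confidence in their aggregated labels.
    """
    eps = 1e-12  # avoid log(0)
    return -sum(k * math.log(p + eps) for p, k in zip(probs, counts))

# two galaxies with identical vote fractions but different vote totals
low_confidence = multinomial_nll([0.7, 0.3], [7, 3])     # 10 votes
high_confidence = multinomial_nll([0.7, 0.3], [70, 30])  # 100 votes
print(high_confidence > low_confidence)  # True: more votes, stronger signal
```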

All labels are imperfect. The vote counts may not always reflect the true appearance of each galaxy. Additionally, the true appearance of each galaxy may be uncertain - even to expert astronomers. We therefore caution against over-interpreting small changes in performance to indicate a method is "superior". These datasets should not be used as a precise performance benchmark.