---
size_categories:
  - n<1K
task_categories:
  - image-classification
  - image-segmentation
dataset_info:
  features:
    - name: image
      dtype: image
    - name: label
      dtype:
        class_label:
          names:
            '0': antelope
            '1': badger
            '2': bat
            '3': bear
            '4': bee
            '5': beetle
            '6': bison
            '7': boar
            '8': butterfly
            '9': cat
            '10': caterpillar
            '11': chimpanzee
            '12': cockroach
            '13': cow
            '14': coyote
            '15': crab
            '16': crow
            '17': deer
            '18': dog
            '19': dolphin
            '20': donkey
            '21': dragonfly
            '22': duck
            '23': eagle
            '24': elephant
            '25': flamingo
            '26': fly
            '27': fox
            '28': goat
            '29': goldfish
            '30': goose
            '31': gorilla
            '32': grasshopper
            '33': hamster
            '34': hare
            '35': hedgehog
            '36': hippopotamus
            '37': hornbill
            '38': horse
            '39': hummingbird
            '40': hyena
            '41': jellyfish
            '42': kangaroo
            '43': koala
            '44': ladybugs
            '45': leopard
            '46': lion
            '47': lizard
            '48': lobster
            '49': mosquito
            '50': moth
            '51': mouse
            '52': octopus
            '53': okapi
            '54': orangutan
            '55': otter
            '56': owl
            '57': ox
            '58': oyster
            '59': panda
            '60': parrot
            '61': pelecaniformes
            '62': penguin
            '63': pig
            '64': pigeon
            '65': porcupine
            '66': possum
            '67': raccoon
            '68': rat
            '69': reindeer
            '70': rhinoceros
            '71': sandpiper
            '72': seahorse
            '73': seal
            '74': shark
            '75': sheep
            '76': snake
            '77': sparrow
            '78': squid
            '79': squirrel
            '80': starfish
            '81': swan
            '82': tiger
            '83': turkey
            '84': turtle
            '85': whale
            '86': wolf
            '87': wombat
            '88': woodpecker
            '89': zebra
  splits:
    - name: train
      num_bytes: 520059675.84
      num_examples: 4320
    - name: test
      num_bytes: 138887701.08
      num_examples: 1080
  download_size: 696270301
  dataset_size: 658947376.92
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
tags:
  - animals
---

# Dataset Card for Animal Wildlife

This dataset is a port of the "Animal Image Dataset" that you can find on Kaggle. The dataset contains 60 pictures for each of 90 types of animals, with varying image sizes.

With respect to the original dataset, I created train/test partitions (80%/20%) to make it compatible with Hugging Face `datasets`.

Note: at the time of writing, according to the Croissant ML metadata, the original license of the data is `sc:CreativeWork`. If you believe this dataset violates any license, please open an issue in the discussion tab so I can take action as soon as possible.

## How to use this data

```python
from datasets import load_dataset

# for exploration
ds = load_dataset("lucabaggi/animal-wildlife", split="train")

# for training
ds = load_dataset("lucabaggi/animal-wildlife")
```
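Since `label` is stored as a `ClassLabel` feature, you can map the integer ids back to the animal names. A minimal sketch (the printed values are illustrative):

```python
from datasets import load_dataset

ds = load_dataset("lucabaggi/animal-wildlife", split="train")

example = ds[0]
image = example["image"]     # decoded as a PIL image
label_id = example["label"]  # integer class id

# ClassLabel provides int2str/str2int to convert between ids and names
label_name = ds.features["label"].int2str(label_id)
print(label_id, label_name)  # e.g. 0 antelope
```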

## How the data was generated

You can find the source code for the extraction pipeline here. Note: the code was partly generated with Claude 3 and Codestral 😎😅 Please feel free to open an issue in the discussion section if you wish to improve the code.

```
$ uv run --python=3.11 -- python -m extract --help

usage: extract.py [-h] [--destination-dir DESTINATION_DIR] [--split-ratio SPLIT_RATIO] [--random-seed RANDOM_SEED] [--remove-zip] zip_file

Reorganize dataset.

positional arguments:
  zip_file              Path to the zip file.

options:
  -h, --help            show this help message and exit
  --destination-dir DESTINATION_DIR
                        Path to the destination directory.
  --split-ratio SPLIT_RATIO
                        Ratio of data to be used for training.
  --random-seed RANDOM_SEED
                        Random seed for reproducibility.
  --remove-zip          Whether to remove the source zip archive file after extraction.
```

Example usage:

  1. Download the data from Kaggle. You can use the Kaggle Python SDK, but that might require an API key if you use it locally.

  2. Invoke the script:

```
uv run --python=3.11 -- python -m extract -- archive.zip
```

This will extract the contents of the zip archive into a `data` directory, splitting it into train and test sets with an 80%/20% ratio (a rough sketch of this kind of split is shown at the end of this section).

  3. Upload to the hub:

```python
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="data")
ds.push_to_hub("lucabaggi/animal-wildlife")
```
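For reference, below is a rough sketch of the kind of reorganization the extraction script performs. It is not the actual implementation (the real code lives in the repository linked above), and it assumes the Kaggle archive extracts into one sub-folder per animal class:

```python
import random
import shutil
import zipfile
from pathlib import Path


def split_archive(zip_file: str, destination_dir: str = "data",
                  split_ratio: float = 0.8, random_seed: int = 42) -> None:
    """Extract the archive and split every class folder into train/test subsets."""
    staging = Path("_staging")
    with zipfile.ZipFile(zip_file) as archive:
        archive.extractall(staging)

    rng = random.Random(random_seed)
    destination = Path(destination_dir)

    # assumption: the archive contains one leaf folder per animal class
    class_dirs = [p for p in staging.rglob("*")
                  if p.is_dir() and not any(c.is_dir() for c in p.iterdir())]

    for class_dir in sorted(class_dirs):
        images = sorted(f for f in class_dir.iterdir() if f.is_file())
        rng.shuffle(images)
        cutoff = int(len(images) * split_ratio)
        for split, files in (("train", images[:cutoff]), ("test", images[cutoff:])):
            target = destination / split / class_dir.name
            target.mkdir(parents=True, exist_ok=True)
            for image in files:
                shutil.copy2(image, target / image.name)
```

The resulting `data/train/<class>/` and `data/test/<class>/` layout is what the `imagefolder` loader in step 3 expects.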