---
license: mit
task_categories:
  - image-classification
  - image-to-text
  - image-to-image
  - text-to-image
  - image-feature-extraction
language:
  - en
  - ja
tags:
  - not-for-all-audiences
size_categories:
  - 1M<n<10M
---

# Pixiv 2.6M

This dataset contains 2.6M images from Pixiv.

## Introduction

This dataset aims to cover concepts that are lacking or banned on Danbooru.

## Image Content

The images were collected with a search crawler built with Selenium and Python.
I used tags that may be underrepresented on Danbooru (for example "scenery") as well as some tags for explicit content to run searches, collected the resulting URLs, and then downloaded all the images.
Since many posts contain multiple images on different subjects/topics, this dataset may look a little messy.

This dataset contains a total of 2,639,665 images. (Deduplicated within this set.)
This dataset contains a total of 747,482 Pixiv posts. (Deduplicated within this set.)

## Format

As with my other datasets, this dataset stores images in tar files that can be read by webdataset. (Or you can refer to HakuBooru's code.)
All images are resized to at most 4M pixels and encoded as WebP at 90% quality (via PIL), which is 608GiB in total.
The original files (4.29TiB in total) may be uploaded in the future.
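The resize-and-encode step described above can be sketched roughly as follows with PIL. This is not the exact script used to build the dataset, just a minimal reproduction of the stated parameters (≤4M pixels, WebP quality 90):

```python
import math
from io import BytesIO

from PIL import Image

MAX_PIXELS = 4_000_000  # "max 4M pixels" as stated above


def to_webp(path: str) -> bytes:
    """Downscale an image to at most MAX_PIXELS and encode it as WebP q90."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    if w * h > MAX_PIXELS:
        # Scale both sides by sqrt(ratio) so the pixel count lands under the cap.
        scale = math.sqrt(MAX_PIXELS / (w * h))
        img = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
    buf = BytesIO()
    img.save(buf, format="WEBP", quality=90)
    return buf.getvalue()
```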

## Aesthetic score

Two aesthetic scores are provided for this dataset, based on the following models:

You can use the scores (in JSON format) to filter out low-quality/meaningless images.
It is recommended to aggregate the scores from both scorers instead of using only one.

## Danbooru tags

Tags and embeddings are provided in the same format as the images (in tar files), obtained from wd-tagger-v2 (SwinV2).
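Tagger output like this is typically a tag-to-confidence mapping per image, which makes subset selection straightforward. The record layout below is hypothetical (the dataset's actual key names may differ), but shows the idea of filtering by tag with a confidence cutoff:

```python
def has_tags(record: dict, wanted: set, min_conf: float = 0.35) -> bool:
    """True if every wanted tag appears in the record above min_conf.

    record: hypothetical per-image tagger output, e.g.
    {"tags": {"scenery": 0.92, "outdoors": 0.40}} — real keys may differ.
    """
    # Keep only tags the tagger is reasonably confident about.
    confident = {t for t, c in record.get("tags", {}).items() if c >= min_conf}
    return wanted <= confident
```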

## How to use it

You can use HakuBooru's implementation to read all the images from the tar files, or use chesschaser to download each image/tag/embedding manually with the provided index file.
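If you prefer not to pull in HakuBooru or webdataset, webdataset-style tars can also be read with only the standard library: members that share a basename (e.g. `123.webp` and `123.json`) form one sample. A minimal sketch, assuming that naming convention holds for these shards:

```python
import tarfile


def iter_samples(tar_path: str):
    """Yield (key, {ext: bytes}) samples from a webdataset-style tar,
    grouping members that share the same basename."""
    samples = {}
    with tarfile.open(tar_path) as tf:
        for member in tf:
            if not member.isfile():
                continue
            # webdataset convention: key is the basename up to the first dot.
            key, _, ext = member.name.partition(".")
            samples.setdefault(key, {})[ext] = tf.extractfile(member).read()
    yield from samples.items()
```

Each yielded sample is then a dict like `{"webp": ..., "json": ...}` that you can decode with PIL / `json.loads` as needed.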