---
license: cc0-1.0
language:
- en
annotations_creators:
- no-annotation
size_categories:
- 1M<n<10M
source_datasets:
- stylebreeder/stylebreeder
pretty_name: stylebreeder
layout: default
tags:
- stable diffusion
- prompt engineering
- prompts
- research paper
dataset_info:
  features:
  - name: style_embedding
    sequence: float16
  - name: content_embedding
    sequence: float16
  - name: image
    dtype: image
---
|
|
|
|
|
This is a processed version of the 2M split of [stylebreeder/stylebreeder](https://huggingface.co/datasets/stylebreeder/stylebreeder).
|
|
|
I took the 2M split and ran the [CSD](https://huggingface.co/yuxi-liu-wired/CSD) model on every image, producing two 768-dimensional embedding vectors (style and content) per image. I also re-saved the images at a much smaller resolution as JPEGs to save space.
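
For reference, a minimal sketch of the image down-sizing step (the target size and JPEG quality below are illustrative assumptions, not the exact settings used for this dataset):

```python
from PIL import Image

def shrink_to_jpeg(src_path: str, dst_path: str, max_side: int = 512, quality: int = 90) -> None:
    # Resize so the longer side is at most `max_side`, then recompress as JPEG.
    # `max_side` and `quality` are placeholder values, not the dataset's actual settings.
    img = Image.open(src_path).convert("RGB")
    img.thumbnail((max_side, max_side))
    img.save(dst_path, format="JPEG", quality=quality)
```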
|
|
|
I apologize that the dataset viewer doesn't work. I uploaded the dataset directly to Hugging Face via the website UI, and it seems their automatic conversion broke something. I can't load the dataset directly because it runs out of memory locally, and I don't know how to stream a dataset from disk.
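
If the uploaded files themselves load cleanly, one way to use the dataset without fitting it in memory is to stream it from the Hub. A rough sketch (the repository id below is a placeholder for this dataset's id):

```python
from datasets import load_dataset
import numpy as np

# Stream records one at a time instead of downloading and decoding everything up front.
# "user/dataset-id" is a placeholder; substitute this dataset's repository id.
ds = load_dataset("user/dataset-id", split="train", streaming=True)

for example in ds:
    style = np.asarray(example["style_embedding"], dtype=np.float16)      # 768-dim style vector
    content = np.asarray(example["content_embedding"], dtype=np.float16)  # 768-dim content vector
    image = example["image"]                                              # downscaled JPEG as a PIL image
    break
```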