---
size_categories: n<1K
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: models
    sequence: string
  - name: images
    list:
    - name: path
      dtype: string
  splits:
  - name: train
    num_bytes: 1121
    num_examples: 4
  download_size: 3518
  dataset_size: 1121
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
---

Built with Distilabel

# Dataset Card for img-prefs-distilabel-artifacts-sample

This dataset has been created with [distilabel](https://distilabel.argilla.io/).

## Dataset Summary

This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it, using the `distilabel` CLI:

```console
distilabel pipeline run --config "https://huggingface.co/datasets/dvilasuero/img-prefs-distilabel-artifacts-sample/raw/main/pipeline.yaml"
```

or explore the configuration:

```console
distilabel pipeline info --config "https://huggingface.co/datasets/dvilasuero/img-prefs-distilabel-artifacts-sample/raw/main/pipeline.yaml"
```

## Dataset structure

The examples have the following structure per configuration:
Configuration: default
```json
{
    "images": [
        {
            "path": "artifacts/flux_dev/images/90b884933d23c4d57ca01dbe2898d405.jpeg"
        },
        {
            "path": "artifacts/opendalle/images/90b884933d23c4d57ca01dbe2898d405.jpeg"
        }
    ],
    "models": [
        "black-forest-labs/FLUX.1-dev",
        "dataautogpt3/OpenDalleV1.1"
    ],
    "prompt": "intelligence"
}
```

This subset can be loaded as:

```python
from datasets import load_dataset

ds = load_dataset("dvilasuero/img-prefs-distilabel-artifacts-sample", "default")
```

Or simply as follows, since there is only one configuration and it is named `default`:

```python
from datasets import load_dataset

ds = load_dataset("dvilasuero/img-prefs-distilabel-artifacts-sample")
```
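Each record pairs one prompt with one generated image per model. A minimal sketch of pairing each model with its image path, assuming (as in the example record above) that the order of `models` matches the order of `images`, and using a hypothetical in-memory copy of that record:

```python
# Hypothetical in-memory copy of the example record shown above.
record = {
    "images": [
        {"path": "artifacts/flux_dev/images/90b884933d23c4d57ca01dbe2898d405.jpeg"},
        {"path": "artifacts/opendalle/images/90b884933d23c4d57ca01dbe2898d405.jpeg"},
    ],
    "models": [
        "black-forest-labs/FLUX.1-dev",
        "dataautogpt3/OpenDalleV1.1",
    ],
    "prompt": "intelligence",
}

# Zip models with their image paths, assuming matching order.
pairs = list(zip(record["models"], (img["path"] for img in record["images"])))
for model, path in pairs:
    print(f"{record['prompt']!r}: {model} -> {path}")
```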
## Artifacts

* **Step**: `opendalle`
  * **Artifact name**: `images`
    * `type`: image
    * `library`: diffusers
* **Step**: `flux_dev`
  * **Artifact name**: `images`
    * `type`: image
    * `library`: diffusers