Dataset: DrawBench

Modalities: Text | Languages: English | Size: < 1K | Libraries: Datasets

Commit 6324c60 by shunk031 (parent: fc68220)

Initialize (#1)

* add files

* add ci.yaml

* update

* add `push_to_hub.yaml`

* update README

.github/workflows/ci.yaml ADDED
```yaml
name: CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
    paths-ignore:
      - 'README.md'

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.9', '3.10']

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install dependencies
        run: |
          pip install -U pip setuptools wheel poetry
          poetry install

      - name: Format
        run: |
          poetry run black --check .

      - name: Lint
        run: |
          poetry run ruff check .

      - name: Type check
        run: |
          poetry run mypy . \
            --ignore-missing-imports \
            --no-strict-optional \
            --no-site-packages \
            --cache-dir=/dev/null

      - name: Run tests
        run: |
          poetry run pytest --color=yes -rf
```
.github/workflows/push_to_hub.yaml ADDED
```yaml
name: Sync to Hugging Face Hub

on:
  workflow_run:
    workflows:
      - CI
    branches:
      - main
    types:
      - completed

jobs:
  push_to_hub:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Push to Hugging Face Hub
        env:
          HF_TOKEN: ${{ secrets.HF_TOKEN }}
          HF_USERNAME: ${{ secrets.HF_USERNAME }}
        run: |
          git fetch --unshallow
          git push --force https://${HF_USERNAME}:${HF_TOKEN}@huggingface.co/datasets/${HF_USERNAME}/DrawBench main
```
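For reference, the same sync could also be done with the `huggingface_hub` Python client instead of a raw, token-authenticated git push. The sketch below is illustrative only (it is not what this workflow uses) and assumes `HF_TOKEN` and `HF_USERNAME` are set in the environment.

```python
# Illustrative alternative to the git-push step above (not part of this
# commit): upload the checked-out repository with the huggingface_hub client.
import os

from huggingface_hub import HfApi

api = HfApi(token=os.environ["HF_TOKEN"])  # assumes HF_TOKEN is set
api.upload_folder(
    folder_path=".",  # repository root checked out by actions/checkout
    repo_id=f"{os.environ['HF_USERNAME']}/DrawBench",
    repo_type="dataset",
)
```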
.gitignore ADDED
```gitignore
# Created by https://www.toptal.com/developers/gitignore/api/python
# Edit at https://www.toptal.com/developers/gitignore?templates=python

### Python ###
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

### Python Patch ###
# Poetry local configuration file - https://python-poetry.org/docs/configuration/#local-configuration
poetry.toml

# ruff
.ruff_cache/

# LSP config files
pyrightconfig.json

# End of https://www.toptal.com/developers/gitignore/api/python
```
DrawBench.py ADDED
```python
import datasets as ds
import pandas as pd

_CITATION = """\
@article{saharia2022photorealistic,
  title={Photorealistic text-to-image diffusion models with deep language understanding},
  author={Saharia, Chitwan and Chan, William and Saxena, Saurabh and Li, Lala and Whang, Jay and Denton, Emily L and Ghasemipour, Kamyar and Gontijo Lopes, Raphael and Karagol Ayan, Burcu and Salimans, Tim and others},
  journal={Advances in Neural Information Processing Systems},
  volume={35},
  pages={36479--36494},
  year={2022}
}
"""

_SHORT_DESCRIPTION = "DrawBench is a new comprehensive and challenging evaluation benchmark for the text-to-image task."

_DESCRIPTION = """\
DrawBench is a comprehensive and challenging set of prompts that support the evaluation and comparison of text-to-image models. This benchmark contains 11 categories of prompts, testing different capabilities of models such as the ability to faithfully render different colors, numbers of objects, spatial relations, text in the scene, and unusual interactions between objects.\
"""

_HOMEPAGE = "https://imagen.research.google/"

# CSV export of the DrawBench prompt spreadsheet.
_URL = "https://docs.google.com/spreadsheets/d/1y7nAbmR4FREi6npB1u-Bo3GFdwdOPYJc617rBOxIRHY/gviz/tq?tqx=out:csv"
_CATEGORIES = [
    "Colors",
    "Conflicting",
    "Counting",
    "DALL-E",
    "Descriptions",
    "Gary Marcus et al.",
    "Misspellings",
    "Positional",
    "Rare Words",
    "Reddit",
    "Text",
]


class DrawBench(ds.GeneratorBasedBuilder):
    VERSION = ds.Version("1.0.0")
    BUILDER_CONFIGS = [
        ds.BuilderConfig(
            version=VERSION,
            description=_SHORT_DESCRIPTION,
        ),
    ]

    def _info(self):
        features = ds.Features(
            {
                "prompts": ds.Value("string"),
                "category": ds.ClassLabel(num_classes=11, names=_CATEGORIES),
            }
        )
        return ds.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            homepage=_HOMEPAGE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager: ds.DownloadManager):
        file_path = dl_manager.download(_URL)
        # DrawBench is an evaluation benchmark, so all prompts go in a test split.
        return [
            ds.SplitGenerator(
                name=ds.Split.TEST,
                gen_kwargs={"file_path": file_path},
            )
        ]

    def _generate_examples(self, file_path: str):
        df = pd.read_csv(file_path)
        # Lowercase the column headers so they match the feature names above.
        df.columns = df.columns.str.lower()

        for i, example in enumerate(df.to_dict(orient="records")):
            yield i, example
```
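For context, here is a minimal usage sketch for the loading script above. It assumes the dataset has been synced to the Hub as `shunk031/DrawBench` (the repo id implied by the push_to_hub workflow); passing the local `DrawBench.py` path works the same way.

```python
# Minimal usage sketch (assumes the Hub repo id shunk031/DrawBench exists;
# alternatively, pass the local "DrawBench.py" path to load_dataset).
import datasets as ds

dataset = ds.load_dataset("shunk031/DrawBench")
test_split = dataset["test"]

# "category" is a ClassLabel, so values are integers; decode with int2str.
example = test_split[0]
category = test_split.features["category"].int2str(example["category"])
print(example["prompts"], "->", category)
```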
README.md ADDED
````markdown
---
annotations_creators:
- crowdsourced
language:
- en
language_creators: []
license:
- unknown
multilinguality:
- monolingual
pretty_name: DrawBench
size_categories:
- n<1K
source_datasets:
- original
tags: []
task_categories:
- text-to-image
task_ids: []
---

# Dataset Card for DrawBench

## Table of Contents
- [Dataset Card for DrawBench](#dataset-card-for-drawbench)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://imagen.research.google/
- **Repository:** https://github.com/shunk031/huggingface-datasets_DrawBench
- **Paper:** https://arxiv.org/abs/2205.11487

### Dataset Summary

DrawBench is a comprehensive and challenging set of prompts that supports the evaluation and comparison of text-to-image models. The benchmark contains 11 categories of prompts, testing different capabilities of models such as the ability to faithfully render different colors, numbers of objects, spatial relations, text in the scene, and unusual interactions between objects.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The language data in DrawBench is in English (BCP-47 en-US).

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

- `prompts` (`string`): the text prompt given to a text-to-image model.
- `category` (`ClassLabel`): one of the 11 prompt categories (Colors, Conflicting, Counting, DALL-E, Descriptions, Gary Marcus et al., Misspellings, Positional, Rare Words, Reddit, Text).

### Data Splits

The dataset provides a single `test` split of 200 prompts.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

[More Information Needed]

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

[More Information Needed]

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```bibtex
@article{saharia2022photorealistic,
  title={Photorealistic text-to-image diffusion models with deep language understanding},
  author={Saharia, Chitwan and Chan, William and Saxena, Saurabh and Li, Lala and Whang, Jay and Denton, Emily L and Ghasemipour, Kamyar and Gontijo Lopes, Raphael and Karagol Ayan, Burcu and Salimans, Tim and others},
  journal={Advances in Neural Information Processing Systems},
  volume={35},
  pages={36479--36494},
  year={2022}
}
```

### Contributions

Thanks to the Google Research Brain Team for creating this dataset.
````
poetry.lock ADDED
(diff too large to render)
pyproject.toml ADDED
```toml
[tool.poetry]
name = "huggingface-datasets-drawbench"
version = "0.1.0"
description = ""
authors = ["Shunsuke KITADA <[email protected]>"]
readme = "README.md"

[tool.poetry.dependencies]
python = "^3.9"
datasets = "^2.14.5"

[tool.poetry.group.dev.dependencies]
ruff = "^0.0.291"
black = "^23.9.1"
mypy = "^1.5.1"
pytest = "^7.4.2"

[tool.ruff]
target-version = "py38"
ignore = [
    "E501", # line too long, handled by black
]

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```
tests/DrawBench_test.py ADDED
```python
import datasets as ds
import pytest


@pytest.fixture
def dataset_path() -> str:
    return "DrawBench.py"


def test_load_dataset(dataset_path: str, expected_num_test: int = 200):
    # DrawBench consists of 200 prompts in total, all in the test split.
    dataset = ds.load_dataset(path=dataset_path)
    assert dataset["test"].num_rows == expected_num_test
```
tests/__init__.py ADDED
(empty file)