Add nsfw score distribution
README.md CHANGED
@@ -220,7 +220,7 @@ Below are three random rows from `metadata.parquet`.
|`seed`|`uint32`| Random seed used to generate this image.|
|`step`|`uint16`| Step count (hyperparameter).|
|`cfg`|`float32`| Guidance scale (hyperparameter).|
-|`sampler`|`uint8`| Sampler method (hyperparameter). Mapping: {1: "ddim", 2: "plms", 3: "k_euler", 4: "k_euler_ancestral", 5: "k_heun", 6: "k_dpm_2", 7: "k_dpm_2_ancestral", 8: "k_lms", 9: "others"}
+|`sampler`|`uint8`| Sampler method (hyperparameter). Mapping: `{1: "ddim", 2: "plms", 3: "k_euler", 4: "k_euler_ancestral", 5: "k_heun", 6: "k_dpm_2", 7: "k_dpm_2_ancestral", 8: "k_lms", 9: "others"}`.|
|`width`|`uint16`|Image width.|
|`height`|`uint16`|Image height.|
|`user_name`|`string`|The unique discord ID's SHA256 hash of the user who generated this image. For example, the hash for `xiaohk#3146` is `e285b7ef63be99e9107cecd79b280bde602f17e0ca8363cb7a0889b67f0b5ed0`. "deleted_account" refers to users who have deleted their accounts. None means the image was deleted before we scraped it for the second time.|

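Since `sampler` is stored as a `uint8` code, decoding it back to a sampler name makes the metadata easier to read. The sketch below is illustrative only: it assumes `metadata.parquet` has already been downloaded to the working directory and uses pandas, which is not required by the dataset itself.

```python
# Illustrative sketch: decode the integer `sampler` codes in metadata.parquet.
# Assumes metadata.parquet has been downloaded to the working directory.
import pandas as pd

SAMPLER_NAMES = {
    1: "ddim", 2: "plms", 3: "k_euler", 4: "k_euler_ancestral",
    5: "k_heun", 6: "k_dpm_2", 7: "k_dpm_2_ancestral", 8: "k_lms", 9: "others",
}

metadata = pd.read_parquet("metadata.parquet")
metadata["sampler_name"] = metadata["sampler"].map(SAMPLER_NAMES)
print(metadata[["seed", "step", "cfg", "sampler_name"]].head())
```
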
@@ -229,7 +229,9 @@ Below are three random rows from `metadata.parquet`.
|`prompt_nsfw`|`float32`|Likelihood of a prompt being NSFW. Scores are predicted by the library [Detoxify](https://github.com/unitaryai/detoxify). Each score represents the maximum of `toxicity` and `sexual_explicit` (range from 0 to 1).|

> **Warning**
-> Although the Stable Diffusion model has an NSFW filter that automatically blurs user-generated NSFW images, this NSFW filter is not perfect—DiffusionDB still contains some NSFW images. Therefore, we compute and provide the NSFW scores for images and prompts using the state-of-the-art models. The distribution of these scores
+> Although the Stable Diffusion model has an NSFW filter that automatically blurs user-generated NSFW images, this NSFW filter is not perfect—DiffusionDB still contains some NSFW images. Therefore, we compute and provide NSFW scores for images and prompts using state-of-the-art models. The distribution of these scores is shown below. Please choose an appropriate NSFW score threshold to filter out NSFW images before using DiffusionDB in your projects.
+
+<img src="https://i.imgur.com/1RiGAXL.png" width="100%">

### Data Splits

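Because the warning above asks readers to pick their own NSFW threshold, a hedged example of applying one with the `datasets` library may help. The `large_random_1k` subset name comes from the loading example later in this README; the `train` split name and the 0.2 cutoff are assumptions chosen only for illustration.

```python
# Minimal sketch: keep only rows whose prompt NSFW score is below a chosen cutoff.
# The `train` split name and the 0.2 threshold are illustrative assumptions.
from datasets import load_dataset

dataset = load_dataset('poloclub/diffusiondb', 'large_random_1k')
safe_rows = dataset['train'].filter(lambda row: row['prompt_nsfw'] < 0.2)
print(f"Kept {len(safe_rows)} of {len(dataset['train'])} rows")
```
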
@@ -247,8 +249,8 @@ You can use the Hugging Face [`Datasets`](https://huggingface.co/docs/datasets/q
import numpy as np
from datasets import load_dataset

-# Load the dataset with the `
-dataset = load_dataset('poloclub/diffusiondb', '
+# Load the dataset with the `large_random_1k` subset
+dataset = load_dataset('poloclub/diffusiondb', 'large_random_1k')
```
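As a follow-up to the loading snippet above, one way to sanity-check a subset is to look at the metadata of a single row. This is a sketch only: it assumes the loaded subset exposes the metadata columns listed earlier (`seed`, `step`, `cfg`, `width`, `height`) and a `train` split.

```python
# Usage sketch: inspect the hyperparameters of the first row in the subset.
# Assumes the metadata columns from the table above are present and a `train` split exists.
from datasets import load_dataset

dataset = load_dataset('poloclub/diffusiondb', 'large_random_1k')
row = dataset['train'][0]
print({key: row[key] for key in ('seed', 'step', 'cfg', 'width', 'height')})
```
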

#### Method 2. Use the PoloClub Downloader