---
license: creativeml-openrail-m
task_categories:
- text-generation
language:
- en
size_categories:
- 100K<n<1M
viewer: false
tags:
- not-for-all-audiences
---

For more info on data collection and the preprocessing algorithm, please see [Fast Anime PromptGen](https://huggingface.co/FredZhang7/anime-anything-promptgen-v2).


## 80K unique prompts
- `safebooru_clean`: Cleaned prompts with upscore ≥ 8 from the Safebooru API
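
If you only need the Safebooru prompts, you can read this column directly. A minimal sketch, assuming the column lives in the `train` split as in the Python example at the end of this card:

```python
import datasets

# Load the dataset from the Hugging Face Hub
dataset = datasets.load_dataset("FredZhang7/anime-prompts-180K")

# Preview a few cleaned Safebooru prompts
for prompt in dataset["train"]["safebooru_clean"][:5]:
    print(prompt)
```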

---

For disclaimers about the Danbooru data, please see [Danbooru Tag Generator](https://huggingface.co/FredZhang7/danbooru-tag-generator).

## 100K unique prompts (each)
- `danbooru_raw`: Raw prompts with upscore ≥ 3 from the Danbooru API
- `danbooru_clean`: Cleaned prompts with upscore ≥ 3 from the Danbooru API
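
Since the Safebooru column holds 80K prompts while the Danbooru columns hold 100K each, the shorter column may be padded (e.g. with empty strings or `None`) when all three sit in one split. A hedged sketch for counting non-empty prompts per column; the padding behavior is an assumption, not something documented here:

```python
import datasets

dataset = datasets.load_dataset("FredZhang7/anime-prompts-180K")
train = dataset["train"]

# Count non-empty entries per column (padding with empty/None values is an assumption)
for column in ["safebooru_clean", "danbooru_clean", "danbooru_raw"]:
    non_empty = sum(1 for value in train[column] if value)  # skips None and ""
    print(f"{column}: {non_empty} non-empty prompts")
```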

---

## Python

Download the dataset and save it locally as `anime_prompts.csv`.

```bash
pip install datasets
```
```python
import csv
import datasets

# Load all three prompt columns from the Hugging Face Hub
dataset = datasets.load_dataset("FredZhang7/anime-prompts-180K")

train = dataset["train"]
safebooru_clean = train["safebooru_clean"]
danbooru_clean = train["danbooru_clean"]
danbooru_raw = train["danbooru_raw"]

# Write the columns side by side into a local CSV file
with open("anime_prompts.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["safebooru_clean", "danbooru_clean", "danbooru_raw"])
    for i in range(len(safebooru_clean)):
        writer.writerow([safebooru_clean[i], danbooru_clean[i], danbooru_raw[i]])
```
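
To sanity-check the export or keep working with the prompts, the CSV can be read back with pandas. A short sketch; pandas is an extra dependency, not required by the dataset itself:

```python
import pandas as pd

# Read the exported CSV back; keep_default_na=False keeps empty cells as "" instead of NaN
df = pd.read_csv("anime_prompts.csv", keep_default_na=False)
print(df.head())
print(f"{len(df)} rows, columns: {list(df.columns)}")
```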