Jeff Wu committed on
Commit dd14ee2
1 Parent(s): ae0b982

first commit, readme and download

Files changed (3)
  1. .gitignore +2 -0
  2. README.md +40 -1
  3. download_dataset.py +29 -0
.gitignore ADDED
@@ -0,0 +1,2 @@
+ .mypy_cache/
+ data/
README.md CHANGED
@@ -1,2 +1,41 @@
  # gpt-2-output-dataset
- Dataset of GPT-2 outputs for research in detection, biases, and more
+
+ This dataset contains:
+ - 250K samples from the WebText test set
+ - For each GPT-2 model (trained on the WebText training set), 250K plain samples (temperature 1, no truncation) and 250K samples generated with top-k 40 truncation
+
+ We look forward to the research produced using this data!
+
+ ### Download
+
+ For each dataset, we have a training split of 250K samples, as well as validation and test splits of 5K samples.
+
+ For each model, we're releasing both the plain (temperature 1) samples and the top-k 40 samples.
+
+ All data is located in Google Cloud Storage, under the directory `gs://gpt-2/output-dataset/v1`.
+
+ There, you will find files:
+
+ - `webtext.${split}.jsonl`
+ - `small-117M.${split}.jsonl`
+ - `small-117M-k40.${split}.jsonl`
+ - `medium-345M.${split}.jsonl`
+ - `medium-345M-k40.${split}.jsonl`
+ - `large-762M.${split}.jsonl`
+ - `large-762M-k40.${split}.jsonl`
+ - `xl-1542M.${split}.jsonl`
+ - `xl-1542M-k40.${split}.jsonl`
+
+ where split is one of `train`, `test`, and `valid`.
+
+ We've provided a script to download all of them, in `download_dataset.py`.
+
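+ Each file is in JSON Lines format, with one sample per line. As a rough illustration (this loader is a sketch, not part of the release, and it assumes you've already fetched the files into `data/`), reading a split back looks like:
+
+ ```python
+ import json
+
+ # Read the WebText validation split; each line is one JSON record describing a sample.
+ # We only count records here, since the exact record schema isn't documented above.
+ with open('data/webtext.valid.jsonl') as f:
+     records = [json.loads(line) for line in f]
+
+ print(len(records))  # expected: 5000, per the split sizes above
+ ```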
+ ### Detectability baselines
+
+ We're interested in seeing research on the detectability of our model generations.
+
+ We've provided a baseline using logistic regression on TF-IDF features, in `baseline.py`.
+
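+ For illustration only (this is not the contents of `baseline.py`; the `'text'` field name and the webtext-vs-xl-1542M-k40 pairing below are assumptions), a sketch of such a baseline with scikit-learn could look like:
+
+ ```python
+ import json
+
+ from sklearn.feature_extraction.text import TfidfVectorizer
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.pipeline import make_pipeline
+
+ def load_texts(path):
+     # Assumes each JSON record stores its sample under a 'text' key; adjust if the schema differs.
+     with open(path) as f:
+         return [json.loads(line)['text'] for line in f]
+
+ # Human-written WebText vs. one model's generations (pairing chosen only as an example).
+ real_train = load_texts('data/webtext.train.jsonl')
+ fake_train = load_texts('data/xl-1542M-k40.train.jsonl')
+
+ clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
+ clf.fit(real_train + fake_train, [0] * len(real_train) + [1] * len(fake_train))
+
+ real_valid = load_texts('data/webtext.valid.jsonl')
+ fake_valid = load_texts('data/xl-1542M-k40.valid.jsonl')
+ print('validation accuracy:',
+       clf.score(real_valid + fake_valid, [0] * len(real_valid) + [1] * len(fake_valid)))
+ ```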
+ ### Data removal requests
+
+ If you believe your work is included in our dataset and would like us to remove it, please let us know at [email protected].
download_dataset.py ADDED
@@ -0,0 +1,29 @@
+ import os
+ import sys
+ import requests
+ from tqdm import tqdm
+
+ subdir = 'data'
+ if not os.path.exists(subdir):
+     os.makedirs(subdir)
+ subdir = subdir.replace('\\','/') # needed for Windows
+
+ for ds in [
+     'webtext',
+     'small-117M', 'small-117M-k40',
+     'medium-345M', 'medium-345M-k40',
+     'large-762M', 'large-762M-k40',
+     'xl-1542M', 'xl-1542M-k40',
+ ]:
+     for split in ['train', 'valid', 'test']:
+         filename = ds + "." + split + '.jsonl'
+         r = requests.get("https://storage.googleapis.com/gpt-2/output-dataset/v1/" + filename, stream=True)
+
+         with open(os.path.join(subdir, filename), 'wb') as f:
+             file_size = int(r.headers["content-length"])
+             chunk_size = 1000
+             with tqdm(ncols=100, desc="Fetching " + filename, total=file_size, unit_scale=True) as pbar:
+                 # 1k for chunk_size, since Ethernet packet size is around 1500 bytes
+                 for chunk in r.iter_content(chunk_size=chunk_size):
+                     f.write(chunk)
+                     pbar.update(chunk_size)