Alec committed on
Commit f5b5065
1 Parent(s): 9972fb3

initial analysis

Files changed (1)
  1. README.md +15 -5
README.md CHANGED
@@ -1,14 +1,14 @@
  # gpt-2-output-dataset

  This dataset contains:
- - 250K samples from the WebText test set
- - For each GPT-2 model (trained on the WebText training set), 250K plain samples (temperature 1, no truncation) and 250K samples generated with top-k 40 truncation
+ - 250K documents from the WebText test set
+ - For each GPT-2 model (trained on the WebText training set), 250K random samples (temperature 1, no truncation) and 250K samples generated with Top-K 40 truncation

  We look forward to the research produced using this data!

  ### Download

- For each, we have a training split of 500K total examples, as well as validation and test splits of 10K examples.
+ For each model, we have a training split of 250K generated examples, as well as validation and test splits of 5K examples.

  All data is located in Google Cloud Storage, under the directory `gs://gpt-2/output-dataset/v1`.

@@ -30,7 +30,7 @@ We've provided a script to download all of them, in `download_dataset.py`.

  ### Detectability baselines

- We're interested in seeing research in detectability of our model generations.
+ We're interested in seeing research on the detectability of GPT-2 model family generations.

  We've provided a starter baseline which trains a logistic regression detector on TF-IDF unigram and bigram features, in `baseline.py`.

@@ -41,6 +41,16 @@ We've provided a starter baseline which trains a logistic regression detector on
  | 762M | 77.16% | 94.43% |
  | 1542M | 74.31% | 92.69% |

+ ### Initial Analysis
+
+ ![Impact of Document Length](https://i.imgur.com/PZ3GOeS.png)
+
+ Shorter documents are harder to detect. Detection accuracy on a short document of 500 characters (a long paragraph) is about 15% lower.
+
+ ![Part of Speech Analysis](https://i.imgur.com/eH9Ogqo.png)
+
+ Truncated sampling, which is commonly used for high-quality generations from the GPT-2 model family, results in a shift in the part-of-speech distribution of the generated text compared to real text. A clear example is the underuse of proper nouns and the overuse of pronouns, which are more generic. This shift contributes to the 8% to 18% higher detection rate of Top-K samples compared to random samples across models.
+
  ### Data removal requests

- If you believe your work is included in our dataset and would like us to remove it, please let us know at [email protected].
+ If you believe your work is included in WebText and would like us to remove it, please let us know at [email protected].
 
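To make the README sections in the diff above more concrete, a few illustrative sketches follow. First, downloading: a minimal way to pull one file over HTTPS, assuming the `gs://gpt-2/output-dataset/v1` bucket is publicly readable through `storage.googleapis.com` and that splits follow a `webtext.<split>.jsonl` naming scheme (both are assumptions here; `download_dataset.py` in the repository is the authoritative script).

```python
# Hypothetical download sketch; the HTTPS mirror URL and file name are
# assumptions, not taken from download_dataset.py.
import requests

BASE_URL = "https://storage.googleapis.com/gpt-2/output-dataset/v1"
FILENAME = "webtext.test.jsonl"  # assumed file name

with requests.get(f"{BASE_URL}/{FILENAME}", stream=True) as resp:
    resp.raise_for_status()
    with open(FILENAME, "wb") as out:
        for chunk in resp.iter_content(chunk_size=1 << 20):  # stream in 1 MB chunks
            out.write(chunk)
```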
 
 
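The starter baseline in `baseline.py` is described as a logistic regression detector over TF-IDF unigram and bigram features. Here is a minimal scikit-learn sketch of that idea; the file names, the `"text"` JSON field, and the hyperparameters are assumptions for illustration, not the repository's actual settings.

```python
# Minimal TF-IDF + logistic regression detector in the spirit of baseline.py.
# File names, the "text" field, and hyperparameters are illustrative assumptions.
import json
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def load_texts(path):
    # Each line of a .jsonl file is assumed to be a JSON object with a "text" field.
    with open(path, encoding="utf-8") as f:
        return [json.loads(line)["text"] for line in f]

real = load_texts("webtext.train.jsonl")       # human-written WebText, label 0
fake = load_texts("small-117M.train.jsonl")    # model-generated samples, label 1
texts = real + fake
labels = [0] * len(real) + [1] * len(fake)

detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=5),  # unigram and bigram TF-IDF features
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)
# In practice, evaluate on the held-out validation and test splits, not the training data.
```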
 
 
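The document-length effect shown in the first analysis figure can be probed by scoring a fitted detector separately on short and long documents. In the sketch below, the 500-character threshold mirrors the "long paragraph" example in the README, and `detector` is assumed to be a fitted scikit-learn classifier such as the pipeline sketched above.

```python
# Compare detection accuracy on short vs. long documents.
# `detector` is assumed to be a fitted scikit-learn classifier that accepts raw
# text (e.g. the TF-IDF + logistic regression pipeline above); texts and labels
# should come from a held-out split.
def accuracy_by_length(detector, texts, labels, threshold=500):
    buckets = {"short": ([], []), "long": ([], [])}
    for text, label in zip(texts, labels):
        key = "short" if len(text) < threshold else "long"
        buckets[key][0].append(text)
        buckets[key][1].append(label)
    # Mean accuracy per bucket, skipping empty buckets.
    return {name: detector.score(xs, ys) for name, (xs, ys) in buckets.items() if xs}
```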
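The part-of-speech shift behind the second analysis figure can be checked with any off-the-shelf tagger. The sketch below uses spaCy, which is an assumption; the analysis in this commit may have used a different tagger.

```python
# Compare part-of-speech distributions between two corpora.
# spaCy and its small English model are assumptions; install with
#   pip install spacy && python -m spacy download en_core_web_sm
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")

def pos_distribution(texts):
    counts = Counter()
    for doc in nlp.pipe(texts):
        counts.update(token.pos_ for token in doc if not token.is_space)
    total = sum(counts.values())
    return {pos: n / total for pos, n in counts.items()}

# Toy inputs; in practice pass WebText documents and Top-K 40 samples.
real_dist = pos_distribution(["OpenAI released the WebText samples in San Francisco last week."])
fake_dist = pos_distribution(["It was something that they thought it would be, and it was."])
for tag in ("PROPN", "PRON"):  # proper nouns vs. pronouns, as discussed above
    print(tag, round(real_dist.get(tag, 0.0), 3), round(fake_dist.get(tag, 0.0), 3))
```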