Marissa committed on
Commit 3deddb6
1 Parent(s): 737e343

Limit use of collapsible sections; fix emissions info

Files changed (1)
  1. README.md +5 -54
README.md CHANGED
@@ -20,6 +20,8 @@ model-index:
    - type: perplexity
      name: Perplexity
      value: 21.1
+
+ co2_eq_emissions: 149.2 kg
  ---

  # DistilGPT2
@@ -28,9 +30,6 @@ DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained w

  ## Model Details

- <details>
- <summary>Click to expand</summary>
-
  - **Developed by:** Hugging Face
  - **Model type:** Transformer-based Language Model
  - **Language:** English
@@ -38,13 +37,8 @@ DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained w
  - **Model Description:** DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using [knowledge distillation](#knowledge-distillation) and was designed to be a faster, lighter version of GPT-2.
  - **Resources for more information:** See [this repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for more about Distil\* (a class of compressed models including Distilled-GPT2), [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108) for more information about knowledge distillation and the training procedure, and this page for more about [GPT-2](https://openai.com/blog/better-language-models/).

- </details>
-
  ## Uses, Limitations and Risks

- <details>
- <summary>Click to expand</summary>
-
  #### Limitations and Risks

  <details>
@@ -128,9 +122,6 @@ output = model(encoded_input)

  #### Potential Uses

- <details>
- <summary>Click to expand</summary>
-
  Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model.

  The developers of GPT-2 state in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including:
@@ -141,71 +132,38 @@ The developers of GPT-2 state in their [model card](https://github.com/openai/gp

  Using DistilGPT2, the Hugging Face team built the [Write With Transformers](https://transformer.huggingface.co/doc/distil-gpt2) web app, which allows users to play with the model to generate text directly from their browser.

- </details>
-
  #### Out-of-scope Uses

- <details>
- <summary>Click to expand</summary>
-
  OpenAI states in the GPT-2 [model card](https://github.com/openai/gpt-2/blob/master/model_card.md):

  > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
  >
  > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.

- </details>
-
- </details>
-
  ## Training Data

- <details>
- <summary>Click to expand</summary>
-
  DistilGPT2 was trained using [OpenWebTextCorpus](https://skylion007.github.io/OpenWebTextCorpus/), an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the [OpenWebTextCorpus Dataset Card](https://huggingface.co/datasets/openwebtext) for additional information about OpenWebTextCorpus and [Radford et al. (2019)](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) for additional information about WebText.

- </details>
-
  ## Training Procedure

- <details>
- <summary>Click to expand</summary>
-
  The texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108).

- </details>
-
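A minimal sketch of the shared byte-level BPE tokenizer mentioned in the training procedure, assuming the `transformers` library is available; it is illustrative only:

```python
from transformers import AutoTokenizer

# DistilGPT2 reuses GPT-2's byte-level BPE tokenizer, so any input string
# can be encoded into subword pieces without an <unk> token.
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
encoded = tokenizer("Byte-level BPE encodes any string into subword pieces.")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))  # subword pieces
print(encoded["input_ids"])                                   # integer token ids
```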
  ## Evaluation Results

- <details>
- <summary>Click to expand</summary>
-
  The creators of DistilGPT2 [report](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) that, on the [WikiText-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).

- </details>
-
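For reference, perplexity is the exponential of the average per-token cross-entropy, so the reported numbers translate to per-token losses as in this small sketch:

```python
import math

# Perplexity = exp(average per-token cross-entropy, in nats), so the reported
# WikiText-103 scores correspond to roughly these per-token losses.
for name, ppl in [("GPT-2", 16.3), ("DistilGPT2", 21.1)]:
    print(f"{name}: perplexity {ppl} = exp({math.log(ppl):.2f} nats per token)")
```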
  ## Carbon Emissions

- <details>
- <summary>Click to expand</summary>
-
  *Emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*

- - **Hardware Type:** 16GB V100
- - **Hours used:** 8
+ - **Hardware Type:** 8 16GB V100
+ - **Hours used:** 168 (1 week)
  - **Cloud Provider:** Azure
  - **Compute Region:** unavailable, assumed East US for calculations
- - **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*: .89 kg eq. CO2
- - **Carbon already offset by cloud provider:** .89 kg eq. CO2
+ - **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2
-
- </details>
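As a rough cross-check, the corrected figure follows the power x time x carbon-intensity formula quoted above. The 300 W per-GPU draw and ~0.37 kg CO2eq/kWh grid intensity for East US used in this sketch are assumptions, not values stated in the card:

```python
# Illustrative cross-check of the corrected emissions estimate.
num_gpus = 8            # 8 x 16GB V100, as listed above
gpu_power_kw = 0.300    # assumed per-GPU power draw (V100 TDP)
hours = 168             # 1 week, as listed above
grid_intensity = 0.37   # assumed kg CO2eq per kWh for Azure East US

energy_kwh = num_gpus * gpu_power_kw * hours
emissions_kg = energy_kwh * grid_intensity
print(f"{energy_kwh:.1f} kWh -> {emissions_kg:.1f} kg CO2eq")  # ~403.2 kWh -> ~149.2 kg
```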

  ## Citation

- <details>
- <summary>Click to expand</summary>
-
  ```bibtex
  @inproceedings{sanh2019distilbert,
    title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
@@ -215,17 +173,10 @@ The creators of DistilGPT2 [report](https://github.com/huggingface/transformers/
  }
  ```

- </details>
-
  ## Glossary

- <details>
- <summary>Click to expand</summary>
-
  - <a name="knowledge-distillation">**Knowledge Distillation**</a>: As described in [Sanh et al. (2019)](https://arxiv.org/pdf/1910.01108.pdf), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see [Bucila et al. (2006)](https://www.cs.cornell.edu/~caruana/compression.kdd06.pdf) and [Hinton et al. (2015)](https://arxiv.org/abs/1503.02531).

- </details>
-
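To make the glossary definition concrete, here is a minimal sketch of the soft-target part of a distillation loss in PyTorch; the temperature and loss scaling are illustrative assumptions, not the exact DistilGPT2 training configuration:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Train the student to reproduce the teacher's softened output
    distribution: KL divergence between temperature-scaled softmaxes,
    scaled by T^2 as in Hinton et al. (2015)."""
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t ** 2)

# Example with random logits of shape (batch, vocab_size); 50257 is GPT-2's vocabulary size.
student = torch.randn(4, 50257)
teacher = torch.randn(4, 50257)
print(distillation_loss(student, teacher))
```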
  <a href="https://huggingface.co/exbert/?model=distilgpt2">
  <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
  </a>
 