system (HF staff) committed

Commit 71e0970
Parent(s): 0f1bdf9

Update files from the datasets library (from 1.5.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.5.0

Files changed (1): README.md (+171 -36)
README.md CHANGED
@@ -129,59 +129,131 @@ task_ids:

 ## Dataset Description

- - **Homepage:** https://parl.ai/projects/md_gender/
- - **Repository:** [Needs More Information]
- - **Paper:** https://arxiv.org/abs/2005.00614
  - **Leaderboard:** [Needs More Information]
  - **Point of Contact:** [email protected]

 ### Dataset Summary

- Machine learning models are trained to find patterns in data.
- NLP models can inadvertently learn socially undesirable patterns when training on gender biased text.
- In this work, we propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions:
- bias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker.
- Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
- In addition, we collect a novel, crowdsourced evaluation benchmark of utterance-level gender rewrites.
- Distinguishing between gender bias along multiple dimensions is important, as it enables us to train finer-grained gender bias classifiers.
- We show our classifiers prove valuable for a variety of important applications, such as controlling for gender bias in generative models,
- detecting gender bias in arbitrary text, and shed light on offensive language in terms of genderedness.

 ### Supported Tasks and Leaderboards

- [Needs More Information]

 ### Languages

- The data is in English (`en`)

 ## Dataset Structure

 ### Data Instances

- [Needs More Information]

 ### Data Fields

- The data has the following features.

 For the `new_data` config:
 - `text`: the text to be classified
 - `original`: the text before reformulation
 - `labels`: a `list` of classification labels, with possible values including `ABOUT:female`, `ABOUT:male`, `PARTNER:female`, `PARTNER:male`, `SELF:female`.
- - `class_type`: a classification label, with possible values including `about`, `partner`, `self`.
- - `turker_gender`: a classification label, with possible values including `man`, `woman`, `nonbinary`, `prefer not to say`, `no answer`.

- For the other annotated datasets:
 - `text`: the text to be classified.
- - `gender`: a classification label, with possible values including `gender-neutral`, `female`, `male`.

- For the `_inferred` configurations:
 - `text`: the text to be classified.
 - `binary_label`: a classification label, with possible values including `ABOUT:female`, `ABOUT:male`.
- - `binary_score`: a score between 0 and 1.
 - `ternary_label`: a classification label, with possible values including `ABOUT:female`, `ABOUT:male`, `ABOUT:gender-neutral`.
- - `ternary_score`: a score between 0 and 1.

 ### Data Splits

@@ -192,64 +264,127 @@ The different parts of the data can be accessed through the different configurations:
 - `image_chat`: sentences about images annotated with ABOUT gender based on gender information from the entities in the image
 - `convai2_inferred`, `light_inferred`, `opensubtitles_inferred`, `yelp_inferred`: data from several source datasets with ABOUT annotations inferred by a trained classifier.

 ## Dataset Creation

 ### Curation Rationale

- [Needs More Information]

 ### Source Data

 #### Initial Data Collection and Normalization

- [Needs More Information]

 #### Who are the source language producers?

- [Needs More Information]

 ### Annotations

 #### Annotation process

- [Needs More Information]

 #### Who are the annotators?

- [Needs More Information]

 ### Personal and Sensitive Information

- [Needs More Information]

 ## Considerations for Using the Data

 ### Social Impact of Dataset

- [Needs More Information]

 ### Discussion of Biases

- [Needs More Information]

 ### Other Known Limitations

- [Needs More Information]

 ## Additional Information

 ### Dataset Curators

- [Needs More Information]

 ### Licensing Information

- [Needs More Information]

 ### Citation Information

- [Needs More Information]

 ### Contributions

- Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset.
 
 ## Dataset Description

+ - **Homepage:** [ParlAI MD Gender Project Page](https://parl.ai/projects/md_gender/)
+ - **Repository:** [ParlAI GitHub MD Gender Repository](https://github.com/facebookresearch/ParlAI/tree/master/projects/md_gender)
+ - **Paper:** [Multi-Dimensional Gender Bias Classification](https://www.aclweb.org/anthology/2020.emnlp-main.23.pdf)
  - **Leaderboard:** [Needs More Information]
  - **Point of Contact:** [email protected]

 ### Dataset Summary

+ The Multi-Dimensional Gender Bias Classification dataset is based on a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions: bias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker. It contains seven large-scale datasets automatically annotated for gender information (the original project has eight, but the Wikipedia set is not included in the HuggingFace distribution), one crowdsourced evaluation benchmark of utterance-level gender rewrites, a list of gendered names, and a list of gendered words in English.

 ### Supported Tasks and Leaderboards

+ - `text-classification-other-gender-bias`: The dataset can be used to train a model to classify various kinds of gender bias. Model performance is evaluated by the accuracy of the predicted labels against the gold labels in the dataset. Dinan et al.'s (2020) Transformer model achieved an average of 67.13% accuracy on binary gender prediction across the ABOUT, TO, and AS tasks. See the paper for more results.
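As a minimal illustration of that evaluation setup (not code from the paper), accuracy is simply the fraction of predicted labels that match the gold labels; the label lists below are placeholders:

```python
from sklearn.metrics import accuracy_score

# Placeholder gold and predicted class indices; the integer labels of any
# md_gender_bias config could be plugged in here.
gold = [0, 1, 1, 0, 2]
predicted = [0, 1, 0, 0, 2]
print(accuracy_score(gold, predicted))  # 0.8
```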
 
 ### Languages

+ The data is in English as spoken on the various sites where the data was collected. The associated BCP-47 code is `en`.

 ## Dataset Structure

 ### Data Instances

+ The following are examples of data instances from the various configs in the dataset. See the [MD Gender Bias dataset viewer](https://huggingface.co/datasets/viewer/?dataset=md_gender_bias) to explore more examples.
+
+ An example from the `new_data` config:
+ ```
+ {'class_type': 0,
+  'confidence': 'certain',
+  'episode_done': True,
+  'labels': [1],
+  'original': 'She designed monumental Loviisa war cemetery in 1920',
+  'text': 'He designed monumental Lovissa War Cemetery in 1920.',
+  'turker_gender': 4}
+ ```
+
+ An example from the `funpedia` config:
+ ```
+ {'gender': 2,
+  'persona': 'Humorous',
+  'text': 'Max Landis is a comic book writer who wrote Chronicle, American Ultra, and Victor Frankestein.',
+  'title': 'Max Landis'}
+ ```
+
+ An example from the `image_chat` config:
+ ```
+ {'caption': '<start> a young girl is holding a pink umbrella in her hand <eos>',
+  'female': True,
+  'id': '2923e28b6f588aff2d469ab2cccfac57',
+  'male': False}
+ ```
+
+ An example from the `wizard` config:
+ ```
+ {'chosen_topic': 'Krav Maga',
+  'gender': 2,
+  'text': 'Hello. I hope you might enjoy or know something about Krav Maga?'}
+ ```
+
+ An example from the `convai2_inferred` config (the other `_inferred` configs have the same fields, with the exception of `yelp_inferred`, which does not have the `ternary_label` or `ternary_score` fields):
+ ```
+ {'binary_label': 1,
+  'binary_score': 0.6521999835968018,
+  'ternary_label': 2,
+  'ternary_score': 0.4496000111103058,
+  'text': "hi , how are you doing ? i'm getting ready to do some cheetah chasing to stay in shape ."}
+ ```
+
+ An example from the `gendered_words` config:
+ ```
+ {'word_feminine': 'countrywoman',
+  'word_masculine': 'countryman'}
+ ```
+
+ An example from the `name_genders` config:
+ ```
+ {'assigned_gender': 1,
+  'count': 7065,
+  'name': 'Mary'}
+ ```
212
 ### Data Fields

+ The following are the features for each of the configs.

 For the `new_data` config:
 - `text`: the text to be classified
 - `original`: the text before reformulation
 - `labels`: a `list` of classification labels, with possible values including `ABOUT:female`, `ABOUT:male`, `PARTNER:female`, `PARTNER:male`, `SELF:female`.
+ - `class_type`: a classification label, with possible values including `about` (0), `partner` (1), `self` (2).
+ - `turker_gender`: a classification label, with possible values including `man` (0), `woman` (1), `nonbinary` (2), `prefer not to say` (3), `no answer` (4).
+ - `episode_done`: a boolean indicating whether the conversation was completed.
+ - `confidence`: a string indicating the annotator's confidence that the instance is correctly labeled as being ABOUT/TO/AS a man or woman. Possible values are `certain`, `pretty sure`, and `unsure`.

+ For the `funpedia` config:
 - `text`: the text to be classified.
+ - `gender`: a classification label, with possible values including `gender-neutral` (0), `female` (1), `male` (2), indicating the gender of the person being talked about.
+ - `persona`: a string describing the persona assigned to the user when talking about the entity.
+ - `title`: a string naming the entity the text is about.

+ For the `image_chat` config:
+ - `caption`: a string description of the contents of the original image.
+ - `female`: a boolean indicating whether the gender of the person being talked about is female, if the image contains a person.
+ - `id`: a string indicating the id of the image.
+ - `male`: a boolean indicating whether the gender of the person being talked about is male, if the image contains a person.
+
+ For the `wizard` config:
+ - `text`: the text to be classified.
+ - `chosen_topic`: a string indicating the topic of the text.
+ - `gender`: a classification label, with possible values including `gender-neutral` (0), `female` (1), `male` (2), indicating the gender of the person being talked about.
+
+ For the `_inferred` configurations (again, except the `yelp_inferred` config, which does not have the `ternary_label` or `ternary_score` fields):
 - `text`: the text to be classified.
 - `binary_label`: a classification label, with possible values including `ABOUT:female`, `ABOUT:male`.
+ - `binary_score`: a float score between 0 and 1.
 - `ternary_label`: a classification label, with possible values including `ABOUT:female`, `ABOUT:male`, `ABOUT:gender-neutral`.
+ - `ternary_score`: a float score between 0 and 1.
+
+ For the word list (`gendered_words` config):
+ - `word_masculine`: a string indicating the masculine version of the word.
+ - `word_feminine`: a string indicating the feminine version of the word.
+
+ For the gendered name list (`name_genders` config):
+ - `assigned_gender`: an integer, 1 for female, 0 for male.
+ - `count`: an integer.
+ - `name`: a string of the name.
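The integer values in the instances above are `ClassLabel` encodings of these strings. A sketch of decoding them, assuming the feature metadata matches the lists above:

```python
from datasets import load_dataset

funpedia = load_dataset("md_gender_bias", "funpedia", split="train")

# ClassLabel features carry their own int-to-string mapping.
gender = funpedia.features["gender"]
print(gender.names)                           # e.g. ['gender-neutral', 'female', 'male']
print(gender.int2str(funpedia[0]["gender"]))  # decode the first instance's label
```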
 
 ### Data Splits

 - `image_chat`: sentences about images annotated with ABOUT gender based on gender information from the entities in the image
 - `convai2_inferred`, `light_inferred`, `opensubtitles_inferred`, `yelp_inferred`: data from several source datasets with ABOUT annotations inferred by a trained classifier.

+ The table below gives the label counts per source and dimension (M = masculine, F = feminine, N = gender-neutral, U = unknown). The dashed row separates the training sources from the MDGender (`new_data`) evaluation benchmark.
+
+ | Split      | M    | F   | N    | U    | Dimension |
+ | ---------- | ---- | --- | ---- | ---- | --------- |
+ | Image Chat | 39K  | 15K | 154K | -    | ABOUT     |
+ | Funpedia   | 19K  | 3K  | 1K   | -    | ABOUT     |
+ | Wizard     | 6K   | 1K  | 1K   | -    | ABOUT     |
+ | Yelp       | 1M   | 1M  | -    | -    | AS        |
+ | ConvAI2    | 22K  | 22K | -    | 86K  | AS        |
+ | ConvAI2    | 22K  | 22K | -    | 86K  | TO        |
+ | OpenSub    | 149K | 69K | -    | 131K | AS        |
+ | OpenSub    | 95K  | 45K | -    | 209K | TO        |
+ | LIGHT      | 13K  | 8K  | -    | 83K  | AS        |
+ | LIGHT      | 13K  | 8K  | -    | 83K  | TO        |
+ | ---------- | ---- | --- | ---- | ---- | --------- |
+ | MDGender   | 384  | 401 | -    | -    | ABOUT     |
+ | MDGender   | 396  | 371 | -    | -    | AS        |
+ | MDGender   | 411  | 382 | -    | -    | TO        |
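For a rough spot check of the table, the label tallies of a small config can be recomputed. A sketch for `funpedia` (split name assumed to be `train`; the table aggregates all splits, so per-split tallies will be somewhat lower):

```python
from collections import Counter
from datasets import load_dataset

funpedia = load_dataset("md_gender_bias", "funpedia", split="train")

# Tally ABOUT gender labels; expect far more masculine than feminine
# examples, per the Funpedia row above.
labels = funpedia.features["gender"]
counts = Counter(labels.int2str(g) for g in funpedia["gender"])
print(counts)
```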
 
 ## Dataset Creation

 ### Curation Rationale

+ The curators chose to annotate existing corpora in order to make their classifiers reliable on all dimensions (ABOUT/TO/AS) and across multiple domains. However, none of the existing datasets cover all three dimensions at the same time, and many of the gender labels are noisy. To enable reliable evaluation, the curators collected a specialized corpus, found in the `new_data` config, which acts as a gold-labeled dataset for the masculine and feminine classes.

 ### Source Data

 #### Initial Data Collection and Normalization

+ For the `new_data` config, the curators collected conversations between two speakers. Each speaker was provided with a persona description containing gender information, then tasked with adopting that persona and having a conversation. They were also provided with small sections of a biography from Wikipedia as the conversation topic, in order to encourage crowdworkers to discuss ABOUT/TO/AS gender information. To ensure that each utterance contains ABOUT/TO/AS gender information, the curators asked a second set of annotators to rewrite each utterance to make it very clear that they are speaking ABOUT a man or a woman, speaking AS a man or a woman, and speaking TO a man or a woman.

 #### Who are the source language producers?

+ This dataset was collected from crowdworkers on Amazon's Mechanical Turk. All workers are English-speaking and located in the United States.
+
+ | Reported Gender   | Percent of Total |
+ | ----------------- | ---------------- |
+ | Man               | 67.38            |
+ | Woman             | 18.34            |
+ | Non-binary        | 0.21             |
+ | Prefer not to say | 14.07            |

 ### Annotations

 #### Annotation process

+ For the `new_data` config, annotators were asked to label how confident they are that someone else could predict the given gender label, allowing for flexibility between explicit genderedness (like the use of "he" or "she") and statistical genderedness.
+
+ Many of the annotated datasets contain cases where the ABOUT, AS, and TO labels are not provided (i.e., unknown). In such instances, the curators apply one of two strategies: for data for which the ABOUT label is unknown, they impute it using a classifier trained only on other Wikipedia data for which this label is provided; data without a TO or AS label is assigned one at random, choosing between masculine and feminine with equal probability (a sketch of this follows below). Details of how each of the eight training datasets was annotated are as follows:
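A sketch of the random-assignment strategy just described; the field names and dictionary format here are hypothetical, not the curators' actual pipeline:

```python
import random

rng = random.Random(0)  # fixed seed so the sketch is reproducible

def impute_missing_label(example, dimension):
    """Assign a masculine or feminine label uniformly at random when the
    TO or AS label is unknown. `dimension` is "TO" or "AS"; the
    `<dimension>_label` key is a hypothetical field name."""
    key = f"{dimension}_label"
    if example.get(key) is None:
        example[key] = rng.choice([f"{dimension}:male", f"{dimension}:female"])
    return example

print(impute_missing_label({"text": "hello there"}, "AS"))
```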
+ 1. Wikipedia: To annotate ABOUT, the curators used a Wikipedia dump and extracted biography pages using named entity recognition. They labeled pages with a gender based on the number of gendered pronouns (he vs. she vs. they) and labeled each paragraph in the page with this label for the ABOUT dimension.
+
+ 2. Funpedia: Funpedia ([Miller et al., 2017](https://www.aclweb.org/anthology/D17-2014/)) contains Wikipedia sentences rephrased in a more conversational way. The curators retained only biography-related sentences and annotated them as for Wikipedia to give ABOUT labels.
+
+ 3. Wizard of Wikipedia: [Wizard of Wikipedia](https://parl.ai/projects/wizard_of_wikipedia/) contains two people discussing a topic from Wikipedia. The curators retained only the conversations on Wikipedia biographies and annotated them to create ABOUT labels.
+
+ 4. ImageChat: [ImageChat](https://klshuster.github.io/image_chat/) contains conversations discussing the contents of an image. The curators used the [Xu et al. image captioning system](https://github.com/AaronCCWong/Show-Attend-and-Tell) to identify the contents of an image and select gendered examples.
+
+ 5. Yelp: The curators used the Yelp reviewer gender predictor developed by [Subramanian et al., 2018](https://arxiv.org/pdf/1811.00552.pdf) and retained reviews for which the classifier is very confident; this creates labels for the content creator of the review (AS). They imputed ABOUT labels on this dataset using a classifier trained on datasets 1-4.
+
+ 6. ConvAI2: [ConvAI2](https://parl.ai/projects/convai2/) contains persona-based conversations. Many personas contain sentences such as "I am an old woman" or "My name is Bob", which allows annotators to annotate the gender of the speaker (AS) and addressee (TO) with some confidence. Many of the personas have unknown gender. The curators imputed ABOUT labels on this dataset using a classifier trained on datasets 1-4.
+
+ 7. OpenSubtitles: [OpenSubtitles](http://www.opensubtitles.org/) contains subtitles for movies in different languages. The curators retained English subtitles that contain a character name or identity. They annotated the character's gender using gender kinship terms (such as "daughter") and a gender probability distribution calculated by counting masculine and feminine baby names in the United States (see the sketch after this list). Using the character's gender, they produced labels for the AS dimension. They produced labels for the TO dimension by taking the gender of the next character to speak, if there is another utterance in the conversation; otherwise, they take the gender of the last character to speak. They imputed ABOUT labels on this dataset using a classifier trained on datasets 1-4.
+
+ 8. LIGHT: [LIGHT](https://parl.ai/projects/light/) contains persona-based conversations. Similarly to ConvAI2, annotators labeled the gender of each persona, giving labels for the speaker (AS) and speaking partner (TO). The curators imputed ABOUT labels on this dataset using a classifier trained on datasets 1-4.
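The baby-name probability idea in step 7 can be approximated from the `name_genders` config documented above. A sketch, assuming a `train` split:

```python
from collections import defaultdict
from datasets import load_dataset

names = load_dataset("md_gender_bias", "name_genders", split="train")

# Aggregate counts per name: index 0 = male, index 1 = female,
# matching the `assigned_gender` field.
tallies = defaultdict(lambda: [0, 0])
for row in names:
    tallies[row["name"].lower()][row["assigned_gender"]] += row["count"]

def p_female(name):
    """Probability that `name` is feminine, from US baby-name counts."""
    male, female = tallies.get(name.lower(), (0, 0))
    total = male + female
    return female / total if total else None

print(p_female("Mary"))
```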
 
 #### Who are the annotators?

+ This dataset was annotated by crowdworkers on Amazon's Mechanical Turk. All workers are English-speaking and located in the United States.

 ### Personal and Sensitive Information

+ For privacy reasons, the curators did not associate the self-reported gender of the annotator with the labeled examples in the dataset and only report these statistics in aggregate.

 ## Considerations for Using the Data

 ### Social Impact of Dataset

+ This dataset is intended for applications such as controlling for gender bias in generative models, detecting gender bias in arbitrary text, and classifying text as offensive based on its genderedness.

 ### Discussion of Biases

+ Over two-thirds of annotators identified as men, which may introduce biases into the dataset.
+
+ Wikipedia is also well known to have gender bias in the equity of its biographical coverage and lexical bias in noun references to women (see the paper's appendix for citations).

 ### Other Known Limitations

+ The limitations of the Multi-Dimensional Gender Bias Classification dataset have not yet been investigated, but the curators acknowledge that more work is required to address the intersectionality of gender identities, i.e., when gender interacts non-additively with other identity characteristics. The curators point out that negative gender stereotyping is known to be alternately weakened or reinforced by the presence of social attributes like dialect, class, and race, and that these differences have been found to affect gender classification in images and sentence encoders. See the paper for references.

 ## Additional Information

 ### Dataset Curators

+ Emily Dinan, Angela Fan, Ledell Wu, Jason Weston, Douwe Kiela, and Adina Williams at Facebook AI Research. Angela Fan is also affiliated with Laboratoire Lorrain d'Informatique et Applications (LORIA).

 ### Licensing Information

+ The Multi-Dimensional Gender Bias Classification dataset is licensed under the [MIT License](https://opensource.org/licenses/MIT).

 ### Citation Information

+ ```
+ @inproceedings{dinan-etal-2020-multi,
+     title = "Multi-Dimensional Gender Bias Classification",
+     author = "Dinan, Emily and
+       Fan, Angela and
+       Wu, Ledell and
+       Weston, Jason and
+       Kiela, Douwe and
+       Williams, Adina",
+     booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
+     month = nov,
+     year = "2020",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/2020.emnlp-main.23",
+     doi = "10.18653/v1/2020.emnlp-main.23",
+     pages = "314--331",
+     abstract = "Machine learning models are trained to find patterns in data. NLP models can inadvertently learn socially undesirable patterns when training on gender biased text. In this work, we propose a novel, general framework that decomposes gender bias in text along several pragmatic and semantic dimensions: bias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker. Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information. In addition, we collect a new, crowdsourced evaluation benchmark. Distinguishing between gender bias along multiple dimensions enables us to train better and more fine-grained gender bias classifiers. We show our classifiers are valuable for a variety of applications, like controlling for gender bias in generative models, detecting gender bias in arbitrary text, and classifying text as offensive based on its genderedness.",
+ }
+ ```

 ### Contributions

+ Thanks to [@yjernite](https://github.com/yjernite) and [@mcmillanmajora](https://github.com/mcmillanmajora) for adding this dataset.