license: cc-by-nc-sa-4.0
task_categories:
- text-classification
language:
- en
tags:
- medical
pretty_name: medical-bios
size_categories:
- 1K<n<10K
Dataset Description
The dataset comprises English biographies labeled with occupations and binary genders. This is an occupation classification task in which gender bias can be studied. It includes a subset of 10,000 biographies (8k train / 1k dev / 1k test) targeting 5 medical occupations (psychologist, surgeon, nurse, dentist, physician), derived from De-Arteaga et al. (2019). We collect and release human rationale annotations for a subset of 100 biographies in two different settings: non-contrastive and contrastive. In the former, the annotators were asked to find the rationale for the question "Why is the person in the following short bio described as an L?", where L is the gold-label occupation, e.g., nurse. In the latter, the question was "Why is the person in the following short bio described as an L rather than an F?", where F (the foil) is another medical occupation, e.g., physician.
You can read more details on the dataset and the annotation process in the paper by Eberle et al. (2023).
Dataset Structure
We provide the standard version of the dataset, where examples look as follows:
{
    "text": "He has been a practicing Dentist for 20 years. He has done BDS. He is currently associated with Sree Sai Dental Clinic in Sowkhya Ayurveda Speciality Clinic, Chennai. ... ",
    "label": 3
}
and the newly curated subset of examples including human rationales, dubbed "rationales", where examples look as follows:
{
    "text": "She is currently practising at Dr Ravindra Ratolikar Dental Clinic in Narayanguda, Hyderabad.",
    "label": 3,
    "foil": 2,
    "words": ["She", "is", "currently", "practising", "at", "Dr", "Ravindra", "Ratolikar", "Dental", "Clinic", "in", "Narayanguda", ",", "Hyderabad", "."],
    "rationales": [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    "contrastive_rationales": [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    "annotations": [[0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ...],
    "contrastive_annotations": [[0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ...]
}
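To make the relation between the per-annotator and the aggregated fields concrete, here is a small illustrative sketch on toy values (not taken from the dataset); it assumes that annotations holds one binary mask per annotator, aligned with words, and shows a simple majority vote as one possible aggregation into a single mask. See the paper for the exact procedure used to produce the released rationales field.
# Toy annotator masks, for illustration only
annotations = [
    [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0],  # annotator 1
    [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0],  # annotator 2
    [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0],  # annotator 3
]
# Majority vote per token position (illustrative aggregation only)
n_annotators = len(annotations)
aggregated = [int(sum(column) > n_annotators / 2) for column in zip(*annotations)]
print(aggregated)  # 1 only at positions marked by most annotators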
Use
To load the standard version of the dataset:
from datasets import load_dataset
dataset = load_dataset("coastalcph/medical-bios", "standard")
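For a quick sanity check, you can then inspect the splits and map the integer label back to an occupation name. The snippet below is a minimal sketch: it assumes a "train" split and that the label column is stored as a ClassLabel feature (otherwise, fall back to the occupation list above).
print(dataset)  # shows the available splits and their sizes
example = dataset["train"][0]               # "train" split name assumed
label_feature = dataset["train"].features["label"]
if hasattr(label_feature, "int2str"):       # ClassLabel features expose the id-to-name mapping
    print(label_feature.int2str(example["label"]))
print(example["text"][:200])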
To load the newly curated subset of examples with human rationales:
from datasets import load_dataset
dataset = load_dataset("coastalcph/medical-bios", "rationales")
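The rationale fields can then be read off against the tokenized text. The sketch below is illustrative and assumes that each entry in rationales and contrastive_rationales corresponds to the token at the same position in words:
split = list(dataset.keys())[0]   # take whichever split is available
example = dataset[split][0]
# Tokens marked in the aggregated (non-contrastive) rationale
print([w for w, r in zip(example["words"], example["rationales"]) if r == 1])
# Tokens marked in the contrastive rationale (gold label vs. foil)
print([w for w, r in zip(example["words"], example["contrastive_rationales"]) if r == 1])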
Citation
@inproceedings{eberle-etal-2023-rather,
title = "Rather a Nurse than a Physician - Contrastive Explanations under Investigation",
author = "Eberle, Oliver and
Chalkidis, Ilias and
Cabello, Laura and
Brandl, Stephanie",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.427",
}