---
license: mit
language:
- en
tags:
- history
- holocaust
- oral testimonies
pretty_name: USHMM English Oral Testimonies Dataset
---
# Dataset Card for USHMM English Oral Testimonies Dataset
## Dataset Description
- **Homepage:** https://www.ushmm.org/collections/the-museums-collections/about/oral-history
### Dataset Summary
This is a collection of approximately 1,000 English oral testimonies held by the United States Holocaust Memorial Museum (USHMM). The testimonies were collected during the late twentieth and early twenty-first centuries. They were converted from PDFs into raw text with [Tesseract](https://github.com/tesseract-ocr/tesseract), and the text was post-processed with a Python script that splits it into segments of dialogue. Because this process was automated, mistakes may remain; occasionally, headers and footers appear in the middle of the dialogue. If you find such errors, please submit an issue so they can be corrected.
This dataset was created during William J.B. Mattingly's postdoctoral fellowship at the Smithsonian Institution's Data Science Lab, which included a cross-appointment with the USHMM. The dataset is being used for text classification, named entity recognition, and span categorization.
### Languages
These testimonies are strictly in English, but they were given by non-native speakers. As a result, foreign-language words and phrases may appear throughout the testimonies.
## Dataset Structure
### Data Fields
- **rg:** String, the RG number used by the USHMM to identify specific items in a collection.
- **sequence:** Integer, the unique ID for the dialogue row.
- **text:** String, the actual piece of dialogue.
- **category:** String, either `question` or `answer`, indicating the type of dialogue segment.
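To illustrate the schema above, here is a minimal sketch of what records look like and how the `category` field can be used to pair questions with the answers that follow them. The field names match the dataset card; the RG number and dialogue text are invented examples, not real records.

```python
# Illustrative records following the schema described above.
# The values are invented examples, not actual testimony content.
records = [
    {"rg": "RG-50.030.0001", "sequence": 1,
     "text": "Where were you born?", "category": "question"},
    {"rg": "RG-50.030.0001", "sequence": 2,
     "text": "I was born in Warsaw in 1928.", "category": "answer"},
]

def pair_turns(rows):
    """Pair each question with the answer immediately following it."""
    pairs = []
    for q, a in zip(rows, rows[1:]):
        if q["category"] == "question" and a["category"] == "answer":
            pairs.append((q["text"], a["text"]))
    return pairs

print(pair_turns(records))
# → [('Where were you born?', 'I was born in Warsaw in 1928.')]
```

Pairing adjacent turns this way is a common first step for question-answering or dialogue-modeling tasks on this kind of data.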
### Data Splits
The dataset is not split into train, test, or validation sets.
## Dataset Creation
### Curation Rationale
The dataset was created to make the testimonies more accessible for various machine learning tasks. It is also the first publicly available dataset for Holocaust oral testimonies.
### Source Data
#### Initial Data Collection and Normalization
The initial data was collected from the United States Holocaust Memorial Museum's (USHMM) Oral Testimonies. These testimonies were converted from PDFs into raw text with Tesseract and then post-processed with a Python script to convert them into segments of dialogue.
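The post-processing step described above can be sketched roughly as follows. This is a hypothetical reconstruction, not the curator's actual script: the `Q:`/`A:` speaker markers are an assumption about how the OCR'd transcripts denote turns, and real transcripts may use different conventions or multi-line answers.

```python
import re

def segment_dialogue(raw_text):
    """Split raw OCR text into dialogue segments.

    Assumes each turn starts with a 'Q:' or 'A:' marker at the
    beginning of a line (a hypothetical convention for this sketch).
    """
    segments = []
    for match in re.finditer(r"(?m)^(Q|A):\s*(.+)", raw_text):
        category = "question" if match.group(1) == "Q" else "answer"
        segments.append({
            "sequence": len(segments) + 1,
            "text": match.group(2).strip(),
            "category": category,
        })
    return segments

sample = "Q: When did you arrive?\nA: In the spring of 1945."
print(segment_dialogue(sample))
```

A production version would also need to handle answers spanning multiple lines and strip the stray headers and footers mentioned in the summary.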
#### Who are the source language producers?
The source language producers are the survivors of the Holocaust who shared their experiences during the Oral Testimonies collected by the USHMM.
### Personal and Sensitive Information
The dataset contains personal narratives and testimonies of Holocaust survivors which may include sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset provides invaluable insights into the experiences of Holocaust survivors. It can aid historical studies and also serve as a rich resource for Natural Language Processing tasks related to understanding dialogue, emotion, sentiment, and other semantic and syntactic features of language.
### Discussion of Biases
As the dataset is based on personal testimonies, it is subjective and can contain the personal biases of the people sharing their experiences.
### Other Known Limitations
Since the testimonies were converted from PDFs into raw text using Tesseract, there may be OCR errors. Also, as the testimonies were given by non-native English speakers, there can be instances of imprecise English and foreign language words or phrases.
## Additional Information
### Dataset Curators
The dataset was curated by [William J.B. Mattingly](https://github.com/wjbmattingly).
### Licensing Information
Forthcoming. The dataset metadata above lists the MIT license.
### Citation Information
USHMM Oral Testimonies Dataset. Curated by William J.B. Mattingly.
### Contributions
If you wish to contribute, please feel free to submit an issue.