---
language:
  - nb
  - nn
  - 'no'
license: cc0-1.0
task_categories:
  - automatic-speech-recognition
tags:
  - dialects
  - podcasts
  - live-events
  - conversational
  - speech
---

# Dataset Card for Sprakbanken/nb_samtale

## Dataset Description

### Dataset Summary

NB Samtale is a speech corpus made by the Language Bank at the National Library of Norway. The corpus contains orthographically transcribed speech from podcasts and recordings of live events at the National Library. The corpus is intended as an open dataset for Automatic Speech Recognition (ASR) development, and is specifically aimed at improving ASR systems' handling of conversational speech.

The corpus consists of 12,080 segments, totalling 24 hours of transcribed speech from 69 speakers. The corpus ensures both gender and dialect variation: speakers from five broad dialect areas are represented. Both Bokmål and Nynorsk transcriptions are present, with Nynorsk making up approximately 25% of the transcriptions.

We greatly appreciate feedback and suggestions for improvements.

### Supported Tasks

  • Automatic Speech Recognition for verbatim transcriptions of conversational speech, as well as for standardised, orthographic transcriptions.
  • Speaker Diarization: The sentence segments all have a speaker ID, which is unique per speaker, and the same speaker will have the same speaker ID across source files.
  • Audio classification: Each segment could be classified with one of the metadata features.

### Languages

The transcription texts are in either Norwegian Bokmål or Norwegian Nynorsk.

The audio is in Norwegian, in the speakers' respective dialects. We have categorized them into five dialect areas:

| Dialect area (en) | Dialect area (nb) | Counties |
|---|---|---|
| Eastern Norway | Østlandet | Agder, Innlandet, Oslo, Vestfold og Telemark, Viken |
| Southwest Norway | Sørvestlandet | Rogaland |
| Western Norway | Vestlandet | Møre og Romsdal, Vestland |
| Central Norway | Midt-Norge | Trøndelag |
| Northern Norway | Nord-Norge | Nordland, Troms og Finnmark |
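
The county-to-area grouping above can be written as a small lookup table. This is an illustrative sketch only: the dataset itself stores the dialect area directly as a ClassLabel index, not the speaker's county.

```python
# County -> broad dialect area, per the table above.
# Illustrative only; the dataset stores the dialect area, not the county.
COUNTY_TO_DIALECT_AREA = {
    "Agder": "Østlandet",
    "Innlandet": "Østlandet",
    "Oslo": "Østlandet",
    "Vestfold og Telemark": "Østlandet",
    "Viken": "Østlandet",
    "Rogaland": "Sørvestlandet",
    "Møre og Romsdal": "Vestlandet",
    "Vestland": "Vestlandet",
    "Trøndelag": "Midt-Norge",
    "Nordland": "Nord-Norge",
    "Troms og Finnmark": "Nord-Norge",
}

print(COUNTY_TO_DIALECT_AREA["Rogaland"])  # Sørvestlandet
```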

## Dataset Structure

### Data Instances

A data point is an audio segment, consisting of a relative path to the .wav file and its transcription. Additional information is provided about the speaker, the orthographic standard of the transcription, whether the segment overlaps with the previous or next segment, and the setting of the recording.

```python
{'source_file_id': 'nb-1',
 'segment_id': '0008970-0013860',
 'segment_order': 0,
 'duration': 4.89,
 'overlap_previous': False,
 'overlap_next': False,
 'speaker_id': 'P36',
 'gender': 1,
 'dialect': 0,
 'orthography': 0,
 'source_type': 0,
 'file_name': 'nb-1_0008970-0013860.wav',
 'transcription': 'hallo og velkommen hit til Nasjonalbiblioteket.',
 'audio': {
    'path': 'data/train/bm/nb-1_0008970-0013860.wav',
    'array': array([-0.00033569,  0.00222778, -0.0005188 , ...,  0.00057983,  0.0005188 ]),
    'sampling_rate': 16000}
  }
```
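
Note that `segment_id` appears to encode the segment's start and end timestamps in milliseconds, so the duration can be recovered from the id alone (in the instance above, 0013860 − 0008970 = 4890 ms = 4.89 s). A minimal sketch, assuming the millisecond interpretation:

```python
def segment_duration(segment_id: str) -> float:
    """Duration in seconds from a '{starttime}-{endtime}' segment id,
    where the timestamps appear to be in milliseconds."""
    start_ms, end_ms = (int(part) for part in segment_id.split("-"))
    return (end_ms - start_ms) / 1000.0

print(segment_duration("0008970-0013860"))  # 4.89, matching the 'duration' field
```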

### Data Fields

| data field | description | value type / example |
|---|---|---|
| `source_file_id` | original file the segment appears in | (str), e.g. `50f-X`, `tr-X` or `nb-X`, where X is a number |
| `segment_id` | segment start and end timestamp | `{starttime}-{endtime}` (str) |
| `segment_order` | order of segment in the original file | (int) |
| `duration` | duration of segment in seconds | (float) |
| `overlap_previous` | whether the beginning of the segment overlaps with the previous segment | `True` or `False` (bool) |
| `overlap_next` | whether the end of the segment overlaps with the next segment | `True` or `False` (bool) |
| `speaker_id` | speaker ID for the speaker transcribed in the segment | `P0`–`P69` (str) |
| `gender` | speaker's binary gender (female or male), mapped to a HuggingFace datasets ClassLabel index number | 0: `f` or 1: `m` (int) |
| `dialect` | the speaker's dialect area, as a ClassLabel index number for the areas east (e), north (n), southwest (sw), central (t), west (w) | 0: `e`, 1: `n`, 2: `sw`, 3: `t`, or 4: `w` (int) |
| `orthography` | the written norm of the transcription, either Bokmål (bm) or Nynorsk (nn), as a ClassLabel index number | 0: `bm` or 1: `nn` (int) |
| `source_type` | type of recording of the original file, either live event or podcast, as a ClassLabel index number | 0: `live-event` or 1: `podcast` (int) |
| `file_name` | file name of the audio segment, without the path | `{source_file_id}_{segment_id}.wav` (str) |
| `transcription` | orthographic transcription | text (str) |
| `audio` | the audio segment data, with the relative file path, the bytes array, and the sampling rate | (dict) |
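
Since `gender`, `dialect`, `orthography` and `source_type` are stored as ClassLabel indices, the integers can be mapped back to their string labels. Below is a minimal, self-contained sketch of that mapping; with the Hugging Face `datasets` library, the equivalent decoding is also available through each feature's `int2str` method. The `decode_labels` helper is illustrative, not part of the dataset:

```python
# String labels behind the integer ClassLabel fields, per the table above.
GENDER = ["f", "m"]
DIALECT = ["e", "n", "sw", "t", "w"]
ORTHOGRAPHY = ["bm", "nn"]
SOURCE_TYPE = ["live-event", "podcast"]

def decode_labels(segment: dict) -> dict:
    """Return a copy of a segment with its ClassLabel ints replaced by strings.
    Illustrative helper, not part of the dataset itself."""
    out = dict(segment)
    out["gender"] = GENDER[segment["gender"]]
    out["dialect"] = DIALECT[segment["dialect"]]
    out["orthography"] = ORTHOGRAPHY[segment["orthography"]]
    out["source_type"] = SOURCE_TYPE[segment["source_type"]]
    return out

# Values from the data instance above:
print(decode_labels({"gender": 1, "dialect": 0, "orthography": 0, "source_type": 0}))
# {'gender': 'm', 'dialect': 'e', 'orthography': 'bm', 'source_type': 'live-event'}
```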

### Data Splits

The data is split into a train, validation, and test set, stratified on three parameters: source type, gender and dialect. Gender and dialect naturally refer to the gender and dialect of the speakers. The data has not been split on speaker ID, which would have avoided speaker overlap between the sets, because this proved impossible while still maintaining a decent distribution of the other parameters, especially dialect variation.

The source type refers to whether the source material is one of the two podcasts (50f, tr) or a National Library live event (nb). The two types have different features. The podcasts are overall good quality studio recordings with little background noise, echo and such. The live events are recorded in rooms or reception halls at the National Library and have more background noise, echo and inconsistent audio quality. Many also have a live audience.
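
One way to sanity-check such a stratified split is to compare each parameter's label distribution across the splits. The sketch below uses made-up toy labels; with the real dataset, the label lists would come from the split metadata:

```python
from collections import Counter

def label_distribution(labels):
    """Each label's share of the total, rounded to two decimals."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: round(count / total, 2) for label, count in counts.items()}

# Toy dialect labels for two hypothetical splits; a good stratified split
# should yield roughly the same distribution in each.
train_dialects = ["e", "e", "n", "sw", "t", "w", "e", "n"]
test_dialects = ["e", "e", "n", "sw"]
print(label_distribution(train_dialects))
print(label_distribution(test_dialects))
```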

## Dataset Creation

### Source Data

The audio is collected from podcasts we have been permitted to share openly – namely 50 forskere from UiT and Trondheim kommunes podkast from Trondheim municipality – as well as some of The National Library’s own recordings of live events. The podcasts are studio recordings, while the National Library events take place in rooms and reception halls at the National Library, sometimes in front of an audience.

#### Who are the source language producers?

Guests and hosts of the respective recordings: either podcasts produced in a studio, or lectures, debates and conversations at public live events.

### Annotations

#### Annotation process

The recordings were segmented and transcribed in the transcription software ELAN. First, the recordings were automatically segmented and transcribed with speaker diarization, using a Norwegian ASR system created by the AI lab at the National Library of Norway, which separated the speakers into individual transcription tiers. These segments and transcriptions were then manually corrected by a transcriber according to a set of guidelines. All the manual transcriptions were reviewed by a second person to avoid substantial discrepancies between transcribers. Finally, all transcriptions were spell-checked and checked for unwanted numbers or special characters.

See the official dataset documentation for more details. The full set of guidelines for segmentation and transcription is given in Norwegian in NB_Samtale_transcription_guidelines.pdf.

#### Who are the annotators?

The Norwegian Language Bank (Språkbanken).

### Personal and Sensitive Information

The data fields gender, dialect and speaker_id pertain to the speakers themselves. A single speaker will have the same speaker_id if they appear in several different source files.

## Considerations for Using the Data

### Discussion of Biases

The recordings were for the most part selected based on the gender and dialect of the speakers, to ensure gender balance and broad dialectal representation. The corpus has a near 50/50 split between male and female speakers (54% male, 46% female). The Norwegian dialects have been divided into five broad dialect areas, all of which are represented in the corpus. However, Eastern Norwegian has the greatest representation, at about 50% of the speaking time, while each of the other areas falls between 8% and 20%.

## Additional Information

### Dataset Curators

The content of the dataset was created by the Norwegian Language Bank (Språkbanken) at the National Library of Norway. Marie Iversdatter Røsok, Ingerid Løyning Dale and Per Erik Solberg contributed to creating this dataset. Thanks to the Hugging Face team for assistance.

### Licensing Information

The NB Samtale dataset is released under the CC0 1.0 license, i.e., it is in the public domain and can be used for any purpose and redistributed without permission.