Sasha Luccioni committed on
Commit
99761b6
1 Parent(s): 6ec304f

Eval metadata batch 2 : Health Fact, Jigsaw Toxicity, LIAR, LJ Speech, MSRA NER, Multi News, NCBI Disease, Poem Sentiment (#4336)


* Eval metadata batch 2 : Health Fact, Jigsaw Toxicity, LIAR, LJ Speech, MSRA NER, Multi News, NCBI Disease, PiQA, Poem Sentiment, QAsper

* Update README.md

fixing header

* Update datasets/piqa/README.md

Co-authored-by: Quentin Lhoest <[email protected]>

* Update README.md

changing MSRA NER metric to `seqeval`

* Update README.md

removing ROUGE args

* Update README.md

removing duplicate information

* Update README.md

removing eval for now

* Update README.md

removing eval for now

Co-authored-by: sashavor <[email protected]>
Co-authored-by: Quentin Lhoest <[email protected]>

Commit from https://github.com/huggingface/datasets/commit/095d12ff7414df118f60e00cd6494299a881743a

Files changed (1):
  1. README.md +25 -11
README.md CHANGED
@@ -18,6 +18,20 @@ source_datasets:
 task_categories:
 - automatic-speech-recognition
 task_ids: []
+train-eval-index:
+- config: main
+  task: automatic-speech-recognition
+  task_id: speech_recognition
+  splits:
+    train_split: train
+  col_mapping:
+    file: path
+    text: text
+  metrics:
+  - type: wer
+    name: WER
+  - type: cer
+    name: CER
 ---
 
 # Dataset Card for lj_speech
@@ -62,33 +76,33 @@ The texts were published between 1884 and 1964, and are in the public domain. Th
 
 ### Supported Tasks and Leaderboards
 
-The dataset can be used to train a model for Automatic Speech Recognition (ASR) or Text-to-Speech (TTS).
-- `other:automatic-speech-recognition`: An ASR model is presented with an audio file and asked to transcribe the audio file to written text.
-The most common ASR evaluation metric is the word error rate (WER).
-- `other:text-to-speech`: A TTS model is given a written text in natural language and asked to generate a speech audio file.
-A reasonable evaluation metric is the mean opinion score (MOS) of audio quality.
+The dataset can be used to train a model for Automatic Speech Recognition (ASR) or Text-to-Speech (TTS).
+- `other:automatic-speech-recognition`: An ASR model is presented with an audio file and asked to transcribe the audio file to written text.
+The most common ASR evaluation metric is the word error rate (WER).
+- `other:text-to-speech`: A TTS model is given a written text in natural language and asked to generate a speech audio file.
+A reasonable evaluation metric is the mean opinion score (MOS) of audio quality.
 The dataset has an active leaderboard which can be found at https://paperswithcode.com/sota/text-to-speech-synthesis-on-ljspeech
 
 ### Languages
 
-The transcriptions and audio are in English.
+The transcriptions and audio are in English.
 
 ## Dataset Structure
 
 ### Data Instances
 
-A data point comprises the path to the audio file, called `file` and its transcription, called `text`.
+A data point comprises the path to the audio file, called `file` and its transcription, called `text`.
 A normalized version of the text is also provided.
 
 ```
 {
-    'id': 'LJ002-0026',
-    'file': '/datasets/downloads/extracted/05bfe561f096e4c52667e3639af495226afe4e5d08763f2d76d069e7a453c543/LJSpeech-1.1/wavs/LJ002-0026.wav',
+    'id': 'LJ002-0026',
+    'file': '/datasets/downloads/extracted/05bfe561f096e4c52667e3639af495226afe4e5d08763f2d76d069e7a453c543/LJSpeech-1.1/wavs/LJ002-0026.wav',
     'audio': {'path': '/datasets/downloads/extracted/05bfe561f096e4c52667e3639af495226afe4e5d08763f2d76d069e7a453c543/LJSpeech-1.1/wavs/LJ002-0026.wav',
     'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
     0.00091553, 0.00085449], dtype=float32),
     'sampling_rate': 22050},
-    'text': 'in the three years between 1813 and 1816,'
+    'text': 'in the three years between 1813 and 1816,'
     'normalized_text': 'in the three years between eighteen thirteen and eighteen sixteen,',
 }
 ```
@@ -182,7 +196,7 @@ Some details about normalization:
 
 #### Who are the annotators?
 
-Recordings by Linda Johnson from LibriVox. Alignment and annotation by Keith Ito.
+Recordings by Linda Johnson from LibriVox. Alignment and annotation by Keith Ito.
 
 ### Personal and Sensitive Information
 
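The `wer` and `cer` metrics declared in the metadata above are both edit-distance ratios: Levenshtein distance between reference and hypothesis, normalized by reference length, computed over words for WER and over characters for CER. A minimal pure-Python sketch of that definition (illustrative only; the function names here are my own, not the implementation the evaluation suite actually uses) could look like:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences, single-row DP."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))  # dp[j] = distance between ref[:i] and hyp[:j]
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i  # prev holds the diagonal cell D[i-1][j-1]
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                        # deletion
                dp[j - 1] + 1,                    # insertion
                prev + (ref[i - 1] != hyp[j - 1]) # substitution / match
            )
            prev = cur
    return dp[n]

def wer(reference, hypothesis):
    """Word error rate: word-level edits / number of reference words."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character error rate: character-level edits / reference length."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)

# e.g., one substituted word out of four reference words:
print(wer("in the three years", "in the tree years"))  # 0.25
```

Note that both rates can exceed 1.0 when the hypothesis contains many insertions, which is why leaderboards usually report them as percentages rather than bounded scores.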