erenfazlioglu committed on
Commit
d2c1c0a
1 Parent(s): 83dd8e6

Update README.md

Files changed (1)
  1. README.md +44 -1
README.md CHANGED
@@ -16,7 +16,50 @@ dataset_info:
    num_examples: 130634
    download_size: 5547933432
    dataset_size: 5933166725.824
+ tags:
+ - audio
+ - text-to-speech
+ - turkish
+ - synthetic-voice
+ language:
+ - tr
+ task_categories:
+ - text-to-speech
  ---
+
  # Dataset Card for "turkishneuralvoice"
 
- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ## Dataset Overview
+
+ **Dataset Name**: Turkish Neural Voice
+
+ **Description**: This dataset contains Turkish audio samples generated using Microsoft Text to Speech services. The dataset includes audio files and their corresponding transcriptions.
+
+ ## Dataset Structure
+
+ **Configs**:
+ - `default`
+
+ **Data Files**:
+ - Split: `train`
+ - Path: `data/train-*`
+
+ **Dataset Info**:
+ - Features:
+   - `audio`: audio file
+   - `transcription`: corresponding text transcription
+ - Splits:
+   - `train`
+     - Number of bytes: `5,933,166,725.824`
+     - Number of examples: `130,634`
+ - Download size: `5,547,933,432` bytes
+ - Dataset size: `5,933,166,725.824` bytes
+
+ ## Usage
+
+ To load this dataset in your Python environment using Hugging Face's `datasets` library, use the following code:
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("path/to/dataset/turkishneuralvoice")
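Once loaded, each row exposes the two features the card above describes: `audio` and `transcription`. A minimal sketch of handling one decoded example — using plain Python stand-in values rather than a real download, and assuming the standard `datasets` audio decoding into a dict with `array`, `sampling_rate`, and `path`:

```python
# Sketch of one decoded example; all values here are placeholders,
# not actual rows from the dataset.
example = {
    "audio": {
        "array": [0.0, 0.1, -0.1, 0.05],  # placeholder waveform samples
        "sampling_rate": 16000,
        "path": "sample.wav",             # hypothetical file name
    },
    "transcription": "Merhaba dünya",     # placeholder Turkish text
}

def duration_seconds(ex):
    """Clip length in seconds: sample count divided by sampling rate."""
    audio = ex["audio"]
    return len(audio["array"]) / audio["sampling_rate"]

clip_len = duration_seconds(example)
text = example["transcription"]
```

The same two-field access pattern applies to real rows, e.g. `dataset["train"][0]["transcription"]` after the `load_dataset` call above.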