danielschnell committed 6ce9ea3 (parent: 943c449)

Update README.md

Files changed: README.md (+2 −2)
## Dataset

This dataset surpasses its predecessor in size, incorporating text not only from the relatively small Icelandic Wikipedia but also from the extensive Icelandic Gigaword Corpus (IGC). Specifically, we have enriched the [Wikipedia text](https://repository.clarin.is/repository/xmlui/handle/20.500.12537/252) with material from the [News1 corpus](https://repository.clarin.is/repository/xmlui/handle/20.500.12537/237). To adhere to the 512 MB limit on raw text, we combined the complete Wikipedia text with randomly shuffled paragraphs from the News1 corpus until the cap was reached; the sketch below illustrates this sampling step.
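A minimal Python sketch of that step, under assumed inputs: the file names, the blank-line paragraph separator, and the seed are hypothetical, not the actual pipeline.

```python
import random

SIZE_CAP = 512 * 1024 * 1024  # 512 MB cap on raw text


def build_corpus(wiki_path: str, news1_path: str, out_path: str, seed: int = 42) -> None:
    # Read the complete Wikipedia text and the News1 paragraphs
    # (assumed here to be separated by blank lines).
    with open(wiki_path, encoding="utf-8") as f:
        wiki_text = f.read()
    with open(news1_path, encoding="utf-8") as f:
        news_paragraphs = [p for p in f.read().split("\n\n") if p.strip()]

    # Shuffle reproducibly, then append paragraphs until the size cap is hit.
    random.Random(seed).shuffle(news_paragraphs)

    parts = [wiki_text]
    size = len(wiki_text.encode("utf-8"))
    for para in news_paragraphs:
        para_bytes = len(para.encode("utf-8")) + 2  # +2 for the "\n\n" separator
        if size + para_bytes > SIZE_CAP:
            break
        parts.append(para)
        size += para_bytes

    with open(out_path, "w", encoding="utf-8") as f:
        f.write("\n\n".join(parts))
```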
In total, the dataset contains `2,212,618` rows, each corresponding to a paragraph in the IGC's XML format. This differs from the original dataset, where each row represented an entire Wikipedia article, and accounts for the significantly higher row count. Paragraphs belonging to the same original document can be merged, since the URL and title fields identify their source and order; see the sketch below.
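Such a merge is straightforward to sketch. The column names (`url`, `title`, `text`) and the dataset identifier in the commented usage lines are assumptions for illustration; consult the dataset card for the actual schema.

```python
from collections import defaultdict


def merge_paragraphs(rows):
    """Re-assemble documents from paragraph rows, preserving row order.

    Assumes each row is a mapping with "url", "title", and "text" fields;
    the real column names may differ.
    """
    docs = defaultdict(list)
    for row in rows:
        docs[(row["url"], row["title"])].append(row["text"])
    return [
        {"url": url, "title": title, "text": "\n\n".join(paragraphs)}
        for (url, title), paragraphs in docs.items()
    ]


# Hypothetical usage with the `datasets` library:
# from datasets import load_dataset
# ds = load_dataset("<this-dataset-id>", split="train")
# documents = merge_paragraphs(ds)
```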
### Phonemization

Phonemization was conducted using [IceG2P](https://github.com/grammatek/ice-g2p), which is also based on a BI-LSTM model. We adapted it so that its IPA phoneset output aligns with the phoneset used across the other PL-BERT datasets. We first created and refined a new vocabulary from both the Wikipedia and News1 corpora; the BI-LSTM model was then employed to generate phonetic transcriptions for the dictionary. We also enhanced stress labeling and incorporated secondary stresses after conducting compound analysis. A significant byproduct of this effort is a considerably improved G2P dictionary, which we plan to integrate into the G2P module and various other open-source projects involving Icelandic G2P.
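For readers who want to try a transcription themselves, here is a minimal usage sketch. It assumes the `Transcriber` interface shown in the ice-g2p repository; constructor flags and the exact output phoneset vary between versions, so treat the details as assumptions and check the repository.

```python
# Minimal ice-g2p usage sketch (pip install ice-g2p). The Transcriber API
# follows the project's README; check the repository for current options.
from ice_g2p.transcriber import Transcriber

g2p = Transcriber()
# Transcribe a lowercase Icelandic string into the tool's phone set.
print(g2p.transcribe("hljóðritun er skemmtileg"))
```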
 