Commit b88791f

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed:
- .gitattributes +27 -0
- README.md +191 -0
- dataset_infos.json +1 -0
- dummy/0.0.0/dummy_data.zip +3 -0
- hind_encorp.py +120 -0
.gitattributes
ADDED
@@ -0,0 +1,27 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bin.* filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zstandard filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,191 @@
+---
+annotations_creators:
+- expert-generated
+language_creators:
+- crowdsourced
+- machine-generated
+languages:
+- en
+- hi
+licenses:
+- cc-by-nc-sa-3-0
+multilinguality:
+- translation
+size_categories:
+- n<1K
+source_datasets:
+- original
+task_categories:
+- conditional-text-generation
+task_ids:
+- machine-translation
+---
+
+# Dataset Card for HindEnCorp
+
+## Table of Contents
+- [Dataset Description](#dataset-description)
+  - [Dataset Summary](#dataset-summary)
+  - [Supported Tasks](#supported-tasks-and-leaderboards)
+  - [Languages](#languages)
+- [Dataset Structure](#dataset-structure)
+  - [Data Instances](#data-instances)
+  - [Data Fields](#data-fields)
+  - [Data Splits](#data-splits)
+- [Dataset Creation](#dataset-creation)
+  - [Curation Rationale](#curation-rationale)
+  - [Source Data](#source-data)
+  - [Annotations](#annotations)
+  - [Personal and Sensitive Information](#personal-and-sensitive-information)
+- [Considerations for Using the Data](#considerations-for-using-the-data)
+  - [Social Impact of Dataset](#social-impact-of-dataset)
+  - [Discussion of Biases](#discussion-of-biases)
+  - [Other Known Limitations](#other-known-limitations)
+- [Additional Information](#additional-information)
+  - [Dataset Curators](#dataset-curators)
+  - [Licensing Information](#licensing-information)
+  - [Citation Information](#citation-information)
+
+## Dataset Description
+
+- **Homepage:** https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0023-625F-0
+- **Repository:** https://lindat.mff.cuni.cz/repository/xmlui/
+- **Paper:** http://www.lrec-conf.org/proceedings/lrec2014/pdf/835_Paper.pdf
+- **Leaderboard:**
+- **Point of Contact:**
+
+### Dataset Summary
+
+HindEnCorp parallel texts (sentence-aligned) come from the following sources:
+
+Tides, which contains 50K sentence pairs taken mainly from news articles. This dataset was originally collected for the DARPA-TIDES surprise-language contest in 2002, later refined at IIIT Hyderabad and provided for the NLP Tools Contest at ICON 2008 (Venkatapathy, 2008).
+
+Commentaries by Daniel Pipes contain 322 articles in English written by the journalist Daniel Pipes and translated into Hindi.
+
+EMILLE. This corpus (Baker et al., 2002) consists of three components: monolingual, parallel and annotated corpora. There are fourteen monolingual subcorpora, including both written and (for some languages) spoken data for fourteen South Asian languages. The EMILLE monolingual corpora contain in total 92,799,000 words (including 2,627,000 words of transcribed spoken data for Bengali, Gujarati, Hindi, Punjabi and Urdu). The parallel corpus consists of 200,000 words of text in English and its accompanying translations into Hindi and other languages.
+
+Smaller datasets as collected by Bojar et al. (2010) include the corpus used at ACL 2005 (a subcorpus of EMILLE), a corpus of named entities from Wikipedia (crawled in 2009), and the Agriculture domain parallel corpus.
+
+For the current release, we are extending the parallel corpus using these sources:
+
+Intercorp (Čermák and Rosen, 2012) is a large multilingual parallel corpus of 32 languages including Hindi. The central language used for alignment is Czech. Intercorp’s core texts amount to 202 million words. These core texts are most suitable for us because their sentence alignment is manually checked and therefore very reliable. They cover predominantly short stories and novels. There are seven Hindi texts in Intercorp; unfortunately, the English translation is available for only three of them, and the other four are aligned only with Czech texts. The Hindi subcorpus of Intercorp contains 118,000 words in Hindi.
+
+TED talks, held in various languages but primarily English, are equipped with transcripts, and these are translated into 102 languages. There are 179 talks for which a Hindi translation is available.
+
+The Indic multi-parallel corpus (Birch et al., 2011; Post et al., 2012) is a corpus of texts from Wikipedia translated from the respective Indian language into English by non-expert translators hired over Mechanical Turk. The quality is thus somewhat mixed in many respects, ranging from typesetting, punctuation and capitalization through spelling to word choice and sentence structure. A little control could in principle be obtained from the fact that every input sentence was translated four times. We used the 2012 release of the corpus.
+
+Launchpad.net is a software collaboration platform that hosts many open-source projects and also facilitates collaborative localization of the tools. We downloaded all revisions of all the hosted projects and extracted the localization (.po) files.
+
+Other smaller datasets. This time, we added Wikipedia entities as crawled in 2013 (including any morphological variants of a named entity that appear on the Hindi variant of its Wikipedia page) and words, word examples and quotes from the Shabdkosh online dictionary.
+
+### Supported Tasks and Leaderboards
+
+[More Information Needed]
+
+### Languages
+
+Hindi, English
+
+## Dataset Structure
+
+### Data Instances
+
+[More Information Needed]
+
+### Data Fields
+
+HindEnCorp columns:
+
+- source identifier (where the segments come from)
+- alignment type (number of English segments - number of Hindi segments)
+- alignment quality, which is one of the following:
+  - "manual" ... for sources that were sentence-aligned manually
+  - "implied" ... for sources where one side was constructed by translating segment by segment
+  - a float ... a value somehow reflecting the goodness of the automatic alignment; not really reliable
+- English segment or segments
+- Hindi segment or segments
+
+Each of the segment fields is in the plaintext or export format as described above.
+
+If there is more than one segment on a line (e.g. for lines with alignment type 2-1, where there are two English segments), the segments are delimited with `<s>` in the text field.
+
+### Data Splits
+
+[More Information Needed]
+
+## Dataset Creation
+
+### Source Data
+
+[More Information Needed]
+
+#### Initial Data Collection and Normalization
+
+[More Information Needed]
+
+#### Who are the source language producers?
+
+Daniel Pipes; Baker et al., 2002; Bojar et al., 2010; Čermák and Rosen, 2012; Birch et al., 2011; Post et al., 2012
+
+### Annotations
+
+#### Annotation process
+
+The first part of the data, Tides, was originally collected for the DARPA-TIDES surprise-language contest in 2002, later refined at IIIT Hyderabad and provided for the NLP Tools Contest at ICON 2008 (Venkatapathy, 2008).
+
+#### Who are the annotators?
+
+[More Information Needed]
+
+### Personal and Sensitive Information
+
+[More Information Needed]
+
+## Considerations for Using the Data
+
+### Social Impact of Dataset
+
+[More Information Needed]
+
+### Discussion of Biases
+
+[More Information Needed]
+
+### Other Known Limitations
+
+[More Information Needed]
+
+## Additional Information
+
+### Dataset Curators
+
+Bojar, Ondřej; Diatka, Vojtěch; Straňák, Pavel; Tamchyna, Aleš; Zeman, Daniel
+
+### Licensing Information
+
+CC BY-NC-SA 3.0
+
+### Citation Information
+
+@InProceedings{hindencorp05:lrec:2014,
+  author = {Ond{\v{r}}ej Bojar and Vojt{\v{e}}ch Diatka
+    and Pavel Rychl{\'{y}} and Pavel Stra{\v{n}}{\'{a}}k
+    and V{\'{\i}}t Suchomel and Ale{\v{s}} Tamchyna and Daniel Zeman},
+  title = "{HindEnCorp - Hindi-English and Hindi-only Corpus for Machine
+    Translation}",
+  booktitle = {Proceedings of the Ninth International Conference on Language
+    Resources and Evaluation (LREC'14)},
+  year = {2014},
+  month = {may},
+  date = {26-31},
+  address = {Reykjavik, Iceland},
+  editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and
+    Thierry Declerck and Hrafn Loftsson and Bente Maegaard and Joseph Mariani
+    and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
+  publisher = {European Language Resources Association (ELRA)},
+  isbn = {978-2-9517408-8-4},
+  language = {english}
+}
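As a quick illustration of the Data Fields layout described in the card above, here is a minimal parsing sketch. The sample line is invented for illustration; real lines come from the downloaded hindencorp05.plaintext file:

```python
# Sketch: split one tab-separated corpus line into the five columns listed in
# the Data Fields section. The sample values are hypothetical.
sample_line = "tides\t2-1\tmanual\tHello. <s> How are you?\tनमस्ते, आप कैसे हैं?"

source, alignment_type, alignment_quality, english, hindi = sample_line.split("\t")

# For alignment types such as 2-1, one side packs several segments into a
# single field, delimited by "<s>".
english_segments = [seg.strip() for seg in english.split("<s>")]

print(source, alignment_type, alignment_quality)  # tides 2-1 manual
print(english_segments)  # ['Hello.', 'How are you?']
```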
dataset_infos.json
ADDED
@@ -0,0 +1 @@
+{"default": {"description": "HindEnCorp parallel texts (sentence-aligned) come from the following sources:\nTides, which contains 50K sentence pairs taken mainly from news articles. This dataset was originally collected for the DARPA-TIDES surprise-language contest in 2002, later refined at IIIT Hyderabad and provided for the NLP Tools Contest at ICON 2008 (Venkatapathy, 2008).\n\nCommentaries by Daniel Pipes contain 322 articles in English written by the journalist Daniel Pipes and translated into Hindi.\n\nEMILLE. This corpus (Baker et al., 2002) consists of three components: monolingual, parallel and annotated corpora. There are fourteen monolingual subcorpora, including both written and (for some languages) spoken data for fourteen South Asian languages. The EMILLE monolingual corpora contain in total 92,799,000 words (including 2,627,000 words of transcribed spoken data for Bengali, Gujarati, Hindi, Punjabi and Urdu). The parallel corpus consists of 200,000 words of text in English and its accompanying translations into Hindi and other languages.\n\nSmaller datasets as collected by Bojar et al. (2010) include the corpus used at ACL 2005 (a subcorpus of EMILLE), a corpus of named entities from Wikipedia (crawled in 2009), and the Agriculture domain parallel corpus.\n\nFor the current release, we are extending the parallel corpus using these sources:\nIntercorp (\u010cerm\u00e1k and Rosen, 2012) is a large multilingual parallel corpus of 32 languages including Hindi. The central language used for alignment is Czech. Intercorp\u2019s core texts amount to 202 million words. These core texts are most suitable for us because their sentence alignment is manually checked and therefore very reliable. They cover predominantly short stories and novels. There are seven Hindi texts in Intercorp; unfortunately, the English translation is available for only three of them, and the other four are aligned only with Czech texts. The Hindi subcorpus of Intercorp contains 118,000 words in Hindi.\n\nTED talks, held in various languages but primarily English, are equipped with transcripts, and these are translated into 102 languages. There are 179 talks for which a Hindi translation is available.\n\nThe Indic multi-parallel corpus (Birch et al., 2011; Post et al., 2012) is a corpus of texts from Wikipedia translated from the respective Indian language into English by non-expert translators hired over Mechanical Turk. The quality is thus somewhat mixed in many respects, ranging from typesetting, punctuation and capitalization through spelling to word choice and sentence structure. A little control could in principle be obtained from the fact that every input sentence was translated four times. We used the 2012 release of the corpus.\n\nLaunchpad.net is a software collaboration platform that hosts many open-source projects and also facilitates collaborative localization of the tools. We downloaded all revisions of all the hosted projects and extracted the localization (.po) files.\n\nOther smaller datasets. This time, we added Wikipedia entities as crawled in 2013 (including any morphological variants of a named entity that appear on the Hindi variant of its Wikipedia page) and words, word examples and quotes from the Shabdkosh online dictionary.\n", "citation": "@InProceedings{hindencorp05:lrec:2014,\n author = {Ond{\\v{r}}ej Bojar and Vojt{\\v{e}}ch Diatka\n and Pavel Rychl{\\'{y}} and Pavel Stra{\\v{n}}{\\'{a}}k\n and V{\\'{\\i}}t Suchomel and Ale{\\v{s}} Tamchyna and Daniel Zeman},\n title = \"{HindEnCorp - Hindi-English and Hindi-only Corpus for Machine\n Translation}\",\n booktitle = {Proceedings of the Ninth International Conference on Language\n Resources and Evaluation (LREC'14)},\n year = {2014},\n month = {may},\n date = {26-31},\n address = {Reykjavik, Iceland},\n editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and\n Thierry Declerck and Hrafn Loftsson and Bente Maegaard and Joseph Mariani\n and Asuncion Moreno and Jan Odijk and Stelios Piperidis},\n publisher = {European Language Resources Association (ELRA)},\n isbn = {978-2-9517408-8-4},\n language = {english}\n}\n", "homepage": "https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0023-625F-0", "license": "CC BY-NC-SA 3.0", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "source": {"dtype": "string", "id": null, "_type": "Value"}, "alignment_type": {"dtype": "string", "id": null, "_type": "Value"}, "alignment_quality": {"dtype": "string", "id": null, "_type": "Value"}, "translation": {"languages": ["en", "hi"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": null, "builder_name": "hind_encorp", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 78945714, "num_examples": 273885, "dataset_name": "hind_encorp"}}, "download_checksums": {"https://lindat.mff.cuni.cz/repository/xmlui/bitstream/handle/11858/00-097C-0000-0023-625F-0/hindencorp05.plaintext.gz?sequence=3&isAllowed=y": {"num_bytes": 23899723, "checksum": "a86260f5b09f3d3a9ab4ad102676e8602c32b2ad95619424aee575378ceb8792"}}, "download_size": 23899723, "post_processing_size": null, "dataset_size": 78945714, "size_in_bytes": 102845437}}
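The metadata above is machine-generated and hard to read inline; a small inspection sketch, assuming dataset_infos.json sits in the current working directory (the expected outputs are the values recorded above):

```python
import json

# Read the generated metadata and report the recorded schema and split size.
with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

default = infos["default"]
print(sorted(default["features"]))
# ['alignment_quality', 'alignment_type', 'id', 'source', 'translation']
print(default["splits"]["train"]["num_examples"])  # 273885
```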
dummy/0.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b7c60bb3d38c03b45382aeeab7eab9929d72ad6e9331d24ab37217466fe56714
+size 1270
hind_encorp.py
ADDED
@@ -0,0 +1,120 @@
+# coding=utf-8
+# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import, division, print_function
+
+import datasets
+
+
+_CITATION = """\
+@InProceedings{hindencorp05:lrec:2014,
+  author = {Ond{\v{r}}ej Bojar and Vojt{\v{e}}ch Diatka
+    and Pavel Rychl{\'{y}} and Pavel Stra{\v{n}}{\'{a}}k
+    and V{\'{\i}}t Suchomel and Ale{\v{s}} Tamchyna and Daniel Zeman},
+  title = "{HindEnCorp - Hindi-English and Hindi-only Corpus for Machine
+    Translation}",
+  booktitle = {Proceedings of the Ninth International Conference on Language
+    Resources and Evaluation (LREC'14)},
+  year = {2014},
+  month = {may},
+  date = {26-31},
+  address = {Reykjavik, Iceland},
+  editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and
+    Thierry Declerck and Hrafn Loftsson and Bente Maegaard and Joseph Mariani
+    and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
+  publisher = {European Language Resources Association (ELRA)},
+  isbn = {978-2-9517408-8-4},
+  language = {english}
+}
+"""
+
+_DESCRIPTION = """\
+HindEnCorp parallel texts (sentence-aligned) come from the following sources:
+
+Tides, which contains 50K sentence pairs taken mainly from news articles. This dataset was originally collected for the DARPA-TIDES surprise-language contest in 2002, later refined at IIIT Hyderabad and provided for the NLP Tools Contest at ICON 2008 (Venkatapathy, 2008).
+
+Commentaries by Daniel Pipes contain 322 articles in English written by the journalist Daniel Pipes and translated into Hindi.
+
+EMILLE. This corpus (Baker et al., 2002) consists of three components: monolingual, parallel and annotated corpora. There are fourteen monolingual subcorpora, including both written and (for some languages) spoken data for fourteen South Asian languages. The EMILLE monolingual corpora contain in total 92,799,000 words (including 2,627,000 words of transcribed spoken data for Bengali, Gujarati, Hindi, Punjabi and Urdu). The parallel corpus consists of 200,000 words of text in English and its accompanying translations into Hindi and other languages.
+
+Smaller datasets as collected by Bojar et al. (2010) include the corpus used at ACL 2005 (a subcorpus of EMILLE), a corpus of named entities from Wikipedia (crawled in 2009), and the Agriculture domain parallel corpus.
+
+For the current release, we are extending the parallel corpus using these sources:
+
+Intercorp (Čermák and Rosen, 2012) is a large multilingual parallel corpus of 32 languages including Hindi. The central language used for alignment is Czech. Intercorp’s core texts amount to 202 million words. These core texts are most suitable for us because their sentence alignment is manually checked and therefore very reliable. They cover predominantly short stories and novels. There are seven Hindi texts in Intercorp; unfortunately, the English translation is available for only three of them, and the other four are aligned only with Czech texts. The Hindi subcorpus of Intercorp contains 118,000 words in Hindi.
+
+TED talks, held in various languages but primarily English, are equipped with transcripts, and these are translated into 102 languages. There are 179 talks for which a Hindi translation is available.
+
+The Indic multi-parallel corpus (Birch et al., 2011; Post et al., 2012) is a corpus of texts from Wikipedia translated from the respective Indian language into English by non-expert translators hired over Mechanical Turk. The quality is thus somewhat mixed in many respects, ranging from typesetting, punctuation and capitalization through spelling to word choice and sentence structure. A little control could in principle be obtained from the fact that every input sentence was translated four times. We used the 2012 release of the corpus.
+
+Launchpad.net is a software collaboration platform that hosts many open-source projects and also facilitates collaborative localization of the tools. We downloaded all revisions of all the hosted projects and extracted the localization (.po) files.
+
+Other smaller datasets. This time, we added Wikipedia entities as crawled in 2013 (including any morphological variants of a named entity that appear on the Hindi variant of its Wikipedia page) and words, word examples and quotes from the Shabdkosh online dictionary.
+"""
+
+_HOMEPAGE = "https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0023-625F-0"
+
+_LICENSE = "CC BY-NC-SA 3.0"
+
+# Single gzipped, tab-separated plaintext file containing the whole corpus.
+_URLs = "https://lindat.mff.cuni.cz/repository/xmlui/bitstream/handle/11858/00-097C-0000-0023-625F-0/hindencorp05.plaintext.gz?sequence=3&isAllowed=y"
+
+
+class HindEncorp(datasets.GeneratorBasedBuilder):
+    """HindEnCorp: a sentence-aligned Hindi-English parallel corpus."""
+
+    def _info(self):
+        features = datasets.Features(
+            {
+                "id": datasets.Value("string"),
+                "source": datasets.Value("string"),
+                "alignment_type": datasets.Value("string"),
+                "alignment_quality": datasets.Value("string"),
+                "translation": datasets.features.Translation(languages=["en", "hi"]),
+            }
+        )
+        return datasets.DatasetInfo(
+            description=_DESCRIPTION,
+            features=features,
+            supervised_keys=None,
+            homepage=_HOMEPAGE,
+            license=_LICENSE,
+            citation=_CITATION,
+        )
+
+    def _split_generators(self, dl_manager):
+        """Returns SplitGenerators; the whole corpus goes into a single train split."""
+        filepath = dl_manager.download_and_extract(_URLs)
+        return [
+            datasets.SplitGenerator(
+                name=datasets.Split.TRAIN,
+                gen_kwargs={"filepath": filepath},
+            ),
+        ]
+
+    def _generate_examples(self, filepath):
+        """Yields examples, one per tab-separated line of the corpus file."""
+        with open(filepath, encoding="utf-8") as f:
+            for id_, line in enumerate(f):
+                # Columns: source identifier, alignment type, alignment
+                # quality, English segment(s), Hindi segment(s).
+                splits = line.strip().split("\t")
+                yield id_, {
+                    "id": str(id_),
+                    "source": splits[0],
+                    "alignment_type": splits[1],
+                    "alignment_quality": splits[2],
+                    "translation": {"en": splits[3], "hi": splits[4]},
+                }
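Since this script ships with datasets 1.2.0, the corpus can be loaded by name. A minimal usage sketch; the row count comes from the dataset_infos.json above, and the first call downloads roughly 24 MB from LINDAT:

```python
from datasets import load_dataset

# Everything lands in the single train split defined by _split_generators.
ds = load_dataset("hind_encorp", split="train")

print(ds.num_rows)  # 273885, per dataset_infos.json
example = ds[0]
print(example["source"], example["alignment_type"], example["alignment_quality"])
print(example["translation"]["en"])
print(example["translation"]["hi"])
```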