Datasets

Modalities: Text
Formats: parquet
Libraries: Datasets, pandas
License: unknown

parquet-converter committed
Commit 1eafdd3 · 1 parent: 70d285a

Update parquet files
.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
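Each of the patterns above routes matching files through Git LFS. As a rough illustration (not git's exact matching rules, which treat `/` and `**` specially), a Python sketch of testing filenames against a subset of these globs with `fnmatch`:

```python
from fnmatch import fnmatch

# Subset of the LFS patterns from the deleted .gitattributes (illustrative only)
LFS_PATTERNS = ["*.7z", "*.arrow", "*.bin", "*.gz", "*.h5", "*.parquet", "*.zip", "*tfevents*"]

def is_lfs_tracked(filename: str) -> bool:
    """True if the filename matches any of the LFS glob patterns."""
    return any(fnmatch(filename, pattern) for pattern in LFS_PATTERNS)

print(is_lfs_tracked("en-xh/opus_xhosanavy-train.parquet"))  # True
print(is_lfs_tracked("README.md"))                           # False
```

This is why the converted parquet file below is stored as an LFS pointer rather than as raw bytes.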
README.md DELETED
@@ -1,161 +0,0 @@
- ---
- annotations_creators:
- - found
- language_creators:
- - found
- language:
- - en
- - xh
- license:
- - unknown
- multilinguality:
- - translation
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- task_categories:
- - translation
- task_ids: []
- paperswithcode_id: null
- pretty_name: OpusXhosanavy
- dataset_info:
-   features:
-   - name: translation
-     dtype:
-       translation:
-         languages:
-         - en
-         - xh
-   config_name: en-xh
-   splits:
-   - name: train
-     num_bytes: 9654422
-     num_examples: 49982
-   download_size: 3263865
-   dataset_size: 9654422
- ---
-
- # Dataset Card for OpusXhosanavy
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [XhosaNavy](http://opus.nlpl.eu/XhosaNavy-v1.php)
- - **Repository:**
- - **Paper:**
- - **Leaderboard:**
- - **Point of Contact:**
-
- ### Dataset Summary
-
- This corpus is part of OPUS, the open collection of parallel corpora.
- OPUS website: http://opus.nlpl.eu
-
- ### Supported Tasks and Leaderboards
-
- The underlying task is machine translation from English to Xhosa.
-
- ### Languages
-
- [More Information Needed]
-
- ## Dataset Structure
-
- ### Data Instances
-
- [More Information Needed]
-
- ### Data Fields
-
- [More Information Needed]
-
- ### Data Splits
-
- [More Information Needed]
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- [More Information Needed]
-
- ### Citation Information
-
- J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
-
- ### Contributions
-
- Thanks to [@lhoestq](https://github.com/lhoestq), [@spatil6](https://github.com/spatil6) for adding this dataset.
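The `dataset_info` block in the card's YAML header pins down the corpus size; a quick back-of-the-envelope check on those numbers (values copied from the header above):

```python
num_examples = 49_982       # train split, from dataset_info
dataset_bytes = 9_654_422   # uncompressed dataset_size
download_bytes = 3_263_865  # download_size of the source zip

avg_bytes_per_pair = dataset_bytes / num_examples
compression_ratio = download_bytes / dataset_bytes

print(f"{avg_bytes_per_pair:.1f} bytes per sentence pair")     # ~193.2
print(f"download is {compression_ratio:.1%} of the raw size")  # ~33.8%
```

So each English–Xhosa pair averages roughly 193 bytes of text, and the zipped source archive is about a third of the expanded size.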
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"en-xh": {"description": "This dataset is designed for machine translation from English to Xhosa.", "citation": "J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)", "homepage": "http://opus.nlpl.eu/XhosaNavy-v1.php", "license": "", "features": {"translation": {"languages": ["en", "xh"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": null, "builder_name": "opus_xhosanavy", "config_name": "en-xh", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 9654422, "num_examples": 49982, "dataset_name": "opus_xhosanavy"}}, "download_checksums": {"https://object.pouta.csc.fi/OPUS-XhosaNavy/v1/moses/en-xh.txt.zip": {"num_bytes": 3263865, "checksum": "30b079f60f9d0d51c1b6c09fe4d11fe7deeb24327f36c8e35247f383e3c6e19c"}}, "download_size": 3263865, "post_processing_size": null, "dataset_size": 9654422, "size_in_bytes": 12918287}}
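`download_checksums` stores a sha256 digest for the source zip so the loader can detect corrupted downloads; a minimal sketch of that kind of verification with `hashlib` (the payload below is a toy stand-in, not the real archive):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex digest in the format stored under download_checksums."""
    return hashlib.sha256(data).hexdigest()

def verify_download(data: bytes, expected_checksum: str) -> bool:
    """Compare a downloaded payload against its recorded checksum."""
    return sha256_hex(data) == expected_checksum

# Toy payload standing in for the downloaded en-xh.txt.zip bytes
payload = b"example archive bytes"
expected = sha256_hex(payload)
print(verify_download(payload, expected))      # True
print(verify_download(b"tampered", expected))  # False
```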
 
 
en-xh/opus_xhosanavy-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:20271ba3c9078c0a26cefee6d08a666cdef23b7c61ac127c9f94cb021290faf8
+ size 5552449
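What was committed here is not the parquet data itself but a Git LFS pointer: three `key value` lines naming the spec version, the sha256 object id, and the payload size in bytes. A sketch of parsing that format:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of a Git LFS pointer into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:20271ba3c9078c0a26cefee6d08a666cdef23b7c61ac127c9f94cb021290faf8
size 5552449
"""

info = parse_lfs_pointer(pointer)
print(info["oid"])        # sha256:2027...
print(int(info["size"]))  # 5552449 — size of the actual parquet payload
```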
opus_xhosanavy.py DELETED
@@ -1,88 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """XhosaNavy English-Xhosa"""
-
-
- import os
-
- import datasets
-
-
- _CITATION = """\
- J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International \
- Conference on Language Resources and Evaluation (LREC 2012)"""
-
- _HOMEPAGE = "http://opus.nlpl.eu/XhosaNavy-v1.php"
-
- _LICENSE = ""
-
- _DESCRIPTION = """\
- This dataset is designed for machine translation from English to Xhosa."""
-
- _URLs = {"train": "https://object.pouta.csc.fi/OPUS-XhosaNavy/v1/moses/en-xh.txt.zip"}
-
-
- class OpusXhosanavy(datasets.GeneratorBasedBuilder):
-
-     VERSION = datasets.Version("1.0.0")
-
-     BUILDER_CONFIGS = [datasets.BuilderConfig(name="en-xh", version=VERSION, description="XhosaNavy English-Xhosa")]
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {"translation": datasets.features.Translation(languages=tuple(self.config.name.split("-")))}
-             ),
-             supervised_keys=None,
-             homepage="http://opus.nlpl.eu/XhosaNavy-v1.php",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         data_dir = dl_manager.download_and_extract(_URLs)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "source_file": os.path.join(data_dir["train"], "XhosaNavy.en-xh.en"),
-                     "target_file": os.path.join(data_dir["train"], "XhosaNavy.en-xh.xh"),
-                     "split": "train",
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, source_file, target_file, split):
-         """This function returns the examples in the raw (text) form."""
-         with open(source_file, encoding="utf-8") as f:
-             source_sentences = f.read().split("\n")
-         with open(target_file, encoding="utf-8") as f:
-             target_sentences = f.read().split("\n")
-
-         assert len(target_sentences) == len(source_sentences), "Sizes do not match: %d vs %d for %s vs %s." % (
-             len(source_sentences),
-             len(target_sentences),
-             source_file,
-             target_file,
-         )
-
-         source, target = tuple(self.config.name.split("-"))
-         for idx, (l1, l2) in enumerate(zip(source_sentences, target_sentences)):
-             result = {"translation": {source: l1, target: l2}}
-             yield idx, result
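The heart of the deleted script is `_generate_examples`, which pairs the i-th English line with the i-th Xhosa line. That pairing logic, isolated as a standalone sketch (the sample sentences are placeholders, not real corpus lines):

```python
def pair_translations(source_lines, target_lines, source="en", target="xh"):
    """Yield (idx, example) pairs the way _generate_examples does."""
    assert len(source_lines) == len(target_lines), "Sizes do not match"
    for idx, (l1, l2) in enumerate(zip(source_lines, target_lines)):
        yield idx, {"translation": {source: l1, target: l2}}

en_lines = ["Good morning.", "Thank you."]  # placeholder sentences
xh_lines = ["Molo.", "Enkosi."]
examples = list(pair_translations(en_lines, xh_lines))
print(examples[0])  # (0, {'translation': {'en': 'Good morning.', 'xh': 'Molo.'}})
```

Each yielded dict matches the `Translation` feature declared in `_info`, which is also the row shape preserved in the converted parquet file.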