Dataset: newsph_nli

Tasks: Text Classification
Sub-tasks: natural-language-inference
Languages: Tagalog
Size: 100K<n<1M

Commit f25be5d: Update files from the datasets library (from 1.2.0)
Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0
- .gitattributes +27 -0
- README.md +148 -0
- dataset_infos.json +1 -0
- dummy/1.0.0/dummy_data.zip +3 -0
- newsph_nli.py +107 -0
.gitattributes
ADDED
@@ -0,0 +1,27 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,148 @@
---
annotations_creators:
- machine-generated
language_creators:
- found
languages:
- tl
licenses:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- natural-language-inference
---

# Dataset Card for NewsPH NLI

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [NewsPH NLI homepage](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks)
- **Repository:** [NewsPH NLI repository](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks)
- **Paper:** [arXiv paper](https://arxiv.org/pdf/2010.11574.pdf)
- **Leaderboard:**
- **Point of Contact:** [Jan Christian Blaise Cruz](mailto:[email protected])

### Dataset Summary

NewsPH NLI is the first benchmark dataset for sentence entailment in the low-resource Filipino language. It was constructed by exploiting the structure of news articles and contains 600,000 premise-hypothesis pairs in a 70-15-15 split for training, validation, and testing.
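The dataset ships as a `datasets` loading script, so it can be loaded directly by name. A minimal sketch, assuming the `datasets` library is installed and the dataset is published on the Hub as `newsph_nli` (the builder name recorded in `dataset_infos.json`):

```python
from datasets import load_dataset

# Download and cache the train, validation, and test splits.
dataset = load_dataset("newsph_nli")

# Each example is a premise-hypothesis pair with a binary class label.
print(dataset)
print(dataset["train"][0])
```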

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The dataset contains news articles in Filipino (Tagalog) scraped from all major Philippine news sites online.

## Dataset Structure

### Data Instances

Sample data:

    {
      "premise": "Alam ba ninyo ang ginawa ni Erap na noon ay lasing na lasing na rin?",
      "hypothesis": "Ininom niya ang alak na pinagpulbusan!",
      "label": "0"
    }

### Data Fields

Each instance has three fields (as recorded in `dataset_infos.json`):

- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a binary classification label, with class names `"0"` and `"1"`.

### Data Splits

The dataset contains 600,000 premise-hypothesis pairs in a 70-15-15 split: 420,000 pairs for training, 90,000 for validation, and 90,000 for testing.

## Dataset Creation

### Curation Rationale

We propose the use of news articles for automatically creating benchmark datasets for NLI for two reasons. First, news articles commonly use single-sentence paragraphing, meaning every paragraph in a news article is limited to a single sentence. Second, straight news articles follow the "inverted pyramid" structure, where every succeeding paragraph builds upon the premise of those that came before it, with the most important information on top and the least important towards the end.
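This structure lets entailment pairs be generated automatically: consecutive sentences in the same article act as likely (premise, hypothesis) entailments, while sentences drawn from other articles give non-entailing pairs. Below is a minimal sketch of the idea; the pairing, sampling, and label conventions are illustrative assumptions, not the authors' exact procedure:

```python
import random

def make_nli_pairs(articles, seed=0):
    """Build (premise, hypothesis, label) triples from articles whose
    paragraphs are single sentences, in order. Which class name ("0" or "1")
    denotes entailment is an assumption here."""
    rng = random.Random(seed)
    pairs = []
    for article in articles:
        for premise, following in zip(article, article[1:]):
            # Consecutive paragraphs: treat the next sentence as entailed.
            pairs.append((premise, following, "0"))
            # Negative example: a random sentence from a different article.
            other = rng.choice(articles)
            if other is not article:
                pairs.append((premise, rng.choice(other), "1"))
    return pairs
```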

### Source Data

#### Initial Data Collection and Normalization

To create the dataset, we scrape news articles from all major Philippine news sites online. We collect a total of 229,571 straight news articles, which we then lightly preprocess to remove extraneous unicode characters and correct minimal misspellings. No further preprocessing is done to preserve information in the data.
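The exact cleanup steps are not published; the following is a minimal sketch of the kind of light normalization described, where NFKC folding and whitespace collapsing are assumptions:

```python
import re
import unicodedata

def light_clean(text):
    # Fold compatibility characters (e.g. curly quotes, full-width forms).
    text = unicodedata.normalize("NFKC", text)
    # Collapse whitespace runs left over from scraping.
    return re.sub(r"\s+", " ", text).strip()
```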

#### Who are the source language producers?

The dataset was created by Jan Christian Blaise Cruz, Jose Kristian Resabal, James Lin, Dan John Velasco, and Charibeth Cheng from De La Salle University and the University of the Philippines.

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

Jan Christian Blaise Cruz, Jose Kristian Resabal, James Lin, Dan John Velasco, and Charibeth Cheng

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[Jan Christian Blaise Cruz](mailto:[email protected])

### Licensing Information

[More Information Needed]

### Citation Information

    @article{cruz2020investigating,
      title={Investigating the True Performance of Transformers in Low-Resource Languages: A Case Study in Automatic Corpus Creation},
      author={Jan Christian Blaise Cruz and Jose Kristian Resabal and James Lin and Dan John Velasco and Charibeth Cheng},
      journal={arXiv preprint arXiv:2010.11574},
      year={2020}
    }
dataset_infos.json
ADDED
@@ -0,0 +1 @@
{"default": {"description": " First benchmark dataset for sentence entailment in the low-resource Filipino language. Constructed through exploting the structure of news articles. Contains 600,000 premise-hypothesis pairs, in 70-15-15 split for training, validation, and testing.\n", "citation": " @article{cruz2020investigating,\n title={Investigating the True Performance of Transformers in Low-Resource Languages: A Case Study in Automatic Corpus Creation},\n author={Jan Christian Blaise Cruz and Jose Kristian Resabal and James Lin and Dan John Velasco and Charibeth Cheng},\n journal={arXiv preprint arXiv:2010.11574},\n year={2020}\n }\n", "homepage": "https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 2, "names": ["0", "1"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "newsph_nli", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 154510599, "num_examples": 420000, "dataset_name": "newsph_nli"}, "test": {"name": "test", "num_bytes": 154510599, "num_examples": 420000, "dataset_name": "newsph_nli"}, "validation": {"name": "validation", "num_bytes": 33015530, "num_examples": 90000, "dataset_name": "newsph_nli"}}, "download_checksums": {"https://s3.us-east-2.amazonaws.com/blaisecruz.com/datasets/newsph/newsph-nli.zip": {"num_bytes": 76565287, "checksum": "544823dffe5b253718746ecc66d34116d918deb9886a58077447aeafe9538374"}}, "download_size": 76565287, "post_processing_size": null, "dataset_size": 342036728, "size_in_bytes": 418602015}}
dummy/1.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:22d200b608fe1135f543cf08097ffeed71538b2c125f478bb3c4d4fbdf25afa6
size 3286
newsph_nli.py
ADDED
@@ -0,0 +1,107 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""NewsPH-NLI Sentence Entailment Dataset in Filipino"""

import csv
import os

import datasets


_DESCRIPTION = """\
First benchmark dataset for sentence entailment in the low-resource Filipino language. Constructed by exploiting the structure of news articles. Contains 600,000 premise-hypothesis pairs, in a 70-15-15 split for training, validation, and testing.
"""

_CITATION = """\
@article{cruz2020investigating,
  title={Investigating the True Performance of Transformers in Low-Resource Languages: A Case Study in Automatic Corpus Creation},
  author={Jan Christian Blaise Cruz and Jose Kristian Resabal and James Lin and Dan John Velasco and Charibeth Cheng},
  journal={arXiv preprint arXiv:2010.11574},
  year={2020}
}
"""

_HOMEPAGE = "https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks"

# TODO: Add the licence for the dataset here if you can find it
_LICENSE = ""

_URL = "https://s3.us-east-2.amazonaws.com/blaisecruz.com/datasets/newsph/newsph-nli.zip"


class NewsphNli(datasets.GeneratorBasedBuilder):
    """NewsPH-NLI Sentence Entailment Dataset in Filipino"""

    VERSION = datasets.Version("1.0.0")

    def _info(self):
        features = datasets.Features(
            {
                "premise": datasets.Value("string"),
                "hypothesis": datasets.Value("string"),
                "label": datasets.features.ClassLabel(names=["0", "1"]),
            }
        )
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            supervised_keys=None,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        data_dir = dl_manager.download_and_extract(_URL)
        download_path = os.path.join(data_dir, "newsph-nli")
        train_path = os.path.join(download_path, "train.csv")
        test_path = os.path.join(download_path, "test.csv")
        validation_path = os.path.join(download_path, "valid.csv")

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "filepath": train_path,
                    "split": "train",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "filepath": test_path,
                    "split": "test",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "filepath": validation_path,
                    "split": "dev",
                },
            ),
        ]

    def _generate_examples(self, filepath, split):
        """Yields examples."""
        with open(filepath, encoding="utf-8") as csv_file:
            csv_reader = csv.reader(
                csv_file, quotechar='"', delimiter=",", quoting=csv.QUOTE_ALL, skipinitialspace=True
            )
            # Skip the CSV header row.
            next(csv_reader)
            for id_, row in enumerate(csv_reader):
                premise, hypothesis, label = row
                yield id_, {"premise": premise, "hypothesis": hypothesis, "label": label}
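For local development, the script can also be exercised by pointing `load_dataset` at the file itself rather than the Hub name; the path below is a placeholder:

```python
from datasets import load_dataset

# Build the dataset from the local loading script.
dataset = load_dataset("./newsph_nli.py")
print(dataset["validation"][0])
```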