---
language:
  - en
license:
  - other
multilinguality:
  - monolingual
pretty_name: t_rex
---

Dataset Card for "relbert/t_rex"

Dataset Description

Dataset Summary

This is the T-REX dataset proposed in https://aclanthology.org/L18-1544/. The test split is shared across all configurations; it was manually checked by the author of relbert/t_rex and contains only predicates that do not appear in the train/validation splits. The train/validation splits are created for each configuration with a 9:1 ratio. The number of triples in each split is summarized in the tables below.

  • train/validation split
| data | triples (train) | triples (valid.) | triples (all) | unique predicates (train) | unique predicates (valid.) | unique predicates (all) | unique entities (train) | unique entities (valid.) | unique entities (all) |
|---|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| filter_unified.min_entity_1_max_predicate_100 | 7,075 | 787 | 9,193 | 212 | 166 | 246 | 8,496 | 1,324 | 10,454 |
| filter_unified.min_entity_1_max_predicate_50 | 4,131 | 459 | 5,304 | 212 | 156 | 246 | 5,111 | 790 | 6,212 |
| filter_unified.min_entity_1_max_predicate_25 | 2,358 | 262 | 3,034 | 212 | 144 | 246 | 3,079 | 465 | 3,758 |
| filter_unified.min_entity_1_max_predicate_10 | 1,134 | 127 | 1,465 | 210 | 94 | 246 | 1,587 | 233 | 1,939 |
| filter_unified.min_entity_2_max_predicate_100 | 4,873 | 542 | 6,490 | 195 | 139 | 229 | 5,386 | 887 | 6,704 |
| filter_unified.min_entity_2_max_predicate_50 | 3,002 | 334 | 3,930 | 193 | 139 | 229 | 3,457 | 575 | 4,240 |
| filter_unified.min_entity_2_max_predicate_25 | 1,711 | 191 | 2,251 | 195 | 113 | 229 | 2,112 | 331 | 2,603 |
| filter_unified.min_entity_2_max_predicate_10 | 858 | 96 | 1,146 | 194 | 81 | 229 | 1,149 | 177 | 1,446 |
| filter_unified.min_entity_3_max_predicate_100 | 3,659 | 407 | 4,901 | 173 | 116 | 208 | 3,892 | 662 | 4,844 |
| filter_unified.min_entity_3_max_predicate_50 | 2,336 | 260 | 3,102 | 174 | 115 | 208 | 2,616 | 447 | 3,240 |
| filter_unified.min_entity_3_max_predicate_25 | 1,390 | 155 | 1,851 | 173 | 94 | 208 | 1,664 | 272 | 2,073 |
| filter_unified.min_entity_3_max_predicate_10 | 689 | 77 | 937 | 171 | 59 | 208 | 922 | 135 | 1,159 |
| filter_unified.min_entity_4_max_predicate_100 | 2,995 | 333 | 4,056 | 158 | 105 | 193 | 3,104 | 563 | 3,917 |
| filter_unified.min_entity_4_max_predicate_50 | 1,989 | 222 | 2,645 | 157 | 102 | 193 | 2,225 | 375 | 2,734 |
| filter_unified.min_entity_4_max_predicate_25 | 1,221 | 136 | 1,632 | 158 | 76 | 193 | 1,458 | 237 | 1,793 |
| filter_unified.min_entity_4_max_predicate_10 | 603 | 68 | 829 | 157 | 52 | 193 | 797 | 126 | 1,018 |
  • test split

| triples (test) | unique predicates (test) | unique entities (test) |
|---:|---:|---:|
| 122 | 34 | 188 |
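The 9:1 train/validation split described above can be sketched with a deterministic shuffle-and-cut; this is an illustrative sketch, not the dataset's actual split script (`create_split.py`), and the seed and function name are assumptions.

```python
import random

def train_validation_split(triples, ratio=0.9, seed=0):
    """Shuffle deterministically, then cut at the 9:1 boundary."""
    rng = random.Random(seed)
    shuffled = list(triples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * ratio)
    return shuffled[:cut], shuffled[cut:]

# toy triples standing in for the real dataset
triples = [{"subject": f"s{i}", "object": f"o{i}"} for i in range(100)]
train, valid = train_validation_split(triples)
print(len(train), len(valid))  # 90 10
```

Fixing the seed keeps the split reproducible across runs, which matters when the same configuration is regenerated.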

Filtering to Remove Noise

We apply filtering to keep triples whose subject and object are alphanumeric, and where at least one of the subject or object is a named entity. After this filtering, we manually remove predicates that are too vague or noisy, and unify identical predicates that appear under different names (see the annotation here).

| Dataset | raw | filter | filter_unified |
|---|---:|---:|---:|
| Triples | 941,663 | 583,333 | 432,795 |
| Predicates | 931 | 659 | 247 |
| Entities | 270,801 | 197,163 | 149,172 |
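The noise filter above can be sketched as a predicate over triples; this is an illustrative sketch, not the actual `process.py` logic, and the named-entity check here is a placeholder (a real implementation would use an NER tagger).

```python
import re

# accept only letters, digits and spaces in subject/object
ALNUM = re.compile(r"^[a-zA-Z0-9 ]+$")

def is_clean(triple, is_named_entity):
    """Keep triples with an alphanumeric subject and object where at
    least one of the two is a named entity."""
    s, o = triple["subject"], triple["object"]
    if not (ALNUM.match(s) and ALNUM.match(o)):
        return False
    return is_named_entity(s) or is_named_entity(o)

# toy named-entity check (capitalized string), standing in for real NER
ne = lambda x: x[:1].isupper()
print(is_clean({"subject": "Tajik", "object": "Persian"}, ne))          # True
print(is_clean({"subject": "Tandoor (bread)", "object": "naan"}, ne))   # False
```

The second example is rejected because the parentheses fail the alphanumeric check, regardless of the named-entity condition.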

Filtering to Purify the Dataset

We reduce the size of the dataset by filtering on the number of predicates and entities in the triples. We first remove triples whose subject or object occurs in the dataset fewer than min entity times. Then we cap the number of triples per predicate at max predicate: if a predicate has more than max predicate triples, we keep the top max predicate triples ranked by the frequency of their subject and object, or else sample at random.

  • distribution of entities
  • distribution of predicates
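The two-stage purification described above (min-entity cut, then a per-predicate cap ranked by entity frequency) can be sketched as follows; this is an assumption-laden illustration, not the dataset's actual `filtering_purify.py`.

```python
from collections import Counter
from itertools import groupby

def purify(triples, min_entity, max_predicate):
    # count how often each entity appears as subject or object
    ent_freq = Counter()
    for t in triples:
        ent_freq[t["subject"]] += 1
        ent_freq[t["object"]] += 1
    # stage 1: drop triples with an entity rarer than min_entity
    kept = [t for t in triples
            if ent_freq[t["subject"]] >= min_entity
            and ent_freq[t["object"]] >= min_entity]
    # stage 2: cap each predicate at max_predicate triples,
    # preferring triples with the most frequent entities
    out = []
    key = lambda t: t["predicate"]
    for _, group in groupby(sorted(kept, key=key), key=key):
        ranked = sorted(group,
                        key=lambda t: ent_freq[t["subject"]] + ent_freq[t["object"]],
                        reverse=True)
        out.extend(ranked[:max_predicate])
    return out
```

With `min_entity=2` a triple whose object appears only once in the whole dataset is removed before the per-predicate cap is applied.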

Dataset Structure

Data Instances

An example looks as follows.

{
    "object": "Persian",
    "subject": "Tajik",
    "title": "Tandoor bread",
    "text": "Tandoor bread (Arabic: \u062e\u0628\u0632 \u062a\u0646\u0648\u0631 khubz tannoor, Armenian: \u0569\u0578\u0576\u056b\u0580 \u0570\u0561\u0581 tonir hats, Azerbaijani: T\u0259ndir \u00e7\u00f6r\u0259yi, Georgian: \u10d7\u10dd\u10dc\u10d8\u10e1 \u10de\u10e3\u10e0\u10d8 tonis puri, Kazakh: \u0442\u0430\u043d\u0434\u044b\u0440 \u043d\u0430\u043d tandyr nan, Kyrgyz: \u0442\u0430\u043d\u0434\u044b\u0440 \u043d\u0430\u043d tandyr nan, Persian: \u0646\u0627\u0646 \u062a\u0646\u0648\u0631\u06cc nan-e-tanuri, Tajik: \u043d\u043e\u043d\u0438 \u0442\u0430\u043d\u0443\u0440\u0439 noni tanuri, Turkish: Tand\u0131r ekme\u011fi, Uyghur: ) is a type of leavened bread baked in a clay oven called a tandoor, similar to naan. In Pakistan, tandoor breads are popular especially in the Khyber Pakhtunkhwa and Punjab regions, where naan breads are baked in tandoor clay ovens fired by wood or charcoal. These tandoor-prepared naans are known as tandoori naan.",
    "predicate": "[Artifact] is a type of [Type]"
}
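The `predicate` field is a template with bracketed slots. A minimal sketch for turning an instance into a plain sentence, assuming the first slot takes the subject and the second the object (consistent with the example above, but not a documented guarantee of the dataset):

```python
import re

def verbalise(example):
    """Fill the bracketed slots of the predicate template with the
    triple's subject and object, in that order."""
    template = example["predicate"]
    slots = re.findall(r"\[[^\]]+\]", template)
    filled = template.replace(slots[0], example["subject"], 1)
    filled = filled.replace(slots[1], example["object"], 1)
    return filled

example = {"subject": "Tajik", "object": "Persian",
           "predicate": "[Artifact] is a type of [Type]"}
print(verbalise(example))  # Tajik is a type of Persian
```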

Reproduce the Dataset

git clone https://huggingface.co/datasets/relbert/t_rex
cd t_rex
mkdir data_raw
cd data_raw
wget https://figshare.com/ndownloader/files/8760241
unzip 8760241
cd ../
python process.py
python unify_predicate.py
python filtering_purify.py
python create_split.py

Citation Information

@inproceedings{elsahar2018t,
  title={T-rex: A large scale alignment of natural language with knowledge base triples},
  author={Elsahar, Hady and Vougiouklis, Pavlos and Remaci, Arslen and Gravier, Christophe and Hare, Jonathon and Laforest, Frederique and Simperl, Elena},
  booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
  year={2018}
}