Datasets:
Tasks: Question Answering
Modalities: Text
Formats: parquet
Languages: English
Size: 10K - 100K
ArXiv:
License:
Update README.md
README.md CHANGED
@@ -73,15 +73,7 @@ The T-Rex-MC dataset is designed to assess models’ factual knowledge. It is de
 
 ### Relation Map
 
-The relation mapping in T-Rex-MC links each relation number to a specific Wikipedia property identifier and its descriptive name. This mapping is stored in the JSON file located at /dataset/trex_MC/
-
-1. Wikipedia Property ID (e.g., P17, P54, etc.): This identifier corresponds to a specific property or attribute on Wikipedia, typically aligned with Wikidata’s property codes. Each property ID points to a standardized type of information.
-2. Descriptive Name (e.g., “country,” “member of sports team”): This name provides a human-readable description of the Wikipedia property, explaining the type of relationship it represents.
-
-Examples from the Mapping File:
-
-- "1": ["P17", "country"]: Relation ID 1 corresponds to the Wikipedia property P17, which represents the “country” associated with the subject.
-- "2": ["P54", "member of sports team"]: Relation ID 2 links to P54, indicating the “member of sports team” relationship.
+The relation mapping in T-Rex-MC links each relation number to a specific Wikipedia property identifier and its descriptive name. This mapping is stored in the JSON file located at /dataset/trex_MC/relationID_names.json. Each entry maps a relation ID to a Wikipedia property ID and a human-readable relation name (e.g., “country,” “member of sports team”) that describes the type of relationship the property represents.
 
 This mapping is essential for interpreting the types of relationships tested in T-Rex-MC and facilitates understanding of each fact’s specific factual relationship.
 
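For illustration, the mapping format described above can be parsed with a few lines of Python. This is a minimal sketch: the two entries are the examples quoted in this README, and in practice you would read the relationID_names.json file from disk rather than an inline string.

```python
import json

# Two example entries in the format described in this README:
# relation ID -> [Wikipedia/Wikidata property ID, human-readable name]
raw = '{"1": ["P17", "country"], "2": ["P54", "member of sports team"]}'

relation_map = json.loads(raw)

def describe(relation_id: str) -> str:
    """Return a 'P17 (country)'-style description for a relation ID."""
    prop_id, name = relation_map[relation_id]
    return f"{prop_id} ({name})"

print(describe("1"))  # P17 (country)
print(describe("2"))  # P54 (member of sports team)
```

To load the real file, replace the inline string with `json.load(open("/dataset/trex_MC/relationID_names.json"))`.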
@@ -89,12 +81,14 @@ This mapping is essential for interpreting the types of relationships tested in
 
 Each instance in the T-Rex-MC dataset represents a single factual statement and is structured as follows:
 
-1.
-2.
-
+1. Relation ID
+2. Relation Name
+3. Subject: The main entity in the fact, which the factual relation is about. For example, if the fact is about a specific person’s birth date, the subject would be that person’s name.
+4. Object: The correct answer for the factual relation. This is the specific value tied to the subject in the context of the relation, such as the exact birth date or the correct country of origin.
+5. Multiple Choices: A list of 100 potential answers. This includes the correct answer and:
 - 99 distractors that serve as plausible but incorrect choices, designed to challenge the model’s knowledge.
-
-
+6. Title: The title of the relevant Wikipedia page for the subject, which provides a direct reference to the fact’s source.
+7. Text: The Wikipedia abstract or introductory paragraph that corresponds to the subject, giving additional context and helping to clarify the nature of the relation.
 
 Each data instance thus provides a structured test of a model’s ability to select the correct fact from a curated list of plausible alternatives, covering various factual relations between entities in Wikipedia.
 
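As a sketch of that instance structure, the fields can be modeled as a plain dictionary. The field names and the example fact below are assumptions for illustration (the actual parquet column names may differ); the assertions encode the 100-option constraint stated in this README.

```python
# A sketch of one T-Rex-MC instance. Field names follow the description
# in this README; the fact itself is a made-up example.
instance = {
    "relation_id": 1,
    "relation_name": "country",
    "subject": "Eiffel Tower",   # hypothetical subject entity
    "object": "France",          # the correct answer
    "choices": ["France"] + [f"distractor_{i}" for i in range(99)],
    "title": "Eiffel Tower",
    "text": "The Eiffel Tower is a wrought-iron lattice tower in Paris...",
}

# Sanity checks implied by the description: 100 options in total,
# exactly one of which is the correct object.
assert len(instance["choices"]) == 100
assert instance["choices"].count(instance["object"]) == 1
```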
@@ -106,13 +100,6 @@ The T-REx-MC dataset is built from T-REx, a large-scale alignment dataset linkin
 
 Our curated T-REx-MC dataset includes 50 relations, each represented as a tuple of <subject, relation, multiple choices>. Each multiple-choice list includes the correct answer and 99 distractors. A detailed list of these 50 relations is available in Table 4.
 
-Attributes in T-REx-MC for Each Relation:
-
-- Subject: The entity at the center of each fact.
-- Object: The correct answer for each fact.
-- Multiple Choices: A list of 100 options, including the correct answer and 99 alternatives.
-- Title: The Wikipedia title associated with each fact.
-
 ### Personal and Sensitive Information
 
 The T-Rex-MC dataset does not contain any sensitive personal information. The facts within the dataset are derived from Wikipedia, which is publicly available, and they pertain to general knowledge rather than private or sensitive data. The content of the dataset includes:
 
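The tuple format above implies a simple evaluation: the model must select the correct object from the multiple-choice list. A minimal accuracy computation, with a hypothetical stand-in predict function in place of a real model, might look like:

```python
from typing import Callable, List

def accuracy(instances, predict: Callable[[str, str, List[str]], str]) -> float:
    """Fraction of instances where the model picks the correct object.

    `predict` maps (subject, relation_name, choices) to one of the
    choices; here it is a stand-in for a real model.
    """
    correct = sum(
        predict(ex["subject"], ex["relation_name"], ex["choices"]) == ex["object"]
        for ex in instances
    )
    return correct / len(instances)

def first_choice(subject: str, relation: str, choices: List[str]) -> str:
    # Trivial stand-in "model": always guess the first option.
    return choices[0]

# Two tiny made-up examples (real instances have 100 choices each).
examples = [
    {"subject": "Eiffel Tower", "relation_name": "country",
     "object": "France", "choices": ["France", "Italy"]},
    {"subject": "Lionel Messi", "relation_name": "member of sports team",
     "object": "Inter Miami", "choices": ["Real Madrid", "Inter Miami"]},
]
print(accuracy(examples, first_choice))  # 0.5
```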