---
license: apache-2.0
task_categories:
  - question-answering
language:
  - en
size_categories:
  - 10K<n<100K
dataset_info:
  features:
    - name: Relation_ID
      dtype: int64
    - name: Relation Name
      dtype: string
    - name: Subject
      dtype: string
    - name: Object
      dtype: string
    - name: Multiple Choices
      dtype: string
    - name: Title
      dtype: string
    - name: Text
      dtype: string
  splits:
    - name: train
      num_bytes: 13358248
      num_examples: 5000
    - name: test
      num_bytes: 53621776
      num_examples: 20000
  download_size: 30826462
  dataset_size: 66980024
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
---

# Dataset Card for T-Rex-MC

## Table of Contents

- Dataset Description
  - Dataset Summary
- Dataset Structure
  - Relation Map
  - Data Instances
- Dataset Creation
  - Personal and Sensitive Information
- Considerations for Using the Data
  - Licensing Information
- Citation Information

## Dataset Description

### Dataset Summary

The T-Rex-MC dataset is designed to assess models’ factual knowledge. It is derived from the T-REx dataset, keeping only relations with at least 500 samples connected to 100 or more unique objects. For each sample, a multiple-choice list is generated; for facts with multiple correct objects, none of the alternative correct answers appear among the distractors. This filtering process yields 50 distinct relations across various categories, such as birth dates, directorial roles, parental relationships, and educational backgrounds. The final T-Rex-MC dataset includes 5,000 training facts and 20,000 test facts.

## Dataset Structure

### Relation Map

The relation map in T-Rex-MC links each relation number to a specific Wikipedia property identifier and its descriptive name. This mapping is stored in the JSON file located at /dataset/trex_MC/relationID_names.json. Each entry maps a relation ID to a human-readable relation name (e.g., “country”, “member of sports team”) that describes the Wikipedia property, i.e., the type of relationship it represents.

This mapping is essential for interpreting the types of relationships tested in T-Rex-MC and facilitates understanding of each fact’s specific factual relationship.
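As a minimal sketch, the relation map can be loaded and used like this. The JSON excerpt below is invented for illustration (the real file maps all 50 relation IDs), and the exact key format is an assumption:

```python
import json

# Hypothetical excerpt of relationID_names.json -- the real file at
# /dataset/trex_MC/relationID_names.json maps all 50 relation IDs.
RELATION_MAP_JSON = """
{
  "1": "country",
  "2": "member of sports team"
}
"""

def load_relation_map(raw: str) -> dict:
    """Parse the relation-ID -> name mapping, normalising IDs to int."""
    return {int(rel_id): name for rel_id, name in json.loads(raw).items()}

relation_map = load_relation_map(RELATION_MAP_JSON)
print(relation_map[2])  # member of sports team
```

In practice you would read the file from disk (`json.load(open(path))`) instead of a string literal; the helper name `load_relation_map` is ours, not part of the dataset.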

### Data Instances

Each instance in the T-Rex-MC dataset represents a single factual statement and is structured as follows:

  1. Relation ID: The numeric identifier of the relation, as listed in the relation map above.
  2. Relation Name: The human-readable name of the relation (e.g., “country”).
  3. Subject: The main entity in the fact, which the factual relation is about. For example, if the fact is about a specific person’s birth date, the subject would be that person’s name.
  4. Object: The correct answer for the factual relation. This is the specific value tied to the subject in the context of the relation, such as the exact birth date or the correct country of origin.
  5. Multiple Choices: A list of 100 candidate answers, comprising the correct object and 99 distractors that serve as plausible but incorrect choices, designed to challenge the model’s knowledge.
  6. Title: The title of the relevant Wikipedia page for the subject, which provides a direct reference to the fact’s source.
  7. Text: The Wikipedia abstract or introductory paragraph for the subject, giving additional context and helping to clarify the nature of the relation.

Each data instance thus provides a structured test of a model’s ability to select the correct fact from a curated list of plausible alternatives, covering various factual relations between entities in Wikipedia.
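The fields above can be assembled into a multiple-choice prompt, sketched below. Note that the schema stores Multiple Choices with `dtype: string`, so in practice the field may need to be parsed into a list first; the sample here is invented and already parsed:

```python
# Sketch: turning one T-Rex-MC instance into a multiple-choice prompt.
# Field names follow the dataset card; the sample values are invented,
# and real choice lists contain 100 entries, not 3.
sample = {
    "Relation Name": "country",
    "Subject": "Eiffel Tower",
    "Object": "France",
    "Multiple Choices": ["Germany", "France", "Spain"],
}

def build_prompt(sample: dict) -> str:
    """Render a numbered multiple-choice question for one instance."""
    choices = "\n".join(
        f"{i + 1}. {choice}" for i, choice in enumerate(sample["Multiple Choices"])
    )
    return (
        f"What is the {sample['Relation Name']} of {sample['Subject']}?\n"
        f"{choices}\nAnswer:"
    )

print(build_prompt(sample))
```

The question template and helper name are our own choices; any prompt format that presents the subject, relation, and numbered choices would serve the same purpose.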

## Dataset Creation

### Creation of Multiple Choices for T-REx-MC

The T-REx-MC dataset is built from T-REx, a large-scale alignment dataset linking Wikipedia abstracts with factual triples. For our experiments, we used the processed T-REx version on HuggingFace. Relations with over 500 facts and at least 100 unique objects were selected to ensure a pool of feasible multiple-choice options per fact. We manually filtered out ambiguous cases where multiple correct answers exist (e.g., “America,” “USA”) and standardized partial matches (e.g., “French” vs. “French language”) to maintain consistency.

Our curated T-REx-MC dataset includes 50 relations, each represented as a tuple of <subject, relation, multiple choices>. Each multiple-choice list includes the correct answer and 99 distractors. A detailed list of these 50 relations is available in Table 4.
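The relation filter described above (more than 500 facts, at least 100 unique objects) can be sketched as follows. The triples and thresholds below are toy values so the example runs on a handful of facts; only the filtering logic mirrors the description:

```python
from collections import defaultdict

# Toy (subject, relation, object) triples; the real filter runs over
# the full T-REx alignment corpus with thresholds 500 and 100.
triples = [
    ("Paris", "country", "France"),
    ("Berlin", "country", "Germany"),
    ("Madrid", "country", "Spain"),
    ("Paris", "named after", "Paris (mythology)"),
]

def eligible_relations(triples, min_facts=3, min_unique_objects=2):
    """Keep relations with enough facts and enough distinct objects."""
    objects_by_relation = defaultdict(list)
    for subj, rel, obj in triples:
        objects_by_relation[rel].append(obj)
    return {
        rel
        for rel, objs in objects_by_relation.items()
        if len(objs) >= min_facts and len(set(objs)) >= min_unique_objects
    }

print(eligible_relations(triples))  # {'country'}
```

The unique-object requirement is what guarantees a large enough pool of plausible distractors for each fact in the selected relations.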

### Personal and Sensitive Information

The T-Rex-MC dataset does not contain any sensitive personal information. The facts within the dataset are derived from Wikipedia, which is publicly available, and they pertain to general knowledge rather than private or sensitive data. The content of the dataset includes:

  - Data about general factual relationships (e.g., countries, occupations, affiliations)
  - Data about places, cultural items, and general knowledge topics

## Considerations for Using the Data

### Licensing Information

While we release this dataset under the Apache 2.0 license, please consider citing the accompanying paper if you use this dataset or any derivative of it.

### Citation Information

BibTeX:

```bibtex
@article{wu2024towards,
  title={Towards Reliable Latent Knowledge Estimation in LLMs: In-Context Learning vs. Prompting Based Factual Knowledge Extraction},
  author={Wu, Qinyuan and Khan, Mohammad Aflah and Das, Soumi and Nanda, Vedant and Ghosh, Bishwamittra and Kolling, Camila and Speicher, Till and Bindschaedler, Laurent and Gummadi, Krishna P and Terzi, Evimaria},
  journal={arXiv preprint arXiv:2404.12957},
  year={2024}
}
```