annotations_creators:
  - expert-generated
language_creators:
  - expert-generated
  - machine-generated
language:
  - en
license:
  - cc-by-sa-4.0
multilinguality:
  - monolingual
size_categories:
  - 1K<n<10K
source_datasets:
  - original
task_categories:
  - text2text-generation
task_ids: []
paperswithcode_id: spider-1
pretty_name: Spider
tags:
  - text-to-sql
dataset_info:
  config_name: spider
  features:
    - name: db_id
      dtype: string
    - name: query
      dtype: string
    - name: question
      dtype: string
    - name: query_toks
      sequence: string
    - name: query_toks_no_value
      sequence: string
    - name: question_toks
      sequence: string
  splits:
    - name: train
      num_bytes: 4743786
      num_examples: 7000
    - name: validation
      num_bytes: 682090
      num_examples: 1034
  download_size: 957246
  dataset_size: 5425876
configs:
  - config_name: spider
    data_files:
      - split: train
        path: spider/train-*
      - split: validation
        path: spider/validation-*
    default: true

Dataset Card for Spider

Table of Contents

Dataset Description

Dataset Summary

Spider is a large-scale, complex, cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students. The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.

Supported Tasks and Leaderboards

The leaderboard can be seen at https://yale-lily.github.io/spider

Languages

The text in the dataset is in English.

Dataset Structure

Data Instances

What do the instances that comprise the dataset represent?

Each instance is a natural language question paired with the equivalent SQL query.

How many instances are there in total?

There are 8,034 instances in total: 7,000 in the train split and 1,034 in the validation split.

What data does each instance consist of?

Each instance consists of a database identifier (db_id), a natural language question, the target SQL query, and tokenized versions of both the question and the query.
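To make the schema concrete, a single instance can be sketched as a plain Python dictionary following the feature declaration in the metadata above. The field names match the declared features; the values below are invented in the style of Spider, not copied from the dataset:

```python
# A hypothetical Spider-style instance; field names follow the declared
# features, but the values are invented for illustration only.
instance = {
    "db_id": "concert_singer",
    "question": "How many singers do we have?",
    "question_toks": ["How", "many", "singers", "do", "we", "have", "?"],
    "query": "SELECT count(*) FROM singer",
    "query_toks": ["SELECT", "count", "(", "*", ")", "FROM", "singer"],
    "query_toks_no_value": ["select", "count", "(", "*", ")", "from", "singer"],
}

# Every instance carries one question and one equivalent SQL query.
print(instance["db_id"], "->", instance["query"])
```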

Data Fields

  • db_id: Database name
  • question: Natural language to interpret into SQL
  • query: Target SQL query
  • query_toks: List of tokens for the query
  • query_toks_no_value: List of tokens for the query, with literal values replaced by a placeholder token
  • question_toks: List of tokens for the question
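The relationship between the three query fields can be illustrated with a made-up example (the token lists below are invented in the style of Spider; the assumption here is that `query_toks_no_value` lower-cases the tokens and abstracts literals into a placeholder):

```python
# Invented example showing how the tokenized fields relate to the raw query.
query = "SELECT name FROM singer WHERE age > 20"
query_toks = ["SELECT", "name", "FROM", "singer", "WHERE", "age", ">", "20"]
# Same token positions, lower-cased, with the literal value abstracted away.
query_toks_no_value = ["select", "name", "from", "singer", "where", "age", ">", "value"]

# For this query, joining the tokens with spaces recovers the query string.
assert " ".join(query_toks) == query
print(" ".join(query_toks_no_value))
```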

Data Splits

  • train: 7,000 question/SQL query pairs
  • validation (dev): 1,034 question/SQL query pairs

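The split sizes quoted above imply the following totals and proportions (simple arithmetic over the numbers stated in this card):

```python
# Split sizes as stated in the dataset metadata above.
splits = {"train": 7000, "validation": 1034}
total = sum(splits.values())

for name, n in splits.items():
    print(f"{name}: {n} pairs ({n / total:.1%} of {total})")
# train: 7000 pairs (87.1% of 8034)
# validation: 1034 pairs (12.9% of 8034)
```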

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

Who are the source language producers?

[More Information Needed]

Annotations

The dataset was annotated by 11 college students at Yale University.

Annotation process

Who are the annotators?

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

Discussion of Biases

[More Information Needed]

Other Known Limitations

Additional Information

The authors listed on the homepage maintain and support the dataset.

Dataset Curators

[More Information Needed]

Licensing Information

The Spider dataset is licensed under the CC BY-SA 4.0 license.


Citation Information

@inproceedings{yu-etal-2018-spider,
    title = "{S}pider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-{SQL} Task",
    author = "Yu, Tao  and
      Zhang, Rui  and
      Yang, Kai  and
      Yasunaga, Michihiro  and
      Wang, Dongxu  and
      Li, Zifan  and
      Ma, James  and
      Li, Irene  and
      Yao, Qingning  and
      Roman, Shanelle  and
      Zhang, Zilin  and
      Radev, Dragomir",
    editor = "Riloff, Ellen  and
      Chiang, David  and
      Hockenmaier, Julia  and
      Tsujii, Jun{'}ichi",
    booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
    month = oct # "-" # nov,
    year = "2018",
    address = "Brussels, Belgium",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D18-1425",
    doi = "10.18653/v1/D18-1425",
    pages = "3911--3921",
    archivePrefix={arXiv},
    eprint={1809.08887},
    primaryClass={cs.CL},
}

Contributions

Thanks to @olinguyen for adding this dataset.