---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
- machine-generated
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: spider-1
pretty_name: Spider
tags:
- text-to-sql
dataset_info:
  config_name: spider
  features:
  - name: db_id
    dtype: string
  - name: query
    dtype: string
  - name: question
    dtype: string
  - name: query_toks
    sequence: string
  - name: query_toks_no_value
    sequence: string
  - name: question_toks
    sequence: string
  splits:
  - name: train
    num_bytes: 4743786
    num_examples: 7000
  - name: validation
    num_bytes: 682090
    num_examples: 1034
  download_size: 957246
  dataset_size: 5425876
configs:
- config_name: spider
  data_files:
  - split: train
    path: spider/train-*
  - split: validation
    path: spider/validation-*
  default: true
---


# Dataset Card for Spider

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://yale-lily.github.io/spider
- **Repository:** https://github.com/taoyds/spider
- **Paper:** https://www.aclweb.org/anthology/D18-1425/
- **Point of Contact:** [Yale LILY](https://yale-lily.github.io/)

### Dataset Summary

Spider is a large-scale, complex, cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students.
The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.
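
The dataset can be loaded with the Hugging Face `datasets` library. A minimal sketch; the Hub ID `spider` is an assumption here, so substitute this repository's ID if it differs:

```python
from datasets import load_dataset

# Hub ID "spider" is an assumption; substitute this repository's ID if it differs.
spider = load_dataset("spider")

example = spider["train"][0]
print(example["question"])  # natural language question
print(example["query"])     # the corresponding gold SQL query
```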

### Supported Tasks and Leaderboards

The dataset supports text-to-SQL semantic parsing (text2text-generation). The leaderboard can be seen at https://yale-lily.github.io/spider.

### Languages

The text in the dataset is in English.

## Dataset Structure

### Data Instances

**What do the instances that comprise the dataset represent?**

Each instance is a natural language question paired with the equivalent SQL query, along with the ID of the database the query is executed against.

**How many instances are there in total?**

There are 8,034 instances in total: 7,000 in the train split and 1,034 in the validation split.

**What data does each instance consist of?**

Each instance consists of a database ID, a natural language question, the gold SQL query, and tokenized versions of both the question and the query (see [Data Fields](#data-fields)).
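
For illustration, an instance has the following shape. The field values below are hypothetical, shown only to illustrate the schema; they are not copied from the dataset.

```python
# A hypothetical instance, shown only to illustrate the schema.
example = {
    "db_id": "concert_singer",
    "question": "How many singers do we have?",
    "query": "SELECT count(*) FROM singer",
    "query_toks": ["SELECT", "count", "(", "*", ")", "FROM", "singer"],
    "query_toks_no_value": ["select", "count", "(", "*", ")", "from", "singer"],
    "question_toks": ["How", "many", "singers", "do", "we", "have", "?"],
}
```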

### Data Fields

* **db_id**: Name of the database the query is executed against
* **question**: Natural language question to be interpreted into SQL
* **query**: Target SQL query
* **query_toks**: List of tokens for the query
* **query_toks_no_value**: List of tokens for the query with literal values replaced by a placeholder
* **question_toks**: List of tokens for the question
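
A minimal usage sketch for these fields, assuming the dataset has been loaded as shown above; it converts the train split to a pandas DataFrame and counts question/query pairs per database:

```python
from datasets import load_dataset

# Assumes the Hub ID "spider", as in the loading sketch above.
train = load_dataset("spider", split="train")
df = train.to_pandas()

# Count question/query pairs per database.
per_db = df.groupby("db_id").size().sort_values(ascending=False)
print(per_db.head())

# The tokenized fields are lists of strings:
row = df.iloc[0]
print(row["question_toks"])
print(row["query_toks_no_value"])  # literal values replaced by a placeholder
```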

### Data Splits

**train**: 7,000 question/SQL query pairs
**validation (dev)**: 1,034 question/SQL query pairs
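
The split sizes can be verified directly after loading; a small sketch under the same Hub ID assumption as above:

```python
from datasets import load_dataset

spider = load_dataset("spider")
for split, ds in spider.items():
    print(f"{split}: {len(ds)} examples")
# Expected: train: 7000 examples, validation: 1034 examples
```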

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

[More Information Needed]

### Annotations

The dataset was annotated by 11 college students at Yale University.

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

## Additional Information

The authors listed on the homepage maintain and support the dataset.

### Dataset Curators

[More Information Needed]

### Licensing Information

The Spider dataset is licensed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode) license.

### Citation Information

```
@article{yu2018spider,
  title={Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task},
  author={Yu, Tao and Zhang, Rui and Yang, Kai and Yasunaga, Michihiro and Wang, Dongxu and Li, Zifan and Ma, James and Li, Irene and Yao, Qingning and Roman, Shanelle and others},
  journal={arXiv preprint arXiv:1809.08887},
  year={2018}
}
```

### Contributions

Thanks to [@olinguyen](https://github.com/olinguyen) for adding this dataset.