HusnaManakkot committed
Commit
4c1ba22
1 Parent(s): 16322a3

Update README.md

Files changed (1)
  1. README.md +137 -10
README.md CHANGED
@@ -6,22 +6,149 @@ language:
  - en
  tags:
  - text-to-sql
- pretty_name: new spider
+ pretty_name: spider
  size_categories:
  - 1K<n<10K
  ---
- # Modify Spider Dataset
-
- This repository contains a Python script (`modify_spider.py`) that demonstrates how to add a new example to the Spider dataset using the Hugging Face `datasets` library.
-
- ## Prerequisites
-
- Before running the script, ensure that you have the following prerequisites installed:
-
- - Python 3.6 or later
- - Hugging Face `datasets` library
-
- You can install the required library using the following command:
-
- ```bash
- pip install datasets
+
+ # Dataset Card for Spider
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** https://yale-lily.github.io/spider
+ - **Repository:** https://github.com/taoyds/spider
+ - **Paper:** https://www.aclweb.org/anthology/D18-1425/
+ - **Point of Contact:** [Yale LILY](https://yale-lily.github.io/)
+
+ ### Dataset Summary
+
+ Spider is a large-scale, complex, and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students.
+ The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.
+
+ ### Supported Tasks and Leaderboards
+
+ The leaderboard can be seen at https://yale-lily.github.io/spider
+
+ ### Languages
+
+ The text in the dataset is in English.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ **What do the instances that comprise the dataset represent?**
+
+ Each instance consists of a natural language question and its equivalent SQL query.
+
+ **How many instances are there in total?**
+
+ Per the data splits below, 7,000 train and 1,034 dev pairs are documented, i.e. 8,034 instances in total.
+
+ **What data does each instance consist of?**
+
+ The fields listed under [Data Fields](#data-fields) below.
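+
+ To make the shape concrete, here is an illustrative instance (a hand-written sketch following the field descriptions below, not a verbatim record from the dataset):
+
+ ```python
+ # Hypothetical Spider-style instance; values are illustrative only.
+ example = {
+     "db_id": "department_management",
+     "question": "How many heads of the departments are older than 56?",
+     "query": "SELECT count(*) FROM head WHERE age > 56",
+     "query_toks": ["SELECT", "count", "(", "*", ")", "FROM", "head",
+                    "WHERE", "age", ">", "56"],
+     "query_toks_no_value": ["select", "count", "(", "*", ")", "from", "head",
+                             "where", "age", ">", "value"],
+     "question_toks": ["How", "many", "heads", "of", "the", "departments",
+                       "are", "older", "than", "56", "?"],
+ }
+ ```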
+
+ ### Data Fields
+
+ * **db_id**: Name of the database the question is asked against
+ * **question**: Natural language question to interpret into SQL
+ * **query**: Target SQL query
+ * **query_toks**: List of tokens for the query
+ * **query_toks_no_value**: List of tokens for the query with literal values replaced by a placeholder
+ * **question_toks**: List of tokens for the question
+
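+ The fields can be inspected with the Hugging Face `datasets` library. A minimal sketch, assuming the canonical `spider` dataset id on the Hub (substitute this repository's id if it differs):
+
+ ```python
+ from datasets import load_dataset
+
+ # Load Spider from the Hugging Face Hub ("spider" id assumed).
+ spider = load_dataset("spider")
+
+ # Print every documented field of the first training example.
+ first = spider["train"][0]
+ for field in ["db_id", "question", "query",
+               "query_toks", "query_toks_no_value", "question_toks"]:
+     print(f"{field}: {first[field]}")
+ ```
+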
+ ### Data Splits
+
+ **train**: 7,000 question and SQL query pairs
+ **dev**: 1,034 question and SQL query pairs
+
+ [More Information Needed]
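+
+ The split sizes can be double-checked when loading (note that the `datasets` library exposes the dev set under the name `validation`; as above, the `spider` id is an assumption):
+
+ ```python
+ from datasets import load_dataset
+
+ spider = load_dataset("spider")
+ print(spider)  # DatasetDict listing each split and its row count
+
+ assert len(spider["train"]) == 7000       # train: 7,000 pairs
+ assert len(spider["validation"]) == 1034  # dev: 1,034 pairs
+ ```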
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ The dataset was annotated by 11 college students at Yale University.
+
+ #### Annotation process
+
+ #### Who are the annotators?
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ ## Additional Information
+
+ The authors listed on the homepage maintain and support the dataset.
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ The Spider dataset is licensed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode) license.
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ ```
+ @article{yu2018spider,
+   title={Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task},
+   author={Yu, Tao and Zhang, Rui and Yang, Kai and Yasunaga, Michihiro and Wang, Dongxu and Li, Zifan and Ma, James and Li, Irene and Yao, Qingning and Roman, Shanelle and others},
+   journal={arXiv preprint arXiv:1809.08887},
+   year={2018}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@olinguyen](https://github.com/olinguyen) for adding this dataset.