This is the dataset card for the BookSQL dataset; details about the dataset can be found at: https://github.com/Exploration-Lab/BookSQL
The BookSQL dataset is released under the CC BY-NC-SA license. Users may share and adapt the dataset provided they give credit to us and do not use it for any commercial purposes. In other words, the dataset may be used for research purposes only; commercial use is not allowed.
NOTE: We are not releasing the gold SQL queries for the test set, as we maintain a leaderboard where users can upload their model's predictions and have them evaluated.
The paper associated with the dataset can be found here.
If you use the dataset in your research, please cite the paper:
@inproceedings{kumar-etal-2024-booksql,
title = "BookSQL: A Large Scale Text-to-SQL Dataset for Accounting Domain",
author = "Kumar, Rahul and Raja, Amar and Harsola, Shrutendra and Subrahmaniam, Vignesh and Modi, Ashutosh",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics",
month = "March",
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
abstract = "Several large-scale datasets (e.g., WikiSQL, Spider) for developing natural language interfaces to databases have recently been proposed. These datasets cover a wide breadth of domains but fall short on some essential domains, such as finance and accounting. Given that accounting databases are used worldwide, particularly by non-technical people, there is an imminent need to develop models that could help extract information from accounting databases via natural language queries. In this resource paper, we aim to fill this gap by proposing a new large-scale Text-to-SQL dataset for the accounting and financial domain: BookSQL. The dataset consists of 100k natural language queries-SQL pairs, and accounting databases of 1 million records. We experiment with and analyze existing state-of-the-art models (including GPT-4) for the Text-to-SQL task on BookSQL. We find significant performance gaps, thus pointing towards developing more focused models for this domain.",
}
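The Text-to-SQL task described in the abstract maps a natural-language question over an accounting database to an executable SQL query. A minimal, self-contained sketch of what one such query pair looks like when executed (the table schema, question, and SQL below are illustrative examples, not records taken from BookSQL):

```python
import sqlite3

# Build a toy in-memory accounting table (illustrative schema, not BookSQL's).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (id INTEGER, customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [(1, "Acme", 120.0), (2, "Acme", 80.0), (3, "Globex", 50.0)],
)

# A Text-to-SQL model would map the natural-language question to the SQL below.
question = "What is the total amount billed to Acme?"
sql = "SELECT SUM(amount) FROM transactions WHERE customer = 'Acme'"

total = conn.execute(sql).fetchone()[0]
print(total)  # 200.0
```

Evaluation on the leaderboard compares a model's predicted SQL against the hidden gold SQL for each test question; the example above only illustrates the input/output shape of one query pair.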