---
license: cc-by-sa-4.0
task_categories:
- question-answering
task_ids:
- extractive-qa
language:
- kk
source_datasets:
- original
- extended|natural_questions
- extended|wikipedia
pretty_name: KazQAD
configs:
- config_name: kazqad
  data_files:
  - split: train
    path: kazqad/kazqad-reading-comprehension-v1.0-kk-train.jsonl.gz
  - split: validation
    path: kazqad/kazqad-reading-comprehension-v1.0-kk-validation.jsonl.gz
  - split: test
    path: kazqad/kazqad-reading-comprehension-v1.0-kk-test.jsonl.gz
  default: true
- config_name: nq-translate-kk
  data_files: nq-translate-kk/nq-reading-comprehension-translate-kk.jsonl.gz
- config_name: nq-original-en
  data_files: nq-original-en/nq-reading-comprehension-original-en.jsonl.gz
---

# Dataset Card for KazQAD

## Dataset Description

- **Repository:** [https://github.com/IS2AI/KazQAD](https://github.com/IS2AI/KazQAD)
- **Paper:** https://arxiv.org/abs/2404.04487

### Dataset Summary

**KazQAD** is a **Kaz**akh open-domain **Q**uestion **A**nswering **D**ataset that can be used in both reading comprehension and full ODQA settings, as well as for information retrieval experiments.

This repository contains only the data for the reading comprehension task (extractive QA). The document collection and relevance judgments for the information retrieval task are available [here](https://huggingface.co/datasets/issai/kazqad-retrieval).

The main dataset (subset `kazqad`) contains just under 6,000 unique questions with extracted short answers, accompanied by passages from Kazakh Wikipedia. The questions come from two sources: items translated from the Natural Questions (NQ) dataset (training only) and original questions from the Kazakh Unified National Testing (UNT) exam (development and test).

As supplementary resources, we provide examples from the NQ dataset that are not represented in Kazakh Wikipedia (subset `nq-original-en`) and their machine translations into Kazakh (subset `nq-translate-kk`). These data were used as additional training data for our baselines.

## Dataset Structure

### Data Instances

An example from the dataset looks as follows:

```
{
  "id": "lit0176lit#5075_4_1",
  "title": "Сара Тастанбекқызы",
  "context": "Осылайша қаршадайынан басы дау шарға түскен ақын қыздың бағының ашылуына осы кезде Найман елін аралап, серілік жасап жүрген атақты Біржан салмен кездесіп, шаршы топтың алдына онымен айтысуы үлкен себепші болады. Бұл айтыс Сараның халық алдындағы беделін арттырып, атағын алысқа жаяды...",
  "question": "Айтыс барысында әйел теңдігі мәселесін қандай айтыскерлер көтерген?",
  "answers": {
    "text": ["Біржан", "Сараның"],
    "answer_start": [131, 222]
  }
}
```

### Data Fields

- `id`: a unique identifier for the question-passage pair, stored as a string.
- `title`: the title of the Wikipedia article from which the context passage was extracted, stored as a string.
- `context`: the text passage extracted from the Wikipedia article, stored as a string.
- `question`: the question asked, stored as a string.
- `answers`: a dictionary containing the answer(s) to the question, with the following sub-features:
  - `text`: the answer text(s), stored as a list of strings.
  - `answer_start`: the character position(s) in the context at which the answer(s) start, stored as a list of integers.
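The subsets can be loaded with the 🤗 `datasets` library. The snippet below is a minimal sketch, assuming the dataset identifier `issai/kazqad` on the Hugging Face Hub; it loads the main `kazqad` config and checks that each `answer_start` offset points at the corresponding answer span in `context` (SQuAD-style character offsets).

```python
from datasets import load_dataset

# Load the main reading-comprehension subset; the other configs are
# "nq-translate-kk" and "nq-original-en" (train split only).
# NOTE: "issai/kazqad" is an assumed repository id; adjust if needed.
kazqad = load_dataset("issai/kazqad", "kazqad")

example = kazqad["train"][0]
print(example["question"])
print(example["title"])

# Recover each answer span from the context via its character offset.
for text, start in zip(example["answers"]["text"],
                       example["answers"]["answer_start"]):
    assert example["context"][start:start + len(text)] == text
    print(text, start)
```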
### Data Splits

| **subset**          | **split**  | **#examples** | **#questions** | **#passages** | **#answers** |
|:-------------------:|:----------:|:-------------:|:--------------:|:-------------:|:------------:|
| **kazqad**          | train      | 3,163         | 2,920          | 1,993         | 3,826        |
|                     | validation | 764           | 545            | 697           | 910          |
|                     | test       | 2,713         | 1,927          | 2,137         | 3,315        |
| **nq-original-en**  | train      | 68,770        | 68,770         | 51,414        | 78,862       |
| **nq-translate-kk** | train      | 61,606        | 61,198         | 53,204        | 71,242       |

## Citation Information

```
@inproceedings{kazqad,
  author    = {Rustem Yeshpanov and Pavel Efimov and Leonid Boytsov and Ardak Shalkarbayuli and Pavel Braslavski},
  title     = {{KazQAD}: Kazakh Open-Domain Question Answering Dataset},
  booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
  pages     = {9645--9656},
  year      = {2024},
}
```