---
license: cc-by-nc-sa-4.0
task_categories:
- question-answering
language:
- ar
- asm
- bn
- en
- hi
- ne
- tr
tags:
- question-answering
- cultural-aligned
pretty_name: 'MultiNativQA -- Multilingual Native and Culturally Aligned QA'
size_categories:
- 10K<n<100K
---

This dataset consists of two types of data: annotated and un-annotated. We consider the un-annotated data as additional data. Please find the data statistics below.

Statistics of our **MultiNativQA** dataset, including languages with the final annotated QA pairs from different locations:

| Language | City | Train | Dev | Test | Total |
|-----------|-----------|------------|-----------|------------|------------|
| Arabic | Doha | 3,649 | 492 | 988 | 5,129 |
| Assamese | Assam | 1,131 | 157 | 545 | 1,833 |
| Bangla | Dhaka | 7,018 | 953 | 1,521 | 9,492 |
| Bangla | Kolkata | 6,891 | 930 | 2,146 | 9,967 |
| English | Dhaka | 4,761 | 656 | 1,113 | 6,530 |
| English | Doha | 8,212 | 1,164 | 2,322 | 11,698 |
| Hindi | Delhi | 9,288 | 1,286 | 2,745 | 13,319 |
| Nepali | Kathmandu | -- | -- | 561 | 561 |
| Turkish | Istanbul | 3,527 | 483 | 1,218 | 5,228 |
| **Total** | | **44,477** | **6,121** | **13,159** | **63,757** |

Statistics of the un-annotated additional data:

| Language-Location | # of QA |
|-------------------------|---------------|
| Arabic-Egypt | 7,956 |
| Arabic-Palestine | 5,679 |
| Arabic-Sudan | 4,718 |
| Arabic-Syria | 11,288 |
| Arabic-Tunisia | 14,789 |
| Arabic-Yemen | 4,818 |
| English-New York | 6,454 |
| **Total** | **55,702** |

### How to download data

```python
import os
import json

from datasets import load_dataset

dataset_names = ['arabic_qa', 'assamese_in', 'bangla_bd', 'bangla_in',
                 'english_bd', 'english_qa', 'hindi_in', 'nepali_np',
                 'turkish_tr']
base_dir = "./MNQA/"

for dname in dataset_names:
    output_dir = os.path.join(base_dir, dname)
    # Load each language configuration.
    dataset = load_dataset("QCRI/MultiNativQA", name=dname)
    # Save the dataset to the specified directory.
    # This will save all splits to the output directory.
    dataset.save_to_disk(output_dir)

    # Iterate over the splits to also save the data in JSON format.
    for split in ['train', 'dev', 'test']:
        # Some configurations (e.g., nepali_np) ship only a test split.
        if split not in dataset:
            continue
        data = [item for item in dataset[split]]
        output_file = os.path.join(output_dir, f"{split}.json")
        with open(output_file, 'w', encoding='utf-8') as f:
            json.dump(data, f, ensure_ascii=False, indent=4)
```

### License

The dataset is distributed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). The full license text can be found in the accompanying licenses_by-nc-sa_4.0_legalcode.txt file.

### Contact & Additional Information

For more details, please visit our [official website](http://nativqa.gitlab.io/).

### Citation

You can access the full paper [here](https://arxiv.org/pdf/2407.09823).

```
@article{hasan2024nativqa,
  title={NativQA: Multilingual Culturally-Aligned Natural Query for LLMs},
  author={Hasan, Md Arid and Hasanain, Maram and Ahmad, Fatema and Laskar, Sahinur Rahman and Upadhyay, Sunaya and Sukhadia, Vrunda N and Kutlu, Mucahid and Chowdhury, Shammur Absar and Alam, Firoj},
  journal={arXiv preprint arXiv:2407.09823},
  year={2024},
  publisher={arXiv:2407.09823},
  url={https://arxiv.org/abs/2407.09823},
}
```
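The per-split JSON files written by the download script above are plain lists of QA records, so they can be read back without the `datasets` library. Below is a minimal sketch of that round trip using a synthetic record; the `question`/`answer` field names are illustrative only, and the actual fields depend on the configuration.

```python
import json
import os
import tempfile

# Stand-in for a saved split file. The real files are produced by the
# download script above; this record's fields are hypothetical.
tmp_dir = tempfile.mkdtemp()
sample = [{"question": "example?", "answer": "example."}]
split_file = os.path.join(tmp_dir, "test.json")
with open(split_file, "w", encoding="utf-8") as f:
    json.dump(sample, f, ensure_ascii=False, indent=4)

# Reading a saved split back is a plain JSON load: each file is a
# list of QA records, one dict per example.
with open(split_file, "r", encoding="utf-8") as f:
    records = json.load(f)

print(len(records))  # number of QA pairs in the split
```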