---
license: cc-by-4.0
task_categories:
- question-answering
language:
- en
tags:
- science
- space
- astronautics
pretty_name: AstroMCQA
size_categories:
- n<1K
---
# AstroMCQA Dataset
## Purpose and scope
The primary purpose of AstroMCQA is to enable application developers in the domain of space engineering to comparatively assess LLM performance on the specific task of multiple-choice question answering.
## Intended Usage
Comparative assessment of different LLMs; model evaluation, audit, and model selection; assessment of different quantization levels and prompting strategies; and assessment of the effectiveness of domain adaptation or domain-specific fine-tuning.
## Quickstart
- Explore the dataset here: https://huggingface.co/datasets/patrickfleith/Astro-mcqa/viewer/default/train
- Evaluate an LLM (Mistral-7B) on AstroMCQA in Colab here: <a target="_blank" href="https://colab.research.google.com/github/patrickfleith/astro-llms-notebooks/blob/main/Evaluate_an_HuggingFace_LLM_on_a_Domain_Specific_Benchmark_Dataset.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
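
For a minimal local sketch of what such an evaluation can look like (an illustrative outline, not the notebook's exact code: the prompt format and exact-set-match scoring are assumptions, and `predict_fn` stands in for your own model call), using the fields described under Structure below:

```python
from datasets import load_dataset

dataset = load_dataset("patrickfleith/Astro-mcqa", split="train")

def build_prompt(example):
    # Illustrative prompt format (an assumption, not prescribed by the dataset)
    lines = [example["question"]]
    lines += [f"{i}. {p}" for i, p in enumerate(example["propositions"])]
    lines.append("Answer with the numbers of all correct propositions.")
    return "\n".join(lines)

def exact_match(predicted_indices, labels):
    # Exact-set match: all correct propositions selected, no incorrect ones
    gold = {i for i, label in enumerate(labels) if label == 1}
    return set(predicted_indices) == gold

def evaluate(predict_fn):
    # predict_fn: hypothetical callable mapping a prompt string to a set of
    # predicted proposition indices (e.g. a wrapper around an LLM call)
    hits = sum(exact_match(predict_fn(build_prompt(ex)), ex["labels"]) for ex in dataset)
    return hits / len(dataset)
```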
## What is AstroMCQA GOOD for?
The primary purpose of AstroMCQA is to enable application developers in the domain of space mission design and operations to address questions such as: which LLM should I use, and how does it perform across the different subdomains? It enables benchmarking of different models, model sizes, quantization methods, and prompt engineering strategies, as well as measuring the effectiveness of fine-tuning, on the specific task of multiple-choice question answering in space engineering.
## What is AstroMCQA NOT GOOD for?
It is not suitable for training or fine-tuning LLMs due to the very limited size of the dataset, although it could be combined with other task and science datasets for meta-learning.
# DATASET DESCRIPTION
### Access
- Manual download from the Hugging Face Hub: https://huggingface.co/datasets/patrickfleith/Astro-mcqa
- Or with Python:
```python
from datasets import load_dataset

# Downloads the dataset from the Hugging Face Hub and caches it locally
dataset = load_dataset("patrickfleith/Astro-mcqa")
```
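
Continuing from the snippet above, a quick way to inspect what was loaded (the `train` split name matches the dataset viewer linked in the Quickstart):

```python
print(dataset)              # available splits and row counts
print(dataset["train"][0])  # first MCQA instance with all its fields
```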
### Structure
200 expert-created multiple-choice questions and answers, one question per row in a comma-separated values (CSV) file. Each instance is made of the following fields (columns):
- **question**: a string.
- **propositions**: a list of strings. Each item in the list is one choice. At least one of the propositions correctly answers the question, but there can be multiple correct propositions; in some cases, all propositions are correct.
- **labels**: a list of integers (0/1). Each element in the labels list corresponds to the proposition at the same position in the propositions list. A label of 0 means the proposition is incorrect; a label of 1 means it is a correct choice to answer the question (see the sketch after this list).
- **justification**: an optional string which may provide a justification of the answer.
- **answerable**: a boolean indicating whether the question is answerable. At the moment, AstroMCQA only includes answerable questions.
- **uid**: a unique identifier for the MCQA instance, useful for traceability in further processing tasks.
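
As a concrete illustration of how the **propositions** and **labels** fields line up (a minimal sketch; the printed values depend on the actual first row):

```python
from datasets import load_dataset

dataset = load_dataset("patrickfleith/Astro-mcqa", split="train")
example = dataset[0]

print(example["question"])
# Keep only the propositions whose label is 1, i.e. the correct choices
correct = [p for p, label in zip(example["propositions"], example["labels"]) if label == 1]
print(correct)
```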
### Metadata
The dataset is version-controlled and its commit history is available here: https://huggingface.co/datasets/patrickfleith/Astro-mcqa/commits/main
### Languages
All instances in the dataset are in English.
### Size
200 expert-created Multiple Choice Questions and Answers
### Types of Questions
- Some questions test generic knowledge expected in the field of space science and engineering.
- Some questions require reasoning capabilities.
- Some questions require mathematical operations, since a numerical result is expected (exam-style questions).
### Topics Covered
Different subdomains of space engineering are covered, including propulsion, operations, human spaceflight, space environment and effects, space project lifecycle, communication and link analysis, and more.
# USAGE AND GUIDELINES
#### License
AstroMCQA © 2024 by Patrick Fleith is licensed under Creative Commons Attribution 4.0 International
#### Restrictions
No restrictions. Please provide correct attribution following the license terms.
#### Citation
P. Fleith, AstroMCQA – Astronautics multiple-choice questions and answers benchmark dataset for the domain of Space Mission Engineering for LLM evaluation, 2024.
#### Update Frequency
May be updated based on feedback. If you want to become a contributor, let me know.
#### Have a feedback or spot an error?
Use the community discussion tab directly on the Hugging Face Astro-mcqa dataset page.
#### Contact Information
Reach me here on the community tab or on LinkedIn (Patrick Fleith) with a note.
#### Current Limitations and future work
- Only 200 multiple-choice questions and answers. This makes the dataset unsuitable for fine-tuning on its own, although it could be integrated into a larger pool of datasets compiled for fine-tuning.
- While the dataset is a decent size for LLM evaluation, space engineering expert time is scarce and expensive: on average, it takes 8 minutes to create one MCQA example. More examples would make evaluations more robust.
- The dataset might be biased due to the very low number of annotators.
- The dataset might be biased toward European space programs.
- The dataset might not cover all subsystems or subdomains of astronautics, although we did our best to cover the annotators' domains of expertise.
- No peer review yet. Ideally, we would like a quality control process to ensure the high quality and correctness of each example in the dataset. Given the limited resources, this is not yet possible. Feel free to contribute if you feel this is an issue.