patrickfleith committed f2c93e5 (parent: a2aa78a): Update README.md
## Quickstart

- Explore the dataset here: https://huggingface.co/datasets/patrickfleith/Astro-mcqa/viewer/default/train
- Evaluate an LLM (Mistral-7B) on AstroMCQA in Colab here: <a target="_blank" href="https://colab.research.google.com/github/patrickfleith/astro-llms-notebooks/blob/main/Evaluate_an_HuggingFace_LLM_on_a_Domain_Specific_Benchmark_Dataset.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
  </a>
It is not suitable for training or fine-tuning an LLM due to the very limited size of the dataset, although it could be combined with other task and science datasets for meta-learning.

# DATASET DESCRIPTION

### Access

- Manual download from the Hugging Face Hub: https://huggingface.co/datasets/patrickfleith/Astro-mcqa
- Or with Python:

```python
# Minimal loading example; requires the `datasets` library (pip install datasets).
from datasets import load_dataset

dataset = load_dataset("patrickfleith/Astro-mcqa")
```
All instances in the dataset are in English.

200 expert-created Multiple Choice Questions and Answers.

### Types of Questions

- Some questions test expected generic knowledge in the field of space science and engineering.
- Some questions require reasoning capabilities.
- Some questions require mathematical operations, since a numerical result is expected (exam-style questions).
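Benchmarks like this are typically scored with exact-match accuracy over the selected options. A minimal sketch of such a scorer follows; note that the record fields (`question`, `options`, `answer`) and the toy examples are illustrative assumptions, not the confirmed schema of AstroMCQA:

```python
# Exact-match accuracy scorer for MCQA-style predictions.
# NOTE: the field names ("question", "options", "answer") are assumed for
# illustration; check the actual dataset schema before use.

def mcqa_accuracy(examples, predict):
    """Fraction of examples where predict(example) equals the gold answer."""
    if not examples:
        return 0.0
    correct = sum(1 for ex in examples if predict(ex) == ex["answer"])
    return correct / len(examples)

# Toy usage with a dummy "always pick the first option" predictor.
toy = [
    {"question": "Which orbit has a roughly 24 h period?",
     "options": ["GEO", "LEO"], "answer": "GEO"},
    {"question": "Which propellant pair is hypergolic?",
     "options": ["LOX/LH2", "NTO/MMH"], "answer": "NTO/MMH"},
]
first_option = lambda ex: ex["options"][0]
print(mcqa_accuracy(toy, first_option))  # one of two correct -> 0.5
```

In practice the `predict` callable would wrap an LLM call that maps a question and its options to one chosen option string.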
### Topics Covered
Different subdomains of space engineering are covered, including propulsion, operations, human spaceflight, space environment and effects, the space project lifecycle, communication and link analysis, and more.

# LICENSE

AstroMCQA © 2024 by Patrick Fleith is licensed under Creative Commons Attribution 4.0 International.

# USAGE AND GUIDELINES

#### Restrictions
No restrictions. Please provide the correct attribution following the license terms.

#### Citation
P. Fleith, AstroMCQA – Astronautics multiple choice questions and answers benchmark dataset for the domain of Space Mission Engineering for LLM Evaluation, (2024).
#### Contact Information
Reach me here on the community tab or on LinkedIn (Patrick Fleith) with a note.

#### Current Limitations and future work
- Only 200 multiple choice questions and answers. This makes it unsuitable for fine-tuning purposes, although it could be integrated into a larger pool of datasets compiled for fine-tuning.
- While this is a decent size for enabling LLM evaluation, space engineering expert time is scarce and expensive: on average it takes 8 minutes to create one MCQA example. Having more examples would make the benchmark much more robust.