---
license: odc-by
language:
- en
tags:
- math
- education
---

# Dataset Card for MathFish Tasks

This dataset is a derivative of [MathFish](https://huggingface.co/datasets/allenai/mathfish), in which dev set examples are inserted into prompts so that models can be assessed on their abilities to verify and tag standards in math problems. See [MathFish](https://huggingface.co/datasets/allenai/mathfish) for more details on the sources, creation, and uses of this data.

This data can be used in conjunction with our model API wrapper included in this [GitHub repository](https://github.com/allenai/mathfish/tree/main).

## Dataset Details

### Dataset Description

- **Curated by:** Lucy Li, Tal August, Rose E. Wang, Luca Soldaini, Courtney Allison, Kyle Lo
- **Funded by:** The Gates Foundation
- **Language(s) (NLP):** English
- **License:** ODC-By 1.0

## Dataset Structure

Files are named as follows:

```
data_{task format}-{mathfish data split}_{other parameters}_{prompt number}_{table format}.jsonl
```

Each line in a tagging file is formatted as follows:

```
{
  "id": unique instance ID,
  "dataset": some grouping of instances within a given task format,
  "messages": [
    {
      "role": "user",
      "prompt_template": "",
      "options": [
        # a list of tagging options
      ],
      "problem_activity": ""
    },
    {
      "role": "assistant",
      "response_template": "{option}",
      "response_format": "", # e.g. json or comma-separated list
      "correct_option_index": [
        # integer indices that correspond to "options" above
      ]
    }
  ]
}
```

Each instance may also include keys indicating few-shot exemplars. A minimal sketch showing how to read these files is included at the end of this card.

Note that files labeled with `entailment` are inputs for the task we call "verification" in our paper. Verification files follow a similar format to the tagging files above, but instead of an `options` key there is a `standards_description` key containing a natural language description of a math standard, and the assistant's dictionary includes a yes/no entry for whether the given problem `aligns` with the described standard.

## Dataset Creation

The prompts in this repository were selected by testing 15 candidate prompts from [this file](https://github.com/allenai/mathfish/blob/main/mathfish/datasets/prompts.json) across three models: Llama 2 70B, Mixtral 8x7B, and GPT-4-turbo. This repository includes each model's top three performing prompts on the tagging and verification tasks, to facilitate reproducibility of the findings in our paper (cited below).

## Citation

```
@misc{lucy2024evaluatinglanguagemodelmath,
  title={Evaluating Language Model Math Reasoning via Grounding in Educational Curricula},
  author={Li Lucy and Tal August and Rose E. Wang and Luca Soldaini and Courtney Allison and Kyle Lo},
  year={2024},
  eprint={2408.04226},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2408.04226},
}
```

## Dataset Card Contact

kylel@allenai.org
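
## Example: Reading a Tagging File

The sketch below shows one way to read a tagging task file and recover the gold options using the fields described in the Dataset Structure section. It is an illustrative sketch, not part of the official MathFish codebase, and the file name is a hypothetical placeholder; substitute any tagging `.jsonl` file from this repository.

```python
import json

# Hypothetical file name following the naming pattern described above;
# replace it with an actual tagging .jsonl file from this repository.
path = "data_tagging-dev_example_1_html.jsonl"

with open(path) as f:
    for line in f:
        instance = json.loads(line)

        # Each instance carries a user message (the prompt, options, and
        # problem/activity text) and an assistant message (the expected
        # response specification).
        user_msg, assistant_msg = instance["messages"]

        options = user_msg["options"]
        gold_indices = assistant_msg["correct_option_index"]
        gold_options = [options[i] for i in gold_indices]

        print(instance["id"], assistant_msg["response_format"], gold_options)
```

Model outputs produced with the API wrapper in the [MathFish repository](https://github.com/allenai/mathfish/tree/main) can then be scored against `correct_option_index` for tagging files, or against the yes/no alignment entry for verification (`entailment`) files.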