---
{}
---
## Dataset Card: CoT-Collection
This dataset covers a variety of tasks that require reasoning and diverse Chain-of-Thought strategies. Some tasks are intentionally simple and need minimal reasoning, while others are more complex.
*Note* that BIG-Bench Hard includes 27 different tasks.
*Also note* that the out-of-distribution test set has a significantly different task distribution than the training set.
The dataset includes GPQA, which should not be published in plain text online.
Designed for reinforcement learning, this dataset does not include Chain-of-Thought annotations; it provides only questions and answers. A subset with automatically generated Chain-of-Thought annotations is available here: [jeggers/CoT-Collection-Rationales](https://huggingface.co/datasets/jeggers/CoT-Collection-Rationales).
The dataset consists of short questions, each limited to 200 tokens for the question and correct answer (measured with the Llama-2 tokenizer).
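As a sanity check, the sketch below loads the training split and counts question-plus-answer tokens. The repo id `jeggers/CoT-Collection`, the column names `question`/`answer`, and the tokenizer id `meta-llama/Llama-2-7b-hf` are assumptions made for illustration; check the dataset viewer for the actual schema.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# NOTE: repo id and column names are assumptions; verify against the dataset viewer.
ds = load_dataset("jeggers/CoT-Collection", split="train")
print(ds.column_names)

# The official Llama-2 tokenizer is gated on the Hub; any Llama-2-compatible
# tokenizer should give comparable counts.
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

def count_tokens(example):
    # Tokenize question plus answer and record the combined length.
    text = example["question"] + " " + example["answer"]
    return {"n_tokens": len(tok(text)["input_ids"])}

ds = ds.map(count_tokens)
print(max(ds["n_tokens"]))  # should stay around the stated 200-token limit
```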
This dataset is divided into four splits (a loading sketch follows the list):
- `train`: The training set.
- `test_in_dist`: The test set with the same task distribution as the training set.
- `finetune`: Intended for finetuning, with the same distribution as `train`.
- `test_out_dist`: The test set with a different task distribution than the training set.
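A minimal loading sketch for the splits, assuming the dataset is hosted at `jeggers/CoT-Collection` (adjust the repo id if it lives elsewhere):

```python
from datasets import load_dataset

# Loading without a split argument returns a DatasetDict with all four splits.
splits = load_dataset("jeggers/CoT-Collection")
for name, split in splits.items():
    print(f"{name}: {len(split)} samples")

# A single split can also be loaded directly, e.g. the out-of-distribution test set:
test_ood = load_dataset("jeggers/CoT-Collection", split="test_out_dist")
print(test_ood[0])
```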
The code used to create this dataset can be found here: [dataset-builder](https://github.com/Jorineg/dataset-builder).
### Statistics for the training set (61972 samples)
![train_dataset](train_dataset.png)
| Dataset | Proportion | Samples | Random-guess accuracy |
|:-------------------------------|:-------------|:-----------|:--------------------------|
| **Total** | **100.00%** | **61972** | **19.23%** |
| big bench hard | 6.65% | 4123 | 15.82% |
| MathQA | 5.44% | 3369 | 22.25% |
| MATH | 5.43% | 3368 | 4.29% |
| crosswords | 5.43% | 3368 | 0.12% |
| ape210k | 5.43% | 3368 | 2.08% |
| MMLU-Pro | 5.43% | 3368 | 11.68% |
| DMath | 5.43% | 3368 | 4.12% |
| SuperGLUE rte | 2.71% | 1682 | 50.16% |
| SuperGLUE wic | 2.71% | 1682 | 50.00% |
| SuperGLUE boolq | 2.71% | 1682 | 62.31% |
| strategy-qa | 2.71% | 1681 | 53.23% |
| ARC challenge | 2.71% | 1681 | 25.79% |
| jeopardy | 2.71% | 1681 | 0.11% |
| hellaswag | 2.71% | 1681 | 25.06% |
| winogrande | 2.71% | 1681 | 50.01% |
| riddle sense | 2.71% | 1681 | 20.64% |
| com2sense | 2.71% | 1681 | 50.00% |
| ANLI | 2.71% | 1681 | 40.51% |
| reverse words | 2.71% | 1681 | 0.01% |
| ARC easy | 2.71% | 1681 | 24.86% |
| drop single number | 2.71% | 1681 | 12.50% |
| logiqa2 | 2.71% | 1681 | 26.70% |
| hotpot qa hard | 2.71% | 1681 | 3.24% |
| asdiv | 2.58% | 1598 | 3.78% |
| reversal curse | 1.89% | 1172 | 0.26% |
| count chars in scrambled words | 1.35% | 838 | 16.68% |
| count words in paragraph | 1.35% | 838 | 2.08% |
| count chars in paragraph | 1.35% | 838 | 0.96% |
| count chars in english words | 1.35% | 838 | 16.68% |
| reclor | 1.35% | 834 | 25.28% |
| svamp | 1.25% | 774 | 7.70% |
| AIME | 1.06% | 654 | 0.96% |
| truthful qa | 1.01% | 624 | 22.40% |
| SuperGLUE WSC | 0.69% | 429 | 53.25% |
| LSAT-LR | 0.53% | 327 | 21.35% |
| SuperGLUE copa | 0.50% | 309 | 51.25% |
| SAT Analogies | 0.47% | 289 | 24.40% |
| SuperGLUE cb | 0.28% | 172 | 47.60% |
| LSAT-AR | 0.23% | 142 | 20.53% |
| RACE high school | 0.10% | 62 | 27.02% |
| SuperGLUE multirc | 0.00% | 3 | 55.86% |
### Statistics for the out-of-distribution test set (2731 samples)
![test_dataset](test_dataset.png)
| Dataset | Proportion | Samples | Random-guess accuracy |
|:-------------------|:-------------|:-----------|:--------------------------|
| **Total** | **100.00%** | **2731** | **14.00%** |
| gsm8k | 36.58% | 999 | 3.20% |
| last letter concat | 18.31% | 500 | 0.60% |
| codah | 18.31% | 500 | 26.80% |
| coin flip | 9.15% | 250 | 52.40% |
| AIW | 7.32% | 200 | 22.00% |
| AIW+ | 7.03% | 192 | 6.25% |
| gpqa diamond | 3.30% | 90 | 29.29% |