---
language:
- en
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: choices
    sequence: string
  - name: input
    dtype: string
  - name: output
    dtype: string
  - name: dataset
    dtype: string
  - name: category
    dtype: string
  - name: prompt_template
    dtype: string
  - name: idx
    dtype: int64
  splits:
  - name: train
    num_bytes: 196170501
    num_examples: 304955
  - name: test
    num_bytes: 20170043
    num_examples: 17255
  download_size: 88640049
  dataset_size: 216340544
---
|
# Dataset Card for P3_0.5 |
|
|
|
## Dataset Structure |
|
|
|
### Data Instances |
|
|
|
An example from the "train" split looks as follows:

```python
{
  'choices': ["Yes", "No"],
  'input': "Given that No Weapons of Mass Destruction Found in Iraq Yet. Does it follow that Weapons of Mass Destruction Found in Iraq. Yes or no?",
  'output': "No",
  'dataset': "rte",
  'category': "nli",
  'prompt_template': "super_glue_rte_does_it_follow_that"
}
```
|
|
|
To browse all the prompted examples, you can use the [Promptsource hosted tool](http://bigscience.huggingface.co/promptsource) and choose the `Prompted dataset viewer` mode in the left panel.
|
|
|
|
|
### Data Fields |
|
|
|
The data fields are the same across all splits:

- `choices`: the answer choices (in natural language) available to the model

- `input`: the natural language input fed to the model

- `output`: the natural language target that the model has to generate

- `dataset`: the source dataset the example comes from

- `category`: the NLP task category the example belongs to

- `prompt_template`: the prompt template used to form the input

- `idx`: the integer index of the example
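As a minimal sketch, the schema above can be checked against a record shaped like the example instance shown earlier. The `check_record` helper below is purely illustrative (not part of any library), and the `idx` value is a placeholder:

```python
# Minimal sketch: verify that a record carries every field described in
# the Data Fields section, with the types declared in the YAML metadata.

EXPECTED_FIELDS = {
    "choices": list,   # sequence of strings
    "input": str,
    "output": str,
    "dataset": str,
    "category": str,
    "prompt_template": str,
    "idx": int,
}

def check_record(record):
    """Return True if `record` has every expected field with the right type."""
    return all(
        name in record and isinstance(record[name], typ)
        for name, typ in EXPECTED_FIELDS.items()
    )

# Sample record mirroring the "train" example in this card.
sample = {
    "choices": ["Yes", "No"],
    "input": "Given that No Weapons of Mass Destruction Found in Iraq Yet. "
             "Does it follow that Weapons of Mass Destruction Found in Iraq. Yes or no?",
    "output": "No",
    "dataset": "rte",
    "category": "nli",
    "prompt_template": "super_glue_rte_does_it_follow_that",
    "idx": 0,  # placeholder; the real value comes from the data files
}

print(check_record(sample))  # True for a well-formed record
```

With the Hugging Face `datasets` library, the actual records can be loaded via `load_dataset` on this repository and iterated split by split in the same dictionary form.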
|
|
|
|
|
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |