GitHub repository: https://github.com/ymcui/expmrc

With the development of pre-trained language models (PLMs), achieving human-level performance on several machine reading comprehension (MRC) datasets is not as hard as it used to be. However, the explainability of these models remains unclear, raising concerns about using them in real-life applications. To improve the explainability of MRC tasks, we propose the ExpMRC benchmark.

ExpMRC is a benchmark for the Explainability Evaluation of Machine Reading Comprehension. ExpMRC contains four subsets of popular MRC datasets with additionally annotated evidence, including SQuAD, CMRC 2018, RACE+ (similar to RACE), and C3, covering both span-extraction and multiple-choice MRC tasks in English and Chinese.
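The released JSON files follow the SQuAD-style layout: the records are nested under a top-level `data` field, next to a `version` field, so a generic JSON loader needs to be told which field holds the records (for example, passing `field="data"` to the Hugging Face `datasets` JSON loader). A minimal standard-library sketch, using a toy stand-in for a file's contents:

```python
import json

# Toy stand-in mirroring the ExpMRC file layout: the records live under
# a top-level "data" field, next to a "version" field.
raw_text = json.dumps({
    "version": "expmrc-toy",
    "data": [
        {"paragraphs": [{"context": "...", "qas": []}]},
    ],
})

parsed = json.loads(raw_text)
records = parsed["data"]  # select the field that actually holds the records
print(parsed["version"], len(records))  # expmrc-toy 1
```

With the `datasets` library, the equivalent is `load_dataset("json", data_files=..., field="data")`.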

To achieve a higher score on ExpMRC, a model should not only give a correct answer to the question but also provide a passage span as the evidence text. We especially welcome submissions that generalize well across different languages and types of MRC tasks using unsupervised or semi-supervised approaches.
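Evidence predictions are naturally scored by their overlap with the annotated evidence span. Below is a simplified sketch of token-level F1, the usual MRC overlap metric; the official ExpMRC evaluation script applies its own normalization (and character-level matching is more appropriate for Chinese), whereas this toy version just splits on whitespace:

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 overlap between a predicted and a gold text span."""
    pred_tokens = prediction.split()
    ref_tokens = reference.split()
    # Tokens shared between prediction and reference, counted with multiplicity.
    common = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if common == 0:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the amygdala regulates fear", "the amygdala regulates fear responses"))  # ≈ 0.889
```

A full ExpMRC score would combine such an overlap measure for the evidence span with the usual answer correctness metric.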

ExpMRC: Explainability Evaluation for Machine Reading Comprehension

[Official Publication] [arXiv pre-print] [Leaderboard] [Papers With Code]

Submission to Leaderboard

Please visit our leaderboard for more information: https://ymcui.github.io/expmrc/

To preserve the integrity of test results and improve reproducibility, we do not release the test sets to the public. Instead, we require you to upload your model to CodaLab so that we can run it on the test sets for you. You can follow the instructions on CodaLab (the process is similar to the SQuAD and CMRC 2018 submissions). You can submit your model on one or more of the ExpMRC subsets. Sample submission files are provided in the sample_submission directory.

Submission policies:

  1. You are free to use any open-source MRC data or automatically generated data for training your systems (both labeled and unlabeled).
  2. You are NOT allowed to train on any human-annotated data that is not publicly available.
  3. We do not encourage using the development set of ExpMRC for training (though it is not prohibited). You should declare whether your system was trained using all or part of the development set; such submissions will be marked with an asterisk (*).

Citation

If you are using our benchmark in your work, please cite:

@article{cui-etal-2022-expmrc,
  title={ExpMRC: Explainability Evaluation for Machine Reading Comprehension},
  author={Cui, Yiming and Liu, Ting and Che, Wanxiang and Chen, Zhigang and Wang, Shijin},
  journal={Heliyon},
  year={2022},
  volume={8},
  number={4},
  pages={e09290},
  issn={2405-8440},
  doi={10.1016/j.heliyon.2022.e09290}
}

Acknowledgment

Yiming Cui would like to thank the Google TPU Research Cloud (TRC) program for providing computing resources. We also thank the SQuAD team for open-sourcing their website template.

Contact us

Please submit an issue on the GitHub repository.
