---
license: afl-3.0
language:
- en
tags:
- legal
size_categories:
- n<1K
---
# Dataset Card for RaVE: Rationale Variation in ECHR
### Dataset Summary
[From Dissonance to Insights: Dissecting Disagreements in Rationale Construction for Case Outcome Classification](https://arxiv.org/pdf/2310.11878.pdf)
In legal NLP, Case Outcome Classification (COC) must not only be accurate but also trustworthy and explainable. Existing work in explainable COC has been limited to annotations by a single expert. However, it is well-known that lawyers may disagree in their assessment of case facts. We hence collect a novel dataset RaVE: Rationale Variation in ECHR, which is obtained from two experts in the domain of international human rights law, for whom we observe weak agreement. We study their disagreements and build a two-level task-independent taxonomy, supplemented with COC-specific subcategories. To our knowledge, this is the first work in legal NLP that focuses on human label variation. We quantitatively assess different taxonomy categories and find that disagreements mainly stem from underspecification of the legal context, which poses challenges given the typically limited granularity and noise in COC metadata. We further assess the explainability of state-of-the-art COC models on RaVE and observe limited agreement between models and experts. Overall, our case study reveals hitherto underappreciated complexities in creating benchmark datasets in legal NLP that revolve around identifying aspects of a case's facts supposedly relevant for its outcome.
### Languages
English
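
### Loading the Data

The data in this repository is distributed as JSON files. A minimal loading sketch is shown below using the `datasets` library; the repository id `sxu/RaVE_emnlp23`, the split names, and the field names are assumptions inferred from the hosting page rather than documented specifics, so inspect the loaded object to confirm them.

```python
from datasets import load_dataset

# Sketch: load the JSON files published in this repository.
# The repo id is assumed from the hosting page; adjust if needed.
ds = load_dataset("sxu/RaVE_emnlp23")

print(ds)                     # show available splits and their features
first_split = next(iter(ds))  # name of the first split
print(ds[first_split][0])     # inspect one record

# Optionally convert a split to a pandas DataFrame for quick exploration.
df = ds[first_split].to_pandas()
print(df.head())
```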
### Citation Information
```bibtex
@inproceedings{xu-etal-2023-dissonance,
    title = "From Dissonance to Insights: Dissecting Disagreements in Rationale Construction for Case Outcome Classification",
    author = "Xu, Shanshan  and
      T.y.s.s, Santosh  and
      Ichim, Oana  and
      Risini, Isabella  and
      Plank, Barbara  and
      Grabmair, Matthias",
    editor = "Bouamor, Houda  and
      Pino, Juan  and
      Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.594",
    doi = "10.18653/v1/2023.emnlp-main.594",
    pages = "9558--9576",
    abstract = "In legal NLP, Case Outcome Classification (COC) must not only be accurate but also trustworthy and explainable. Existing work in explainable COC has been limited to annotations by a single expert. However, it is well-known that lawyers may disagree in their assessment of case facts. We hence collect a novel dataset RaVE: Rationale Variation in ECHR, which is obtained from two experts in the domain of international human rights law, for whom we observe weak agreement. We study their disagreements and build a two-level task-independent taxonomy, supplemented with COC-specific subcategories. To our knowledge, this is the first work in the legal NLP that focuses on human label variation. We quantitatively assess different taxonomy categories and find that disagreements mainly stem from underspecification of the legal context, which poses challenges given the typically limited granularity and noise in COC metadata. We further assess the explainablility of state-of-the-art COC models on RaVE and observe limited agreement between models and experts. Overall, our case study reveals hitherto underappreciated complexities in creating benchmark datasets in legal NLP that revolve around identifying aspects of a case{'}s facts supposedly relevant for its outcome.",
}
```