# Dataset Card for MultiLegalNeg

## Dataset Summary
This dataset consists of German, French, and Italian court documents annotated for negation cues and negation scopes. It also includes reformatted versions of ConanDoyle-neg (Morante and Blanco, 2012), SFU Review (Konstantinova et al., 2012), BioScope (Szarvas et al., 2008), and Dalloux (Dalloux et al., 2020).
## Languages
| Language | Subset | Number of sentences | Negated sentences |
|---|---|---|---|
| French | fr | 1059 | 382 |
| Italian | it | 1001 | 418 |
| German (Germany) | de (DE) | 1068 | 1098 |
| German (Switzerland) | de (CH) | 206 | 208 |
| English | SFU Review | 17672 | 3528 |
| English | BioScope | 14700 | 2095 |
| English | ConanDoyle-neg | 5714 | 5714 |
| French | Dalloux | 11032 | 1817 |
## Dataset Structure

### Data Fields
- `text` (string): full sentence
- `spans` (list): list of annotated cues and scopes
  - `start` (int): offset of the beginning of the annotation
  - `end` (int): offset of the end of the annotation
  - `token_start` (int): id of the first token in the annotation
  - `token_end` (int): id of the last token in the annotation
  - `label` (string): `CUE` or `SCOPE`
- `tokens` (list): list of tokens in the sentence
  - `text` (string): token text
  - `start` (int): offset of the first character
  - `end` (int): offset of the last character
  - `id` (int): token id
  - `ws` (boolean): indicates whether the token is followed by whitespace
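As a quick illustration (not part of the original card), these fields can be inspected directly once the dataset is loaded (loading is covered under "How to use this dataset" below). Whether `spans` and `tokens` come back as lists of dicts or as dicts of lists depends on how the loading script declares its features, so the loop below is a sketch rather than a guaranteed access pattern.

```python
from datasets import load_dataset

# Load the combined configuration (see "How to use this dataset" below).
dataset = load_dataset("rcds/MultiLegalNeg", "all_all")

example = dataset["train"][0]
print(example["text"])    # full sentence
print(example["tokens"])  # token text, character offsets, id, whitespace flag

# Assuming spans come back as a list of dicts, the annotated text can be
# recovered from the character offsets (illustrative; see the note above).
for span in example["spans"]:
    print(span["label"], example["text"][span["start"]:span["end"]])
```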
### Data Splits

For each subset, train (70%), test (20%), and validation (10%) splits are available.
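A single split can also be requested directly via the standard `datasets` split argument (a minimal sketch, assuming the split names above):

```python
from datasets import load_dataset

# Load only the training split of the combined configuration.
train = load_dataset("rcds/MultiLegalNeg", "all_all", split="train")
```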
## How to use this dataset

To load all of the data, use the `all_all` configuration, or pass one of the other configurations as the second argument to `load_dataset`. The available configurations are `de`, `fr`, `it`, `swiss`, `fr_dalloux`, `fr_all`, `en_bioscope`, `en_sherlock`, `en_sfu`, `en_all`, and `all_all`:
```python
from datasets import load_dataset

dataset = load_dataset("rcds/MultiLegalNeg", "all_all")
dataset
```

```
DatasetDict({
    train: Dataset({
        features: ['text', 'spans', 'tokens'],
        num_rows: 26440
    })
    test: Dataset({
        features: ['text', 'spans', 'tokens'],
        num_rows: 7593
    })
    validation: Dataset({
        features: ['text', 'spans', 'tokens'],
        num_rows: 4053
    })
})
```
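To work with a single subset instead, pass one of the configurations listed above in place of `all_all` (a brief sketch; the resulting split sizes will differ from the combined output shown):

```python
from datasets import load_dataset

# Load only the French legal subset; the other configurations work the same way.
fr_dataset = load_dataset("rcds/MultiLegalNeg", "fr")
print(fr_dataset)
```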
## Source Data
| Subset | Source |
|---|---|
| fr | Niklaus et al., 2021; Niklaus et al., 2023 |
| it | Niklaus et al., 2021; Niklaus et al., 2023 |
| de (DE) | Glaser et al., 2021 |
| de (CH) | Niklaus et al., 2021 |
| SFU Review | Konstantinova et al., 2012 |
| BioScope | Szarvas et al., 2008 |
| ConanDoyle-neg | Morante and Blanco, 2012 |
| Dalloux | Dalloux et al., 2020 |
## Annotations

The data is annotated for negation cues and their scopes. Annotation guidelines are available here.
### Annotation process

Each language was annotated by a single native-speaking annotator following strict annotation guidelines.
## Citation Information
Please cite the following preprint:
```bibtex
@misc{christen2023resolving,
  title={Resolving Legalese: A Multilingual Exploration of Negation Scope Resolution in Legal Documents},
  author={Ramona Christen and Anastassia Shaitarova and Matthias Stürmer and Joel Niklaus},
  year={2023},
  eprint={2309.08695},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```