
tobiasbrugger committed
Commit c3d9221
2 parents: 381535e 7fe01bf

Merge branch 'main' of https://huggingface.co/datasets/rcds/MultiLegalSBD

Files changed (1): README.md (+30 −4)
README.md CHANGED
@@ -1138,7 +1138,7 @@ size_categories:
 
 ### Dataset Summary
 
-[More Information Needed]
+This is a multilingual dataset containing ~130k annotated sentence boundaries. It contains laws and court decisions in six languages.
 
 ### Supported Tasks and Leaderboards
 
@@ -1146,21 +1146,47 @@ size_categories:
 
 ### Languages
 
-[More Information Needed]
+English, French, Italian, German, Portuguese, Spanish
 
 ## Dataset Structure
 
+It is structured in the following format: {language}\_{type}\_{shard}.jsonl.xz
+
+type is one of the following:
+- laws
+- judgements
+
+Use the dataset like this:
+```
+from datasets import load_dataset
+config = 'fr_laws'  # {language}_{type} | to load all languages and/or all types, use 'all_all'
+dataset = load_dataset('rcds/MultiLegalSBD', config)
+```
+
 ### Data Instances
 
 [More Information Needed]
 
 ### Data Fields
 
+- text: the original text
+- spans:
+  - start: offset of the first character
+  - end: offset of the last character
+  - label: One label only -> Sentence
+  - token_start: id of the first token
+  - token_end: id of the last token
+- tokens:
+  - text: token text
+  - start: offset of the first character
+  - end: offset of the last character
+  - id: token id
+  - ws: whether the token is followed by whitespace
+
 ### Data Splits
 
-[More Information Needed]
+There is only one split available.
 
 ## Dataset Creation
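Not part of the commit itself, but a sketch of how the fields added in the Data Fields section fit together. The record below is hypothetical (invented text and offsets, abridged token list), and it assumes end-exclusive character offsets in the Python-slice sense; if `end` in fact points at the last character itself, slice with `end + 1` instead.

```python
# Hypothetical MultiLegalSBD-style record, shaped like the Data Fields list
# in the README diff above. Real records come from:
#   load_dataset('rcds/MultiLegalSBD', 'fr_laws')
record = {
    "text": "Art. 1 La loi est claire. Elle s'applique à tous.",
    "spans": [
        # Assumed end-exclusive character offsets (Python slice convention).
        {"start": 0, "end": 25, "label": "Sentence", "token_start": 0, "token_end": 5},
        {"start": 26, "end": 49, "label": "Sentence", "token_start": 6, "token_end": 10},
    ],
}

# Recover the sentence strings from the character offsets.
sentences = [record["text"][s["start"]:s["end"]] for s in record["spans"]]
print(sentences)  # ['Art. 1 La loi est claire.', "Elle s'applique à tous."]

# The 'ws' flag marks whether a token is followed by whitespace, so the
# original text can be rebuilt from tokens alone (abridged example):
tokens = [
    {"text": "Art.", "start": 0, "end": 4, "id": 0, "ws": True},
    {"text": "1", "start": 5, "end": 6, "id": 1, "ws": False},
]
rebuilt = "".join(t["text"] + (" " if t["ws"] else "") for t in tokens)
print(rebuilt)  # Art. 1
```

`token_start`/`token_end` index into the record's `tokens` list by `id`, so a span can also be resolved at the token level rather than by character offsets.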