louisbrulenaudet committed
Commit bb606b7 • Parent(s): e80ca41
Update README.md
README.md CHANGED
@@ -34,6 +34,9 @@ tags:
- droit
- fiscalité
- taxation
+ - δεξιά
+ - recht
+ - derecho
pretty_name: The Laws, centralizing legal texts for better use
---
## Dataset Description
@@ -43,7 +46,7 @@ pretty_name: The Laws, centralizing legal texts for better use

<img src="assets/thumbnail.png">

- # Laws, centralizing legal texts for better use, a community Dataset.
+ # The Laws, centralizing legal texts for better use, a community Dataset.

The Laws Dataset is a comprehensive collection of legal texts from various countries, centralized in a common format. This dataset aims to improve the development of legal AI models by providing a standardized, easily accessible corpus of global legal documents.
@@ -62,6 +65,79 @@ The primary objective of this dataset is to centralize laws from around the world

By providing a standardized dataset of global legal texts, we aim to accelerate the development of AI models in the legal domain, enabling more accurate and comprehensive legal analysis across different jurisdictions.

## Dataset Structure

The dataset is organized with the following columns:

- `book`: The name or code of the law book (e.g., "Civil Code", "Penal Code")
- `document`: The full text content of the legal document
- `timestamp`: The timestamp of when the law was enacted or last updated
- `id`: An identifier for each document
- `hash`: A SHA-256 hash of the `document` for verification purposes

An easy-to-use script for hashing the `document` column:

```python
import hashlib
import datasets

def hash(text: str) -> str:
    """
    Create or update the hash of the document content.

    This function takes a text input, converts it to a string, encodes it in UTF-8,
    and then generates a SHA-256 hash of the encoded text.

    Parameters
    ----------
    text : str
        The text content to be hashed.

    Returns
    -------
    str
        The SHA-256 hash of the input text, represented as a hexadecimal string.
    """
    return hashlib.sha256(str(text).encode()).hexdigest()

# Load a country split before (re)computing the hashes, e.g. the French laws.
dataset = datasets.load_dataset("HFforLegal/laws", split="fr")
dataset = dataset.map(lambda x: {"hash": hash(x["document"])})
```
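As a complement, here is a minimal sketch of how the `hash` column could be used to check the integrity of downloaded documents. It assumes the column layout and country splits described above; the `sha256_of` helper and the mismatch check are illustrative, not part of the dataset tooling:

```python
import hashlib

from datasets import load_dataset

def sha256_of(text: str) -> str:
    """Recompute the SHA-256 hex digest of a document string."""
    return hashlib.sha256(str(text).encode()).hexdigest()

# Load one country split (here: the French laws) and compare the stored
# hashes against freshly computed ones.
fr_laws = load_dataset("HFforLegal/laws", split="fr")
mismatches = [row["id"] for row in fr_laws if sha256_of(row["document"]) != row["hash"]]

print(f"{len(mismatches)} documents failed hash verification")
```

A non-empty result would indicate that a document was altered after its hash was recorded.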
## Country-based Splits

The dataset uses country-based splits to organize legal documents from different jurisdictions. Each split is identified by the ISO 3166-1 alpha-2 code of the corresponding country.

### ISO 3166-1 alpha-2 Codes

ISO 3166-1 alpha-2 codes are two-letter country codes defined in ISO 3166-1, part of the ISO 3166 standard published by the International Organization for Standardization (ISO).

Some examples of ISO 3166-1 alpha-2 codes:
- France: fr
- United States: us
- United Kingdom: gb
- Germany: de
- Japan: jp
- Brazil: br
- Australia: au

Before submitting a new split, please make sure the proposed split name matches the ISO 3166-1 alpha-2 code of the corresponding country.
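To see which country codes already have a split before proposing a new one, the `datasets` library can list the available splits directly; the snippet below is a small sketch of that check (the actual list depends on the current state of the repository):

```python
from datasets import get_dataset_split_names

# Each split name is the lowercase ISO 3166-1 alpha-2 code of a country.
splits = get_dataset_split_names("HFforLegal/laws")
print(sorted(splits))
```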
### Accessing Country-specific Data

To access legal documents for a specific country, you can use the country's ISO 3166-1 alpha-2 code as the split name when loading the dataset. Here's an example:

```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("HFforLegal/laws")

# Access the French legal documents
fr_dataset = dataset['fr']
```
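If only one jurisdiction is needed, a single split can also be loaded directly, which avoids downloading the whole collection; this is a small variant of the example above rather than an additional documented API:

```python
from datasets import load_dataset

# Load only the French split instead of the full DatasetDict.
fr_dataset = load_dataset("HFforLegal/laws", split="fr")
```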

## Ethical Considerations

While this dataset provides a valuable resource for legal AI development, users should be aware of the following ethical considerations:
@@ -78,7 +154,7 @@ If you use this dataset in your research, please use the following BibTeX entry.
```BibTeX
@misc{HFforLegal2024,
  author = {Louis Brulé Naudet},
- title = {The
+ title = {The Laws, centralizing legal texts for better use},
  year = {2024},
  howpublished = {\url{https://huggingface.co/datasets/HFforLegal/laws}},
}
```