louisbrulenaudet committed • Commit 21c086d • Parent(s): f76d2c5

Update README.md

README.md CHANGED
@@ -86,8 +86,6 @@ By providing a standardized dataset of global legal texts, we aim to accelerate

## Dataset Structure

-# Dataset Structure
-
The dataset contains the following columns:

1. **jurisdiction**: Capitalized ISO 3166-1 alpha-2 code representing the country or jurisdiction. This column is useful when stacking data from different jurisdictions.
@@ -107,7 +105,9 @@ The dataset contains the following columns:
15. **formatted_date**: The publication date formatted as 'YYYY-MM-DD HH:MM:SS', derived from the 'date_publication' column.

This structure ensures comprehensive metadata for each legal document, facilitating easier data management, cross-referencing, and analysis across different jurisdictions and languages.
-Easy-to-use script for hashing the `
+Easy-to-use script for hashing the `text`:
+
+Datasets version:

```python
import hashlib
@@ -137,9 +137,86 @@ def hash(
dataset = dataset.map(lambda x: {"hash": hash(x["document"])})
```

-
+Polars version
+
+```python
+import polars as pl
+import hashlib
+
+def add_text_hash_column(
+    df: pl.DataFrame,
+    text_column: str = "text",
+    hash_column: str = "text_hash"
+) -> pl.DataFrame:
+    """
+    Add a column with SHA-256 hash values of a specified text column to a Polars DataFrame.
+
+    This function computes the SHA-256 hash of the values in the specified text column
+    and adds it as a new column to the DataFrame.

-
+    Parameters
+    ----------
+    df : pl.DataFrame
+        The input Polars DataFrame.
+    text_column : str, optional
+        The name of the column containing the text to be hashed (default is "text").
+    hash_column : str, optional
+        The name of the new column to be added with hash values (default is "text_hash").
+
+    Returns
+    -------
+    pl.DataFrame
+        A new DataFrame with the hash column added.
+
+    Examples
+    --------
+    >>> import polars as pl
+    >>> df = pl.DataFrame({"text": ["Hello", "World", "OpenAI"]})
+    >>> df_with_hash = add_text_hash_column(df)
+    >>> print(df_with_hash)
+    shape: (3, 2)
+    ┌────────┬──────────────────────────────────────────────────────────────────┐
+    │ text   ┆ text_hash                                                        │
+    │ ---    ┆ ---                                                              │
+    │ str    ┆ str                                                              │
+    ╞════════╪══════════════════════════════════════════════════════════════════╡
+    │ Hello  ┆ 185f8db32271fe25f561a6fc938b2e264306ec304eda518007d1764826381969 │
+    │ World  ┆ 78ae647dc5544d227130a0682a51e30bc7777fbb6d8a8f17007463a3ecd1d524 │
+    │ OpenAI ┆ 0c3f4a61f7e5d29abc29d63f1a0cad36c49ffd5b5c0b5b38ce0c7aa0bdc94696 │
+    └────────┴──────────────────────────────────────────────────────────────────┘
+
+    Raises
+    ------
+    ValueError
+        If the specified text_column is not found in the DataFrame.
+    """
+    if text_column not in df.columns:
+        raise ValueError(f"Column '{text_column}' not found in the DataFrame.")
+
+    return df.with_columns(
+        pl.col(text_column)
+        .map_elements(lambda x: hashlib.sha256(str(x).encode()).hexdigest())
+        .alias(hash_column)
+    )
+
+dataframe = add_text_hash_column(dataframe)
+```
+
+## Upload to the Hub
+
+Here is a code snippet to push a dedicated config (in this example for France) to the HF Hub:
+
+```python
+hf_dataset.push_to_hub(
+    repo_id="HFforLegal/laws-new",
+    config_name="fr",
+    split="train",
+)
+```
+
+## Country-based Configs
+
+The dataset uses country-based configs to organize legal documents from different jurisdictions. Each config is identified by the ISO 3166-1 alpha-2 code of the corresponding country.

### ISO 3166-1 alpha-2 Codes

@@ -156,20 +233,6 @@ Some examples of ISO 3166-1 alpha-2 codes:

Before submitting a new split, please make sure the proposed split fits within the ISO code for the related country.

-### Accessing Country-specific Data
-
-To access legal documents for a specific country, you can use the country's ISO 3166-1 alpha-2 code as the split name when loading the dataset. Here's an example:
-
-```python
-from datasets import load_dataset
-
-# Load the entire dataset
-dataset = load_dataset("HFforLegal/laws")
-
-# Access the French legal documents
-fr_dataset = dataset['fr']
-```
-
## Ethical Considerations

While this dataset provides a valuable resource for legal AI development, users should be aware of the following ethical considerations: