---
language:
- fr
license: cc-by-4.0
task_categories:
- question-answering
- text-generation
- table-question-answering
pretty_name: The Laws, centralizing legal texts for better use
dataset_info:
- config_name: fr
  features:
  - name: jurisdiction
    dtype: string
  - name: language
    dtype: string
  - name: text
    dtype: string
  - name: html
    dtype: string
  - name: title_main
    dtype: string
  - name: title_alternative
    dtype: string
  - name: id_sub
    dtype: string
  - name: id_main
    dtype: string
  - name: url_sourcepage
    dtype: string
  - name: url_sourcefile
    dtype: 'null'
  - name: date_publication
    dtype: string
  - name: date_signature
    dtype: 'null'
  - name: uuid
    dtype: string
  - name: text_hash
    dtype: string
  splits:
  - name: train
    num_bytes: 531412652
    num_examples: 162702
  download_size: 212898761
  dataset_size: 531412652
configs:
- config_name: fr
  data_files:
  - split: train
    path: fr/train-*
tags:
- legal
- droit
- fiscalité
- taxation
- δίκαιο
- recht
- derecho
---

## Dataset Description

- **Repository:** https://huggingface.co/datasets/HFforLegal/laws
- **Leaderboard:** N/A
- **Point of Contact:** [Louis Brulé Naudet](mailto:[email protected])

<img src="assets/thumbnail.png">

# The Laws, centralizing legal texts for better use: a community dataset

The Laws Dataset is a comprehensive collection of legal texts from various countries, centralized in a common format. This dataset aims to advance the development of legal AI models by providing a standardized, easily accessible corpus of global legal documents.

<div class="not-prose bg-gradient-to-r from-gray-50 to-white text-gray-900 border" style="border-radius: 8px; padding: 0.5rem 1rem;">
<p>Join us in our mission to make AI more accessible and understandable for the legal world, ensuring that the power of language models can be harnessed effectively and ethically in the pursuit of justice.</p>
</div>

## Objective

The primary objective of this dataset is to centralize laws from around the world in a common format, thereby facilitating:

1. Comparative legal studies
2. Development of multilingual legal AI models
3. Cross-jurisdictional legal research
4. Improvement of legal technology tools

By providing a standardized dataset of global legal texts, we aim to accelerate the development of AI models in the legal domain, enabling more accurate and comprehensive legal analysis across different jurisdictions.

## Dataset Structure

The dataset contains the following columns:
1. **jurisdiction**: Uppercase ISO 3166-1 alpha-2 code representing the country or jurisdiction. This column is useful when stacking data from different jurisdictions.
2. **language**: Lowercase ISO 639-1 code representing the language of the document. This is particularly useful for multilingual jurisdictions.
3. **text**: The main textual content of the document.
4. **html**: An HTML-structured version of the text. This may include additional structure such as XML (Akoma Ntoso).
5. **title_main**: The primary title of the document. This replaces the 'book' column, as many modern laws are not structured or referred to as 'books'.
6. **title_alternative**: A list of official and non-official (nickname) titles for the document.
7. **id_sub**: Identifier for lower-granularity items within the document, such as specific article numbers. This replaces the 'id' column.
8. **id_main**: Identifier for the main document, such as the European Legislation Identifier (ELI).
9. **url_sourcepage**: The source URL of the web page where the document is published.
10. **url_sourcefile**: The source URL of the document file (e.g., a PDF file).
11. **date_publication**: The date when the document was published.
12. **date_signature**: The date when the document was signed.
13. **uuid**: A universally unique identifier for each row in the dataset.
14. **text_hash**: A SHA-256 hash of the 'text' column, useful for verifying data integrity.
15. **formatted_date**: The publication date formatted as 'YYYY-MM-DD HH:MM:SS', derived from the 'date_publication' column.

This structure ensures comprehensive metadata for each legal document, facilitating easier data management, cross-referencing, and analysis across different jurisdictions and languages.
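
You can confirm this schema directly from the loaded data. A minimal sketch, assuming the `fr` configuration and `train` split declared in the metadata above:

```python
from datasets import load_dataset

# Load the French configuration and inspect the declared schema
dataset = load_dataset("HFforLegal/laws", "fr", split="train")

print(dataset.features)          # column names and dtypes
print(dataset[0]["title_main"])  # primary title of the first document
```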

An easy-to-use script for computing the `text_hash` column from the `text` column:

```python
import hashlib

from datasets import load_dataset


def sha256_hash(text: str) -> str:
    """
    Create or update the hash of the document content.

    This function takes a text input, converts it to a string, encodes it
    in UTF-8, and then generates a SHA-256 hash of the encoded text.

    Parameters
    ----------
    text : str
        The text content to be hashed.

    Returns
    -------
    str
        The SHA-256 hash of the input text, represented as a hexadecimal string.
    """
    return hashlib.sha256(str(text).encode("utf-8")).hexdigest()


dataset = load_dataset("HFforLegal/laws", "fr", split="train")
dataset = dataset.map(lambda x: {"text_hash": sha256_hash(x["text"])})
```
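
Because each row already stores its `text_hash`, the same function can be reused to verify integrity after download. A sketch, reusing `sha256_hash` and `dataset` from the block above:

```python
# Recompute each hash and keep only rows where it disagrees with the stored value
mismatches = dataset.filter(lambda x: sha256_hash(x["text"]) != x["text_hash"])
print(f"{mismatches.num_rows} rows failed the integrity check")
```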

## Country-based Splits

The dataset organizes legal documents by jurisdiction: each country-specific subset is identified by the ISO 3166-1 alpha-2 code of the corresponding country.

### ISO 3166-1 alpha-2 Codes

ISO 3166-1 alpha-2 codes are two-letter country codes defined in ISO 3166-1, part of the ISO 3166 standard published by the International Organization for Standardization (ISO).

Some examples of ISO 3166-1 alpha-2 codes (lowercased, as used in this dataset):
- France: fr
- United States: us
- United Kingdom: gb
- Germany: de
- Japan: jp
- Brazil: br
- Australia: au

Before submitting a new split, please make sure its name matches the ISO 3166-1 alpha-2 code of the corresponding country.
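
One way to check a candidate code before submitting — a sketch using the third-party `pycountry` package, which is not a dependency of this dataset:

```python
import pycountry  # third-party: pip install pycountry


def is_valid_alpha2(code: str) -> bool:
    """Return True if `code` is a valid ISO 3166-1 alpha-2 country code."""
    if len(code) != 2 or not code.isalpha():
        return False
    try:
        pycountry.countries.lookup(code)
        return True
    except LookupError:
        return False


print(is_valid_alpha2("fr"))  # True
print(is_valid_alpha2("zz"))  # False
```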

### Accessing Country-specific Data

To access legal documents for a specific country, use the country's ISO 3166-1 alpha-2 code when loading the dataset. Here's an example:

```python
from datasets import load_dataset

# Load the French legal documents (configuration "fr", split "train")
fr_dataset = load_dataset("HFforLegal/laws", "fr", split="train")
```
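
Because each row carries `jurisdiction` and `language` columns, you can narrow a subset further. A sketch, reusing `fr_dataset` from the example above:

```python
# Keep only French-language rows (useful in multilingual jurisdictions)
fr_language = fr_dataset.filter(lambda x: x["language"] == "fr")
print(fr_language.num_rows)
```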

## Ethical Considerations

While this dataset provides a valuable resource for legal AI development, users should be aware of the following ethical considerations:

- Privacy: Ensure that all personal information has been properly anonymized.
- Bias: Be aware of potential biases in the source material and in the selection of included laws.
- Currency: Laws change over time. Always verify that you're working with the most up-to-date version of a law for any real-world application.
- Jurisdiction: Legal interpretations can vary by jurisdiction. AI models trained on this data should not be used as a substitute for professional legal advice.

## Citing & Authors

If you use this dataset in your research, please use the following BibTeX entry.

```bibtex
@misc{HFforLegal2024,
  author = {Louis Brulé Naudet},
  title = {The Laws, centralizing legal texts for better use},
  year = {2024},
  howpublished = {\url{https://huggingface.co/datasets/HFforLegal/laws}},
}
```

## Feedback

If you have any feedback, please reach out at [[email protected]](mailto:[email protected]).