---
dataset_info:
  features:
  - name: ref
    dtype: string
  - name: title_main
    dtype: string
  - name: texte
    dtype: string
  - name: dateDebut
    dtype: int64
  - name: dateFin
    dtype: int64
  - name: num
    dtype: string
  - name: id
    dtype: string
  - name: cid
    dtype: string
  - name: type
    dtype: string
  - name: etat
    dtype: string
  - name: nota
    dtype: string
  - name: version_article
    dtype: string
  - name: ordre
    dtype: int64
  - name: conditionDiffere
    dtype: 'null'
  - name: infosComplementaires
    dtype: 'null'
  - name: surtitre
    dtype: 'null'
  - name: nature
    dtype: string
  - name: texteHtml
    dtype: string
  - name: dateFinExtension
    dtype: int64
  - name: versionPrecedente
    dtype: string
  - name: refInjection
    dtype: string
  - name: idTexte
    dtype: 'null'
  - name: idTechInjection
    dtype: string
  - name: origine
    dtype: string
  - name: dateDebutExtension
    dtype: int64
  - name: idEliAlias
    dtype: 'null'
  - name: cidTexte
    dtype: 'null'
  - name: sectionParentId
    dtype: string
  - name: multipleVersions
    dtype: bool
  - name: comporteLiensSP
    dtype: bool
  - name: sectionParentTitre
    dtype: string
  - name: infosRestructurationBranche
    dtype: 'null'
  - name: idEli
    dtype: 'null'
  - name: sectionParentCid
    dtype: string
  - name: numeroBo
    dtype: 'null'
  - name: infosRestructurationBrancheHtml
    dtype: 'null'
  - name: historique
    dtype: 'null'
  - name: infosComplementairesHtml
    dtype: 'null'
  - name: renvoi
    dtype: 'null'
  - name: fullSectionsTitre
    dtype: string
  - name: notaHtml
    dtype: string
  - name: inap
    dtype: 'null'
  splits:
  - name: train
    num_bytes: 432536720
    num_examples: 153983
  download_size: 185292857
  dataset_size: 432536720
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: apache-2.0
task_categories:
- text-generation
language:
- fr
tags:
- legal
- law
- droit
- fiscalité
pretty_name: Romulus, continued pre-trained models for French law
---
# Romulus, continually pre-trained models for French law
Romulus is a series of continually pre-trained models enriched with French legal data, intended to serve as the basis for fine-tuning on labeled data. Please note that these models have not been aligned for text production as they stand, and will need to be fine-tuned on the target tasks in order to produce satisfactory results.
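The corpus can be loaded directly with the 🤗 Datasets library. A minimal sketch, assuming the dataset is hosted at `louisbrulenaudet/Romulus-cpt-fr` (the repository referenced in the citation below):

```python
from datasets import load_dataset

# Assumption: repository id taken from the citation URL below.
dataset = load_dataset("louisbrulenaudet/Romulus-cpt-fr", split="train")

print(dataset)              # 153,983 rows, 42 columns (ref, title_main, texte, ...)
print(dataset[0]["texte"])  # raw text of the first record
```

Each record carries the raw text in `texte` alongside its HTML rendering in `texteHtml`, plus article-level metadata such as `num`, `etat`, `dateDebut` and `dateFin`.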
The training corpus comprises 34,864,949 tokens, counted with the meta-llama/Meta-Llama-3.1-8B tokenizer.
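A token count along these lines can be reproduced as follows. This is a sketch, not the exact script used by the authors, and it assumes the `texte` field is what was tokenized:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Note: the meta-llama checkpoint is gated; access must be requested on the Hub.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B")
dataset = load_dataset("louisbrulenaudet/Romulus-cpt-fr", split="train")

def count_tokens(batch):
    # Assumption: only the `texte` column contributes to the reported count.
    encodings = tokenizer(batch["texte"], add_special_tokens=False)
    return {"n_tokens": [len(ids) for ids in encodings["input_ids"]]}

counted = dataset.map(count_tokens, batched=True)
print(sum(counted["n_tokens"]))
```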
## Citing & Authors
If you use this dataset in your research, please cite it using the following BibTeX entry.
```bibtex
@misc{louisbrulenaudet2024,
  author = {Louis Brulé Naudet},
  title = {Romulus, continually pre-trained models for French law},
  year = {2024},
  howpublished = {\url{https://huggingface.co/datasets/louisbrulenaudet/Romulus-cpt-fr}},
}
```
## Feedback
If you have any feedback, please reach out at [email protected].