---
language:
- fr
license: apache-2.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-generation
- table-question-answering
- summarization
pretty_name: Bulletin officiel des finances publiques - impôts
tags:
- finetuning
- legal
- french law
- droit français
- Bofip
dataset_info:
  features:
  - name: type
    dtype: string
  - name: titre
    dtype: string
  - name: debut_de_validite
    dtype: string
  - name: serie
    dtype: string
  - name: division
    dtype: string
  - name: identifiant_juridique
    dtype: string
  - name: permalien
    dtype: string
  - name: contenu
    dtype: string
  - name: contenu_html
    dtype: string
  splits:
  - name: train
    num_bytes: 185469381
    num_examples: 8621
  download_size: 78744050
  dataset_size: 185469381
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Bulletin officiel des finances publiques - impôts, non-instruct (11-12-2023)
This project focuses on fine-tuning pre-trained language models to create efficient and accurate models for legal practice.
Fine-tuning is the process of adapting a pre-trained model to perform specific tasks or cater to particular domains. It involves adjusting the model's parameters through a further round of training on task-specific or domain-specific data. While conventional fine-tuning strategies involve supervised learning with labeled data, instruction-based fine-tuning introduces a more structured and interpretable approach.
Instruction-based fine-tuning leverages the power of human-provided instructions to guide the model's behavior. These instructions can be in the form of text prompts, prompts with explicit task descriptions, or a combination of both. This approach allows for a more controlled and context-aware interaction with the LLM, making it adaptable to a multitude of specialized tasks.
Instruction-based fine-tuning significantly enhances the performance of LLMs in the following ways:
- Task-Specific Adaptation: LLMs, when fine-tuned with specific instructions, exhibit remarkable adaptability to diverse tasks. They can switch seamlessly between translation, summarization, and question-answering, guided by the provided instructions.
- Reduced Ambiguity: Traditional LLMs might generate ambiguous or contextually inappropriate responses. Instruction-based fine-tuning allows for a clearer and more context-aware generation, reducing the likelihood of nonsensical outputs.
- Efficient Knowledge Transfer: Instructions can encapsulate domain-specific knowledge, enabling LLMs to benefit from expert guidance. This knowledge transfer is particularly valuable in fields like tax practice, law, medicine, and more.
- Interpretability: Instruction-based fine-tuning also makes LLM behavior more interpretable. Since the instructions are human-readable, it becomes easier to understand and control model outputs.
- Adaptive Behavior: After instruction-based fine-tuning, LLMs exhibit adaptive behavior that responds to both explicit task descriptions and implicit cues within the provided text.
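As a minimal sketch of how these non-instruct records could be turned into instruction-style training examples, the snippet below formats one record into a prompt. The field names (`titre`, `identifiant_juridique`, `contenu`) match the dataset's features, but the prompt template and the sample values are illustrative assumptions, not part of the dataset.

```python
# Sketch: turning one Bofip record into an instruction-style prompt.
# In practice the records would come from the Hub, e.g.:
#   from datasets import load_dataset
#   dataset = load_dataset("louisbrulenaudet/bofip", split="train")

def build_prompt(record: dict) -> str:
    """Format one dataset record as a simple summarization instruction."""
    instruction = (
        "Résumez le texte suivant du Bulletin officiel des finances publiques."
    )
    return (
        f"### Instruction:\n{instruction}\n\n"
        f"### Contexte:\n{record['titre']} ({record['identifiant_juridique']})\n\n"
        f"### Texte:\n{record['contenu']}"
    )

# Hypothetical record using the dataset's schema (values are placeholders).
sample = {
    "titre": "IR - Champ d'application et territorialité",
    "identifiant_juridique": "BOI-IR-CHAMP",
    "contenu": "L'impôt sur le revenu est dû par les personnes physiques...",
}

print(build_prompt(sample))
```

The same function can be mapped over the full train split to produce prompt/completion pairs for supervised fine-tuning.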
## Citing this project
If you use this dataset in your research, please use the following BibTeX entry.
```BibTeX
@misc{louisbrulenaudet2023,
  author = {Louis Brulé Naudet},
  title = {Bulletin officiel des finances publiques - impôts, non-instruct (11-12-2023)},
  howpublished = {\url{https://huggingface.co/datasets/louisbrulenaudet/bofip}},
  year = {2023}
}
```
## Feedback
If you have any feedback, please reach out at [email protected].