
Model Overview

Model Name: Equall/Saul-7B-Instruct-GDPR-v1
Base Model: Equall/Saul-7B-Instruct-v1
Finetuning Method: QLoRA (Quantized Low-Rank Adaptation)

Equall/Saul-7B-Instruct-GDPR-v1 is a fine-tuned version of the Equall/Saul-7B-Instruct-v1 model, specifically optimized for GDPR compliance tasks. The model is designed to generate accurate, legally grounded responses aligned with the General Data Protection Regulation (GDPR). It supports compliance-related tasks such as data audits, legal advisory, and verifying adherence to GDPR requirements for data controllers and processors.
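Since Saul-7B-Instruct derives from the Mistral-7B model family, queries are typically wrapped in the Mistral-style `[INST] ... [/INST]` instruction format. The sketch below assumes that format and a hypothetical system instruction; verify against the tokenizer's chat template (`tokenizer.apply_chat_template`) before relying on it:

```python
def build_gdpr_prompt(question: str) -> str:
    """Wrap a GDPR question in the Mistral-style [INST] instruction format.

    Assumption: Saul-7B-Instruct uses the Mistral instruct template;
    check the model's tokenizer config to confirm.
    """
    system = (
        "You are a legal assistant specialized in GDPR compliance. "
        "Cite the relevant GDPR articles in your answer."
    )
    return f"<s>[INST] {system}\n\n{question} [/INST]"

prompt = build_gdpr_prompt("What are the benefits of conducting a DPIA?")
print(prompt)
```

In practice, prefer `tokenizer.apply_chat_template` so the prompt format always matches the checkpoint's own template.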


GDPR Task Focus

The model is optimized to address detailed queries related to GDPR compliance, providing step-by-step instructions aligned with relevant GDPR articles. Below is an example illustrating how the model approaches a specific GDPR task related to conducting a Data Protection Impact Assessment (DPIA):

Example Task: Conducting a Data Protection Impact Assessment (DPIA)

Input: What are the benefits of conducting a DPIA?

Response:
Conducting a Data Protection Impact Assessment (DPIA) provides several benefits in ensuring GDPR compliance:

  1. Identifying Risks: DPIAs help identify and mitigate risks to individuals’ rights and freedoms by evaluating potential impacts on personal data processing activities.
  2. Accountability: DPIAs demonstrate compliance with GDPR Article 24(1), which requires controllers to implement appropriate technical and organizational measures to ensure, and be able to demonstrate, that processing complies with the GDPR.
  3. Building Compliance: Through a structured approach, DPIAs facilitate building compliance by identifying necessary measures for processing activities.
  4. Supporting the Legal Basis: A DPIA helps document and justify the legal basis for processing personal data, especially when dealing with special categories of data or large-scale monitoring (referencing Article 6(1)(c) and Article 9(2)(i)).
  5. Regulatory Compliance: Meeting DPIA requirements helps prevent fines imposed by supervisory authorities for non-compliance with GDPR articles, including Articles 35(1)-(4), 35(7)-(9), and 36(3)(e).

Relevant GDPR Articles:

  • Article 35 (DPIA requirements)
  • Article 24 (Accountability of controllers)
  • Article 6(1)(c) (Legal basis for processing)
  • Article 9(2)(i) (Processing of special categories of data)
  • Article 36(3)(e) (Consultation with supervisory authorities)

This demonstrates the model's capacity to generate structured, article-specific responses that assist organizations in navigating GDPR compliance tasks.


Fine-Tuning Methodology

The fine-tuning of this model was conducted using QLoRA (Quantized Low-Rank Adaptation) to optimize efficiency and accuracy, particularly when handling legal texts. QLoRA quantizes the frozen base-model weights to 4-bit precision and trains only small low-rank adapter matrices on top, maintaining a high level of performance while significantly reducing the memory and compute required for fine-tuning.
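A back-of-envelope calculation shows why training only low-rank adapters is cheap. The dimensions below (a 4096x4096 projection, roughly the size of a 7B model's attention projection) and the rank are illustrative, not the hyperparameters actually used:

```python
def lora_trainable_params(d_in: int, d_out: int, rank: int) -> int:
    # LoRA replaces a full (d_out x d_in) weight update with two
    # low-rank factors: B (d_out x rank) and A (rank x d_in).
    return d_out * rank + rank * d_in

full = 4096 * 4096                           # dense weight update
lora = lora_trainable_params(4096, 4096, rank=16)
print(full, lora, lora / full)               # adapter is 1/128 of the full update
```

The frozen 4-bit base weights handle the forward pass, while only these small adapters (typically kept in 16-bit) receive gradients, which is what makes QLoRA feasible on a single GPU.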

Training was conducted using the bwUniCluster 2.0 computing facility, utilizing Tesla V100 GPUs for efficient training over multiple iterations. Each iteration aimed to improve the model’s capacity to understand and generate responses to GDPR-specific inquiries by referencing the appropriate articles of the regulation.


Datasets

1. Training Dataset

Dataset Name: sims2k/GDPR_QA_instruct_dataset

  • Number of Entries: 316 Question-Answer pairs
  • Creation Method: This dataset was synthetically generated using ChatGPT-4 to create specialized Q&A pairs focused on GDPR compliance tasks. The dataset was carefully crafted by synthesizing information from trusted sources, including GDPR articles, Legal FAQs, and Guidelines, Recommendations, and Best Practices from the European Data Protection Board (EDPB).
    • Advanced prompt engineering techniques were employed, including one-shot and chain-of-thought prompting, to create precise, contextually relevant responses. Output generation used a temperature of zero to make the responses as deterministic and consistent as possible.
    • Each dataset entry was fact-checked for accuracy and cross-referenced with the related GDPR articles, ensuring legal validity and practical utility in real-world settings.
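A one-shot, chain-of-thought generation prompt of the kind described above might be assembled as follows. The template, the example pair, and the function name are all hypothetical illustrations, not the actual prompts used to build the dataset:

```python
# Hypothetical one-shot example demonstrating the desired step-by-step,
# article-citing answer style.
ONE_SHOT_EXAMPLE = {
    "question": "When is a DPIA mandatory?",
    "answer": (
        "Step 1: Check Article 35(1) for the general trigger. "
        "Step 2: Check the Article 35(3) list of mandatory cases. "
        "Therefore, a DPIA is mandatory when processing is likely to "
        "result in a high risk to the rights and freedoms of individuals."
    ),
}

def build_generation_prompt(source_excerpt: str, question: str) -> str:
    """Assemble a one-shot, chain-of-thought prompt for synthesizing a
    GDPR Q&A pair from a trusted source (hypothetical template)."""
    return (
        "You are generating GDPR compliance Q&A pairs. Reason step by step "
        "and cite the relevant articles.\n\n"
        f"Example question: {ONE_SHOT_EXAMPLE['question']}\n"
        f"Example answer: {ONE_SHOT_EXAMPLE['answer']}\n\n"
        f"Source:\n{source_excerpt}\n\n"
        f"Question: {question}\nAnswer:"
    )

p = build_generation_prompt("Article 35(1) text ...", "What triggers a DPIA?")
```

The one-shot example anchors the answer format, while the "reason step by step" instruction elicits the chain-of-thought structure seen in the dataset entries.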

2. Evaluation Dataset

Dataset Name: sims2k/GDPR_QA_instruct_eval_dataset

  • Number of Entries: 63 Question-Answer pairs
  • Description: This evaluation dataset was designed to rigorously test the model's ability to generalize. Each entry poses a GDPR query not seen during training, testing whether the model can respond accurately in new contexts. Responses were scored with NLP metrics such as ROUGE, BLEU, METEOR, and BERTScore, which measure the structural and semantic quality of the output.

Performance Metrics

The model’s performance was assessed using advanced NLP metrics to evaluate both the quality of generated text and the adherence to legal standards in GDPR queries.

Metrics Used:

  1. BLEU: Measures precision by calculating n-gram overlap between the generated response and the reference text.
  2. ROUGE: Focuses on recall, assessing how much of the reference content is captured in the generated response.
  3. METEOR: Combines precision and recall (weighting recall more heavily) and aligns candidate and reference text using stemming and synonym matching.
  4. BERTScore: Uses contextual embeddings to compare the generated and reference texts, focusing on semantic coherence.
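The core quantity behind BLEU, clipped n-gram precision, can be sketched in a few lines. This is a simplified illustration (single reference, one n-gram order, no brevity penalty), not the full BLEU metric used in the evaluation:

```python
from collections import Counter

def ngram_precision(candidate: str, reference: str, n: int = 2) -> float:
    """Clipped n-gram precision: the fraction of candidate n-grams that
    also appear in the reference, with counts clipped to the reference."""
    def ngrams(text: str) -> Counter:
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))

    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum(min(c, ref[g]) for g, c in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0

score = ngram_precision("a dpia helps identify risks early",
                        "a dpia helps identify and mitigate risks")
print(score)  # 3 of 5 candidate bigrams match -> 0.6
```

Full BLEU averages this precision over several n-gram orders and applies a brevity penalty; ROUGE computes the analogous recall-oriented overlap.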

The results are presented in the Composite Scores for All Evaluated Models graph below, showcasing the model’s performance across these metrics.

[Figure: Composite Scores for All Evaluated Models]

Understanding the Graph:

  • Higher Composite Scores represent a stronger performance in generating accurate, legally valid, and contextually appropriate responses.
  • Normalization was applied to all metrics using Min-Max scaling, ensuring an equal contribution of each metric to the final score.
  • Equal Weighting was used across metrics to provide a balanced assessment of the model’s capabilities.
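The scoring scheme described above can be sketched as follows. The metric values are illustrative placeholders, not the actual evaluation results:

```python
def min_max(values):
    """Min-Max scale a list of values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def composite_scores(per_model_metrics):
    """per_model_metrics: {model: [bleu, rouge, meteor, bertscore]}.
    Each metric is Min-Max scaled across models, then the scaled
    metrics are averaged with equal weights."""
    models = list(per_model_metrics)
    n_metrics = len(next(iter(per_model_metrics.values())))
    scaled_cols = [min_max([per_model_metrics[m][i] for m in models])
                   for i in range(n_metrics)]
    return {m: sum(col[j] for col in scaled_cols) / n_metrics
            for j, m in enumerate(models)}

scores = composite_scores({
    "base":      [0.10, 0.30, 0.25, 0.80],   # illustrative values
    "finetuned": [0.25, 0.45, 0.40, 0.90],
})
print(scores)
```

Note that with only two models, Min-Max scaling degenerates to 0 and 1 per metric; with the full set of evaluated models the composite scores spread across the interval.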

Limitations and Future Work

Despite its strong performance on GDPR compliance tasks, the model may struggle with edge cases or complex legal nuances. Its accuracy could be further improved by expanding the dataset to cover additional legal scenarios and by incorporating domain-specific datasets from other regulatory frameworks.

Future improvements will focus on:

  • Expanding the dataset size and diversity.
  • Conducting more fine-tuning iterations to address subtle legal interpretations.
  • Potentially integrating legal reasoning from other regulatory domains beyond GDPR.
Safetensors · Model size: 3.86B params · Tensor types: F32, FP16, U8