---
license: gemma
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
model-index:
- name: eu-ai-act-align
results: []
pipeline_tag: question-answering
---
# eu-ai-act-align
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) trained on over 1,000 question-and-answer pairs about the EU AI Act.
It achieves the following results on the evaluation set:
- Loss: 1.7628
## Model description

**EU AI Act Model**

**Orcawise contributors:** Ikechukwu Ogbuchi, Yu Li, Dami Akinniranye, Wahab Adesanya, Sanpreet Singh, Kevin Neary

### Overview

The EU AI Act Model, built on Gemma, is a specialized language model designed to provide expert-level insights and guidance on the European Union's Artificial Intelligence Act. It has been fine-tuned on a dataset of just over 1,000 rigorously curated question-and-answer pairs covering various aspects of the AI Act. It aims to serve as a reliable resource for legal and compliance professionals, aiding organizations in understanding and adhering to the requirements set forth by the EU regulations.

### Key features

- **Domain-specific expertise:** Tailored to the intricacies and legal nuances of the EU AI Act, making it a valuable tool for stakeholders who need precise and actionable information.
- **Compliance assistance:** Facilitates a better understanding of compliance obligations under the AI Act, helping organizations mitigate risks and avoid potential penalties.
- **Accessible guidance:** Designed to simplify complex legal language, making the regulations accessible and understandable to professionals without a legal background.

### Use cases

- **Legal consultation:** Assist legal professionals by providing quick answers to common questions about the EU AI Act.
- **Compliance strategy development:** Support compliance officers in developing and refining AI governance frameworks.
- **Educational tool:** Serve as an educational resource for training corporate teams on the implications and requirements of the AI Act.

### How to use

Input a question related to the EU AI Act and the model will generate a response reflecting its understanding of the act's provisions. Example queries include:

- "What are the obligations for high-risk AI systems under the EU AI Act?"
- "How does the AI Act classify AI systems?"
- "What are the penalties for non-compliance with the AI Act?"
### Model training and performance

The model was trained on a diverse set of questions covering all key areas of the EU AI Act, giving it broad coverage of the subject matter. While the model produces high-quality outputs, users should treat them as a starting point for further legal consultation and not as standalone legal advice.

### Contributions and feedback

We welcome contributions and feedback from the community to continuously improve the model's accuracy and usability. If you have suggestions or want to contribute to the training dataset, please contact us through our Hugging Face repository.
## Intended uses & limitations
The model is intended as a preliminary guide to understanding the Act; detailed information about the Act should be verified against the official public documents. For a good model response, frame questions specifically with respect to the EU AI Act rather than asking generic or non-specific questions.
## Training and evaluation data
Training used 1,023 question-and-answer pairs, fine-tuning the Gemma 2B base model.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative training-setup sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
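
A hedged sketch of how these hyperparameters might be assembled into a TRL `SFTTrainer` run with a LoRA adapter; this is not the actual training script. The dataset file, text field, sequence length, LoRA settings, and TRL version (assumed ~0.8, matching Transformers 4.39 / PEFT 0.10) are assumptions.

```python
# Illustrative sketch only; dataset path, LoRA config, and sequence length are assumed.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

base_model_id = "google/gemma-2b"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id)

# Hypothetical JSONL file of ~1,023 EU AI Act Q&A pairs with a "text" column.
dataset = load_dataset("json", data_files="eu_ai_act_qa.jsonl", split="train")

peft_config = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16, lora_dropout=0.05)

args = TrainingArguments(
    output_dir="eu-ai-act-align",
    learning_rate=1e-4,             # learning_rate: 0.0001
    per_device_train_batch_size=4,  # train_batch_size: 4
    per_device_eval_batch_size=1,   # eval_batch_size: 1
    seed=42,                        # seed: 42
    lr_scheduler_type="linear",     # lr_scheduler_type: linear
    num_train_epochs=15,            # num_epochs: 15
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer.
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    peft_config=peft_config,
    tokenizer=tokenizer,
    dataset_text_field="text",  # assumed column name
    max_seq_length=512,         # assumed sequence length
)
trainer.train()
```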
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.139 | 1.0 | 230 | 1.9804 |
| 1.9368 | 2.0 | 460 | 1.8491 |
| 1.8613 | 3.0 | 690 | 1.8011 |
| 1.8008 | 4.0 | 920 | 1.7763 |
| 1.7447 | 5.0 | 1150 | 1.7634 |
| 1.6942 | 6.0 | 1380 | 1.7563 |
| 1.6558 | 7.0 | 1610 | 1.7513 |
| 1.6192 | 8.0 | 1840 | 1.7446 |
| 1.5782 | 9.0 | 2070 | 1.7573 |
| 1.5463 | 10.0 | 2300 | 1.7628 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2