---
license: apache-2.0
language:
- en
- ar
metrics:
- accuracy
- f1
- recall
base_model: google/gemma-2-9b-it
library_name: transformers
tags: [regulatory agent, OSHA, compliance, safety, legal filtering]
---
# Agent Card for GhaouiY/gemma-2-9b-it_SafeguardAI
GhaouiY/gemma-2-9b-it_SafeguardAI is a specialized agent designed to handle regulatory and compliance-related queries. It is trained to distinguish safe-to-answer questions from those that require deferral to a qualified professional, with a focus on OSHA standards and on medical, legal, and life-or-death situations.
## Agent Details
### Agent Description
This agent, built by fine-tuning Google's `gemma-2-9b-it` model, acts as a regulatory agent responsible for identifying sensitive questions related to medical diagnosis or treatment, life-or-death situations, legal matters, and certification requirements. It provides clear, safe responses aligned with OSHA guidelines or advises users to consult professionals when necessary. The agent is particularly useful in environments where safety and compliance are critical.
- **Developed by:** GhaouiY
- **Agent type:** Causal Language Agent (gemma-2-9b-it architecture)
- **Language(s) (NLP):** English
## Uses
### Direct Use
The agent is used directly in environments that require automated handling of queries related to occupational safety, health regulations, legal concerns, and certification matters. It ensures compliance by providing only safe, approved responses or directing users to appropriate professionals.
### Out-of-Scope Use
The agent is not designed to provide detailed medical, legal, or certification-related advice. Misuse of the agent for providing specific guidance in these areas could lead to incorrect or harmful outcomes.
## Bias, Risks, and Limitations
The agent is designed to filter and avoid providing responses to sensitive queries, but it is not foolproof. Users should be aware that the agent's filtering mechanism is based on patterns learned from the training data and might not cover every possible scenario.
### Recommendations
Users should be aware of the risks, biases, and limitations of the agent. It is strongly recommended to use the agent in conjunction with professional oversight, especially when handling critical safety, health, or legal matters.
## How to Get Started with the Agent
Use the code below to get started with the agent.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

agent_name = "GhaouiY/gemma-2-9b-it_SafeguardAI"
# bf16 + device_map="auto" (requires `accelerate`); adjust to your hardware.
reg_agent_model = AutoModelForCausalLM.from_pretrained(
    agent_name, torch_dtype=torch.bfloat16, device_map="auto"
)
reg_agent_tokenizer = AutoTokenizer.from_pretrained(agent_name)

Reg_Agent_Prompt = '''You are a regulatory agent responsible for identifying questions related to medical diagnosis or treatment, literal life-or-death situations, legal matters, and certification requirements. Your task is to avoid providing responses to such inquiries, but you should keep those made for the OSHA Assistant. There are only two possible answers you can provide:
1. If the question is unrelated to these sensitive topics and pertains to general safety measures, especially if they align with OSHA guidelines, respond with 'APPROVED QUERY.'
2. If the question involves medical concerns, life-or-death situations, legal implications, or certification requirements, respectfully decline to answer. Advise the user to consult a qualified professional or safety officer for accurate guidance.
It is okay to answer questions about these related fields, but make sure it is not about crucial information.'''

def generate_answer_reg_agent(query, model=reg_agent_model, tokenizer=reg_agent_tokenizer):
    # Wrap the regulatory prompt and the user question in the gemma chat template.
    input_text = tokenizer.apply_chat_template(
        [{"role": "user", "content": f"{Reg_Agent_Prompt}\nQUESTION: {query}"}],
        tokenize=False,
        add_generation_prompt=True,
    )
    # Tokenize, truncating to 4096 tokens (adjust max length to your context needs).
    encodeds = tokenizer(
        input_text,
        return_tensors="pt",
        add_special_tokens=True,
        padding=True,
        truncation=True,
        max_length=4096,
    ).to(model.device)
    # Stream tokens to stdout as they are generated.
    streamer = TextStreamer(tokenizer, skip_prompt=True)
    with torch.no_grad():
        outputs = model.generate(
            input_ids=encodeds["input_ids"],
            attention_mask=encodeds["attention_mask"],
            max_new_tokens=1024,
            streamer=streamer,
        )
    # Decode the full sequence and keep only the model's turn.
    generated_answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return generated_answer.split("\nmodel\n")[-1].strip()

response = generate_answer_reg_agent(
    "Can the safety threshold for oxygen levels in atmosphere-supplying respirators be considered uniform across all altitudes?"
)
print(response)
```
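Because the agent is intended to sit in front of a downstream OSHA assistant, a minimal routing sketch could look like the following. `osha_assistant_answer` is a hypothetical stand-in for whatever answering model you pair with the filter, and the exact wording of the approval string may vary slightly between generations.
```python
# Illustrative gating pattern: only approved queries reach the downstream assistant.
def answer_with_safeguard(query):
    verdict = generate_answer_reg_agent(query)
    if verdict.strip().startswith("APPROVED QUERY"):
        return osha_assistant_answer(query)  # hypothetical downstream OSHA assistant
    return verdict  # the agent's polite deferral to a qualified professional
```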
### Examples of Agent Responses
Below are examples illustrating how the agent handles different types of questions. Each response can be classified as a true positive (TP), true negative (TN), false positive (FP), or false negative (FN) depending on how accurately the agent identifies the nature of the question; a TP and a TN example are shown here:
- **True Positive (TP):** The agent correctly identifies a sensitive question and advises professional consultation.
```json
{
"question": "What should I do if I believe my coworker has a serious medical condition that affects their ability to work safely?",
"response": "I understand you're concerned about your coworker's health and safety. It's important to approach this matter with sensitivity. You should report your concerns to your supervisor or HR department, as they can address the situation appropriately and ensure that your coworker receives any necessary support."
}
```
- **True Negative (TN):** The agent correctly identifies a non-sensitive question and provides an appropriate response.
```json
{
"question": "What are the OSHA regulations regarding hazardous materials exposure in the workplace?",
"response": "I understand that hazardous materials can pose serious risks. OSHA requires employers to provide proper training and safety equipment to protect workers. For specific regulations, please refer to the Hazard Communication Standard, which outlines the necessary precautions and protective measures."
}
```
## Training Details
### Training Data
The agent was fine-tuned on a synthetic dataset generated with a prompt-engineered GPT-4mini pipeline. The dataset comprises OSHA regulation Q&A pairs covering workplace safety, regulatory compliance, and related queries, with questions and responses crafted to emulate real-world scenarios.
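The exact record format is not published. Assuming simple question/response pairs like the examples shown earlier in this card, each pair would typically be rendered into the gemma chat template before supervised fine-tuning, roughly as follows:
```python
# Sketch: format one synthetic Q&A record for SFT. Field names are assumptions
# based on the example records shown in this card.
def format_training_example(record, tokenizer):
    messages = [
        {"role": "user", "content": record["question"]},
        {"role": "assistant", "content": record["response"]},
    ]
    return tokenizer.apply_chat_template(messages, tokenize=False)
```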
### Training Procedure
The agent underwent further fine-tuning using a LoRA (Low-Rank Adaptation) configuration with the following parameters:
- **LoRA Configuration:**
- `r=64`
- `lora_alpha=32`
- `lora_dropout=0.05`
- `target_modules=modules` (the projection layers selected in the training script)
- **Trainable Parameters:** Calculated and displayed as part of the training process.
- **Training Regime:** bf16 mixed precision
- **Batch Size:** 1
- **Gradient Accumulation Steps:** 4
- **Learning Rate:** 2e-5
- **Number of Epochs:** 1
- **Base Agent:** `google/gemma-2-9b-it`
- **Fine-tuning Hardware:** Paperspace A6000
- **Trainer:** SFTTrainer with Weights & Biases logging and the `paged_adamw_8bit` optimizer
The fine-tuning process involved prompt engineering and was tracked using Weights & Biases (`wandb`).
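For orientation, here is a minimal sketch of what a LoRA + `SFTTrainer` setup with the hyperparameters listed above might look like. It assumes the `peft` and `trl` libraries; the target module list, dataset, and text field name are placeholders, since the original training script is not published.
```python
# Sketch only: reproduces the listed hyperparameters, not the exact training script.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

base_id = "google/gemma-2-9b-it"
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

modules = ["q_proj", "k_proj", "v_proj", "o_proj"]  # assumed target projection layers

peft_config = LoraConfig(
    r=64,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=modules,
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="gemma-2-9b-it_SafeguardAI",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    learning_rate=2e-5,
    num_train_epochs=1,
    bf16=True,
    optim="paged_adamw_8bit",
    report_to="wandb",
)

trainer = SFTTrainer(
    model=base_model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,   # synthetic OSHA Q&A dataset (placeholder)
    dataset_text_field="text",     # assumed name of the chat-formatted text column
    peft_config=peft_config,
    args=training_args,
)
trainer.train()
```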
## Evaluation
### Testing Data, Factors & Metrics
The agent was evaluated using a test dataset comprising additional OSHA regulation queries and potential out-of-scope questions. The evaluation ensures that the agent can accurately distinguish between safe and sensitive queries.
#### Metrics
- **F1 Score:** 90%
- **Recall:** 84%
### Results
The agent achieved a 90% F1 score and an 84% recall, demonstrating its effectiveness in identifying sensitive queries and providing appropriate responses.
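As a rough illustration of how such scores can be computed, the filter can be treated as a binary classifier over a labeled test set (1 = sensitive/should decline, 0 = safe/approved). The snippet below assumes `scikit-learn` and placeholder `test_questions` / `labels` lists; the exact labeling convention behind the reported numbers is not published.
```python
# Sketch: score the filter as a binary classifier (1 = sensitive/declined, 0 = approved).
from sklearn.metrics import f1_score, recall_score

preds = [
    0 if generate_answer_reg_agent(q).strip().startswith("APPROVED QUERY") else 1
    for q in test_questions   # placeholder: held-out evaluation questions
]
print("F1:    ", f1_score(labels, preds))      # labels: placeholder ground-truth list
print("Recall:", recall_score(labels, preds))
```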
## Environmental Impact
- **Hardware Type:** Paperspace A6000
- **Cloud Provider:** Paperspace
## Technical Specifications
### Agent Architecture and Objective
The agent is based on Google's gemma-2-9b-it architecture, fine-tuned specifically for regulatory compliance and query filtering.
### Compute Infrastructure
- **Hardware:** Paperspace A6000
- **Software:** SFTTrainer (`trl`) with Weights & Biases integration
## Agent Card Contact
For questions, suggestions, or collaboration, please reach out via the Hugging Face agent card or [GitHub](https://github.com/GhaouiY).