---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
library_name: peft
---

# Model Card for LLaMA 3.1 8B Instruct - Cybersecurity Fine-tuned

This model is a fine-tuned version of the LLaMA 3.1 8B Instruct model, specifically adapted for cybersecurity-related tasks.

## Model Details

### Model Description

This model is based on the LLaMA 3.1 8B Instruct model and has been fine-tuned on a custom dataset of cybersecurity-related questions and answers. It is designed to provide more accurate and relevant responses to queries in the cybersecurity domain.

- **Developed by:** [Your Name/Organization]
- **Model type:** Instruct-tuned Large Language Model
- **Language(s) (NLP):** English (primary), with potential for limited multilingual capabilities
- **License:** [Specify the license, likely related to the original LLaMA 3.1 license]
- **Finetuned from model:** meta-llama/Meta-Llama-3.1-8B-Instruct

### Model Sources [optional]

- **Repository:** [Link to your Hugging Face repository]
- **Paper [optional]:** [If you've written a paper about this fine-tuning, link it here]
- **Demo [optional]:** [If you have a demo of the model, link it here]

## Uses

### Direct Use

This model can be used for a variety of cybersecurity-related tasks, including:

- Answering questions about cybersecurity concepts and practices
- Providing explanations of cybersecurity threats and vulnerabilities
- Assisting in the interpretation of security logs and indicators of compromise
- Offering guidance on best practices for cyber defense

### Out-of-Scope Use

This model should not be used for:

- Generating or assisting in the creation of malicious code
- Providing legal or professional security advice without expert oversight
- Making critical security decisions without human verification

## Bias, Risks, and Limitations

- The model may reflect biases present in its training data and the original LLaMA 3.1 model.
- It may occasionally generate incorrect or inconsistent information, especially for very specific or novel cybersecurity topics.
- The model's knowledge is limited to its training data cutoff and does not include real-time threat intelligence.

### Recommendations

Users should verify critical information and consult cybersecurity professionals for important decisions. The model should be used as an assistant tool, not as a replacement for expert knowledge or up-to-date threat intelligence.

## How to Get Started with the Model

Use the following code to get started with the model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel, PeftConfig

# Load the base model and attach the fine-tuned adapter
model_name = "your-username/llama3-cybersecurity"
config = PeftConfig.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, model_name)

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Example usage
prompt = "What are some common indicators of a ransomware attack?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
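
Because the base model is instruct-tuned, prompts formatted with the tokenizer's chat template will generally produce better results than raw text. The snippet below is a minimal sketch, assuming the adapter keeps the base Llama 3.1 Instruct chat format; the system prompt is only an example.

```python
# Sketch: prompting via the chat template (assumes the base Llama 3.1 Instruct
# chat format still applies after fine-tuning; adjust the system prompt as needed).
messages = [
    {"role": "system", "content": "You are a helpful cybersecurity assistant."},
    {"role": "user", "content": "What are some common indicators of a ransomware attack?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids=input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```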

## Training Details

### Training Data

The model was fine-tuned on a custom dataset of cybersecurity-related questions and answers. [Add more details about your dataset here]

### Training Procedure

#### Training Hyperparameters

- **Training regime:** bf16 mixed precision
- **Optimizer:** AdamW
- **Learning rate:** 5e-5
- **Batch size:** 4
- **Gradient accumulation steps:** 4
- **Epochs:** 5
- **Max steps:** 4000
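
The training script itself is not published here. Purely as an illustration, the hyperparameters listed above would map onto Hugging Face `TrainingArguments` roughly as follows; the dataset, LoRA setup, trainer, and data collator are omitted and would need to be supplied.

```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters; not the actual training script.
training_args = TrainingArguments(
    output_dir="llama3-cybersecurity",
    bf16=True,                       # bf16 mixed precision
    optim="adamw_torch",             # AdamW optimizer
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,   # effective batch size of 16
    num_train_epochs=5,
    max_steps=4000,                  # when set, max_steps takes precedence over epochs
    logging_steps=50,
    save_strategy="steps",
    save_steps=500,
)
```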

## Evaluation

The model was evaluated using a custom YARA-based evaluation.
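
The details of that evaluation harness are not documented here. As one illustration of a check in this spirit (not the actual evaluation), a model-generated YARA rule can at least be verified to compile using the `yara-python` package:

```python
import yara  # pip install yara-python

def rule_compiles(rule_text: str) -> bool:
    """Return True if a model-generated YARA rule compiles cleanly."""
    try:
        yara.compile(source=rule_text)
        return True
    except yara.SyntaxError:
        return False

# Hypothetical usage with a model-generated rule string
example_rule = 'rule demo { strings: $a = "suspicious" condition: $a }'
print(rule_compiles(example_rule))
```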

## Environmental Impact

- **Hardware Type:** NVIDIA A100
- **Hours used:** 12
- **Cloud Provider:** vast.io

## Technical Specifications [optional]

### Model Architecture and Objective

This model uses the LLaMA 3.1 8B architecture with additional LoRA adapters for fine-tuning. It was trained using a causal language modeling objective on cybersecurity-specific data.
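
The adapter hyperparameters (rank, alpha, dropout, target modules) are not listed in this card. The sketch below shows the general shape of a LoRA setup with PEFT using placeholder values, not the configuration actually used for this model.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Illustrative LoRA setup; r, lora_alpha, lora_dropout, and target_modules are
# placeholder values, not the settings used for this fine-tune.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```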

### Compute Infrastructure

#### Hardware

Single NVIDIA A100 GPU

#### Software

- Python 3.8+
- PyTorch 2.0+
- Transformers 4.28+
- PEFT 0.12.0

## Model Card Authors [optional]

Wyatt Roersma

## Model Card Contact

Email me at [email protected] with questions.