
Uploaded model

  • Developed by: dad1909 (Huynh Dac Tan Dat)
  • License: RMIT

Model Card for dad1909/CyberSentinel

This repo contains a 4-bit quantized (using bitsandbytes) version of Meta's Meta-Llama-3-8B-Instruct.
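For reference, a 4-bit checkpoint like this can also be loaded with plain transformers and a BitsAndBytesConfig. This is a minimal sketch using common NF4 defaults, which are not necessarily the exact settings used to produce this checkpoint:

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# Common 4-bit (NF4) settings; assumed defaults, not confirmed for this repo.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained("dad1909/CyberSentinel")
model = AutoModelForCausalLM.from_pretrained(
    "dad1909/CyberSentinel",
    quantization_config=bnb_config,
    device_map="auto",  # 4-bit bitsandbytes requires a CUDA GPU
)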

Model Details

  • Model creator: Meta
  • Original model: Meta-Llama-3-8B-Instruct

Code for running in Google Colab using TextStreamer (recommended):

%%capture
# Installs Unsloth, Xformers (Flash Attention) and all other packages!
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps xformers trl peft accelerate bitsandbytes
# Uninstall and reinstall xformers with CUDA support
!pip uninstall -y xformers
!pip install xformers[cuda]
from unsloth import FastLanguageModel
import torch
from transformers import TextStreamer

max_seq_length = 1028  # Choose any! We auto support RoPE Scaling internally!
dtype = torch.float16  # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True  # Use 4bit quantization to reduce memory usage. Can be False.

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="dad1909/CyberSentinel",
    max_seq_length=max_seq_length,
    dtype=dtype,
    load_in_4bit=load_in_4bit
)

alpaca_prompt = """Below is a code snippet. Identify the line of code that is vulnerable and describe the type of software vulnerability.

### Code Snippet:
{}

### Vulnerability Description:
{}"""

# alpaca_prompt is defined above
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
inputs = tokenizer(
[
    alpaca_prompt.format(
        "import sqlite3\n\ndef create_table():\n    conn = sqlite3.connect(':memory:')\n    c = conn.cursor()\n    c.execute('''CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT, password TEXT)''')\n    c.execute(\"INSERT INTO users (username, password) VALUES ('user1', 'pass1')\")\n    c.execute(\"INSERT INTO users (username, password) VALUES ('user2', 'pass2')\")\n    conn.commit()\n    return conn\n\ndef vulnerable_query(conn, username):\n    c = conn.cursor()\n    query = f\"SELECT * FROM users WHERE username = '{username}'\"\n    print(f\"Executing query: {query}\")\n    c.execute(query)\n    return c.fetchall()\n\n# Create a database and a table\nconn = create_table()\n\n# Simulate a user input with SQL injection\nuser_input = \"' OR '1'='1\"\nresults = vulnerable_query(conn, user_input)\n\n# Print the results\nprint(\"Results of the query:\")\nfor row in results:\n    print(row)\n\n# Close the connection\nconn.close()\n", # instruction
        "",
    )
], return_tensors = "pt").to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 1028)
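
If you prefer to capture the generated text as a string rather than streaming it to stdout, a minimal variant reusing the same inputs is:

# Generate without a streamer and decode the output to a string.
outputs = model.generate(**inputs, max_new_tokens=1028)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])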

Install dependencies for the Transformers pipeline and AutoModelForCausalLM examples:

!pip install transformers
!pip install torch
!pip install accelerate

Transformers pipeline

import transformers
import torch

model_id = "dad1909/CyberSentinel"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a chatbot who always responds for detect software vulnerable code!"},
    {"role": "user", "content": "what is Buffer overflow?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators
)
print(outputs[0]["generated_text"][len(prompt):])
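
The pipeline call above decodes greedily by default. To sample instead, pass the standard generation flags; the values below are illustrative and not tuned for this model:

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,   # sample instead of greedy decoding
    temperature=0.6,  # illustrative value
    top_p=0.9,        # illustrative value
)
print(outputs[0]["generated_text"][len(prompt):])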

Transformers AutoModelForCausalLM

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "dad1909/CyberSentinel"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are a chatbot who always responds for detect software vulnerable code!"},
    {"role": "user", "content": "what is Buffer overflow?"},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
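
You can also send the vulnerability-detection prompt from the Colab example through this path, bypassing the chat template. A minimal sketch, assuming the model and tokenizer loaded above and the alpaca_prompt template defined earlier in this card; the snippet string is a hypothetical example:

# Hypothetical code snippet to analyze.
snippet = "query = f\"SELECT * FROM users WHERE username = '{username}'\""
inputs = tokenizer(alpaca_prompt.format(snippet, ""), return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, eos_token_id=terminators)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))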

How to use

This repository contains a fine-tuned, 4-bit quantized version of Meta-Llama-3-8B-Instruct for use with the transformers library.

Use with transformers

You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the generate() function; examples of both are shown above.

Training Data

Overview: CyberSentinel is fine-tuned on dad1909/DSV, a dataset of software-vulnerability code samples. The fine-tuning data includes publicly available instruction and output datasets.

Data Freshness: The training data is continuously updated with new vulnerability code samples.
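
As an illustration only, you could inspect the dataset and format a record with the alpaca-style prompt shown earlier; the column names "code" and "description" below are hypothetical, since the schema of dad1909/DSV is not documented in this card:

from datasets import load_dataset

dataset = load_dataset("dad1909/DSV", split="train")
print(dataset[0])  # inspect the actual column names first

# Hypothetical column names; replace with the dataset's real fields.
example = dataset[0]
prompt = alpaca_prompt.format(example["code"], example["description"])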
