
Built with Axolotl

See axolotl config

axolotl version: 0.4.0

```yaml
base_model: mistralai/Mistral-7B-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: caffeinatedcherrychic/cidds-agg-balanced
    type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.1
output_dir: ./qlora-out

adapter: qlora
lora_model_dir:

sequence_len: 256
sample_packing: false
pad_to_sequence_len: true

lora_r: 32
lora_alpha: 64
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 5
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3

max_steps: 500
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 1
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.001
fsdp:
fsdp_config:
special_tokens:
```
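
The `type: alpaca` setting above means each training example is rendered with the standard Alpaca instruction template. A minimal sketch of one record, with hypothetical field contents (the actual instruction wording and flow encoding are defined by the caffeinatedcherrychic/cidds-agg-balanced dataset):

```python
# Hypothetical Alpaca-format record; the real flows come from the
# NTFA-preprocessed CIDDS data and use the dataset's own label strings.
example = {
    "instruction": "Classify the following network flow.",
    "input": "<NTFA-aggregated CIDDS flow features>",
    "output": "portScan",
}
```

Training with this config is launched the usual Axolotl way, e.g. `accelerate launch -m axolotl.cli.train config.yml`.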


qlora-out

This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the CIDDS dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1465

Mistral-based NIDS

This repository contains an implementation of a Network Intrusion Detection System (NIDS) built on the Mistral large language model (LLM). The system is designed to detect and classify network attacks using natural language processing techniques.

Overview

  • LLM:
    • The NIDS is built using the Mistral LLM, a powerful language model that enables the system to understand and analyze network traffic logs.
    • Another LLM, Llama2, was also fine-tuned and the performance of the two was compared. My Llama2-based implementation can be found here.
  • Dataset: The system is trained and evaluated on the CIDDS dataset, which includes various types of network attacks such as DoS, PortScan, Brute Force, and PingScan.
  • Training: The LLM is fine-tuned on the CIDDS dataset, pre-processed with the NTFA tool, to learn the patterns and characteristics of different network attacks.
  • Inference: The trained model is used to classify network traffic logs in real-time, identifying potential attacks and generating alerts.
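
Below is a minimal inference sketch, not the author's exact pipeline: it loads the base model in 4-bit (matching `load_in_4bit: true` above), attaches the adapter from this repository, and classifies a single flow record. The prompt wording and the flow placeholder are assumptions; `eval_max_new_tokens: 1` in the config suggests the model emits a very short class label.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE = "mistralai/Mistral-7B-v0.1"
ADAPTER = "caffeinatedcherrychic/mistral-based-NIDS"

# Load the base model in 4-bit, mirroring the QLoRA training setup.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(
    BASE, quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(model, ADAPTER)  # attach the LoRA adapter
model.eval()

# Alpaca-style prompt; the instruction text and flow record are hypothetical.
prompt = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\nClassify the following network flow.\n\n"
    "### Input:\n<NTFA-aggregated CIDDS flow record>\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=4, do_sample=False)
# Decode only the newly generated tokens (the predicted label).
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```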

Results

The Mistral-based NIDS achieves a higher detection rate with fewer false positives, demonstrating the effectiveness of using LLMs for network intrusion detection. Its performance could be improved further with longer access to computational resources.

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 8
  • optimizer: 8-bit AdamW (`adamw_bnb_8bit`) with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 10
  • training_steps: 62
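
Note that the total train batch size of 8 is simply micro_batch_size (2) × gradient_accumulation_steps (4) on a single device; with 62 optimizer steps over 5 epochs, this works out to roughly 12–13 steps per epoch, matching the evaluation schedule in the table below.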

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.6367        | 0.08  | 1    | 7.3009          |
| 2.3866        | 0.32  | 4    | 0.7138          |
| 0.948         | 0.64  | 8    | 1.0446          |
| 0.6822        | 0.96  | 12   | 1.3960          |
| 0.5222        | 1.28  | 16   | 0.9023          |
| 0.534         | 1.6   | 20   | 0.4847          |
| 0.4624        | 1.92  | 24   | 0.5740          |
| 0.7753        | 2.24  | 28   | 0.3772          |
| 0.3324        | 2.56  | 32   | 0.2937          |
| 0.1973        | 2.88  | 36   | 0.5675          |
| 0.0843        | 3.2   | 40   | 0.2360          |
| 0.3836        | 3.52  | 44   | 0.1397          |
| 0.0449        | 3.84  | 48   | 0.2801          |
| 0.2246        | 4.16  | 52   | 0.1946          |
| 0.229         | 4.48  | 56   | 0.1618          |
| 0.3073        | 4.8   | 60   | 0.1465          |

Framework versions

  • PEFT 0.10.1.dev0
  • Transformers 4.39.0.dev0
  • Pytorch 2.1.2
  • Datasets 2.18.0
  • Tokenizers 0.15.0