rdpahalavan committed
Commit 42eaff1 (parent 89da5f2): Update README.md
Files changed: README.md (+188 −19)
---
tags:
- network intrusion detection
- network packets
- cybersecurity
- CIC-IDS2017
---
We have developed a Python package as a wrapper around the Hugging Face Hub and Hugging Face Datasets libraries to access this dataset easily.

# NIDS Datasets

The `nids-datasets` package provides functionality to download and use specially curated datasets extracted from the original UNSW-NB15 and CIC-IDS2017 datasets. These datasets, which were originally flow-level only, have been enhanced with packet-level information from the raw PCAP files. Together they contain both packet-level and flow-level data for over 230 million packets: 179 million packets from UNSW-NB15 and 54 million packets from CIC-IDS2017.

## Installation

Install the `nids-datasets` package using pip:

```shell
pip install nids-datasets
```

Import the package in your Python script:

```python
from nids_datasets import Dataset, DatasetInfo
```

## Dataset Information

The `nids-datasets` package currently supports two datasets: [UNSW-NB15](https://research.unsw.edu.au/projects/unsw-nb15-dataset) and [CIC-IDS2017](https://www.unb.ca/cic/datasets/ids-2017.html). Each contains a mix of normal traffic and different types of attack traffic, identified by their respective class labels. The UNSW-NB15 dataset has 10 unique class labels, and the CIC-IDS2017 dataset has 25 unique class labels.

- UNSW-NB15 labels: 'normal', 'exploits', 'dos', 'fuzzers', 'generic', 'reconnaissance', 'worms', 'shellcode', 'backdoor', 'analysis'
- CIC-IDS2017 labels: 'BENIGN', 'FTP-Patator', 'SSH-Patator', 'DoS slowloris', 'DoS Slowhttptest', 'DoS Hulk', 'Heartbleed', 'Web Attack – Brute Force', 'Web Attack – XSS', 'Web Attack – SQL Injection', 'Infiltration', 'Bot', 'PortScan', 'DDoS', 'normal', 'exploits', 'dos', 'fuzzers', 'generic', 'reconnaissance', 'worms', 'shellcode', 'backdoor', 'analysis', 'DoS GoldenEye'

## Subsets of the Dataset

Each dataset consists of four subsets:

1. Network-Flows - flow-level data.
2. Packet-Fields - packet header information.
3. Packet-Bytes - packet byte values in the range 0-255.
4. Payload-Bytes - payload byte values in the range 0-255.

Each subset contains 18 files (except Network-Flows, which has one file), stored in parquet format. In total, the package provides access to 110 files. You can download all subsets, or select specific subsets or specific files, depending on your analysis requirements.
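As a quick sanity check, those counts can be reproduced with a few lines of Python:

```python
# Per dataset: Packet-Fields, Packet-Bytes, and Payload-Bytes have 18 files each,
# and Network-Flows contributes a single file.
files_per_dataset = 3 * 18 + 1  # 55
total_files = 2 * files_per_dataset  # UNSW-NB15 + CIC-IDS2017
print(total_files)  # 110
```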
## Getting Information on the Datasets

The `DatasetInfo` function provides a summary of the dataset as a pandas dataframe. It displays the number of packets for each class label across all 18 files in the dataset. This overview can guide you in selecting specific files for download and analysis.

```python
df = DatasetInfo(dataset='UNSW-NB15') # or dataset='CIC-IDS2017'
df
```
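Since the summary is a per-file, per-label table of packet counts, ordinary pandas operations can shortlist files worth downloading. A sketch with a toy frame standing in for the real summary (the column names here are assumptions, not the package's documented schema):

```python
import pandas as pd

# Toy stand-in for the DatasetInfo summary: one row per file,
# one column per class label (hypothetical column names).
summary = pd.DataFrame(
    {'file': [1, 2, 3], 'normal': [1000, 800, 0], 'dos': [0, 50, 200]}
)

# Shortlist files that actually contain 'dos' packets before downloading.
dos_files = summary.loc[summary['dos'] > 0, 'file'].tolist()
print(dos_files)  # [2, 3]
```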
## Downloading the Datasets

The `Dataset` class lets you specify the dataset, the subsets, and the files you are interested in; the specified data is then downloaded.

```python
dataset = 'UNSW-NB15' # or 'CIC-IDS2017'
subset = ['Network-Flows', 'Packet-Fields', 'Payload-Bytes'] # or 'all' for all subsets
files = [3, 5, 10] # or 'all' for all files

data = Dataset(dataset=dataset, subset=subset, files=files)
data.download()
```

The directory structure after downloading files:

```
UNSW-NB15
├───Network-Flows
│   └───UNSW_Flow.parquet
├───Packet-Fields
│   ├───Packet_Fields_File_3.parquet
│   ├───Packet_Fields_File_5.parquet
│   └───Packet_Fields_File_10.parquet
└───Payload-Bytes
    ├───Payload_Bytes_File_3.parquet
    ├───Payload_Bytes_File_5.parquet
    └───Payload_Bytes_File_10.parquet
```

You can then load the parquet files using pandas:

```python
import pandas as pd
df = pd.read_parquet('UNSW-NB15/Packet-Fields/Packet_Fields_File_10.parquet')
```

## Merging Subsets

The `merge()` method merges the data for each packet across all subsets, giving you both flow-level and packet-level information in a single file.

```python
data.merge()
```

By default, `merge()` uses the details specified when instantiating the `Dataset` class. You can also pass `subset` (a list of subsets) and `files` (a list of file numbers) to control what gets merged.
The directory structure after merging files:

```
UNSW-NB15
├───Network-Flows
│   └───UNSW_Flow.parquet
├───Packet-Fields
│   ├───Packet_Fields_File_3.parquet
│   ├───Packet_Fields_File_5.parquet
│   └───Packet_Fields_File_10.parquet
├───Payload-Bytes
│   ├───Payload_Bytes_File_3.parquet
│   ├───Payload_Bytes_File_5.parquet
│   └───Payload_Bytes_File_10.parquet
└───Network-Flows+Packet-Fields+Payload-Bytes
    ├───Network_Flows+Packet_Fields+Payload_Bytes_File_3.parquet
    ├───Network_Flows+Packet_Fields+Payload_Bytes_File_5.parquet
    └───Network_Flows+Packet_Fields+Payload_Bytes_File_10.parquet
```
## Extracting Bytes

The Packet-Bytes and Payload-Bytes subsets contain only the first 1500-1600 bytes of each packet. To retrieve all bytes (up to 65535) from the Packet-Bytes and Payload-Bytes subsets, use the `bytes()` method. This method requires the corresponding files in the Packet-Fields subset to operate. You can specify how many bytes to extract with the `max_bytes` parameter.

```python
data.bytes(payload=True, max_bytes=2500)
```

Use `packet=True` to extract packet bytes instead. You can also pass `files` (a list of file numbers) to restrict which files bytes are retrieved from.

The directory structure after extracting bytes:

```
UNSW-NB15
├───Network-Flows
│   └───UNSW_Flow.parquet
├───Packet-Fields
│   ├───Packet_Fields_File_3.parquet
│   ├───Packet_Fields_File_5.parquet
│   └───Packet_Fields_File_10.parquet
├───Payload-Bytes
│   ├───Payload_Bytes_File_3.parquet
│   ├───Payload_Bytes_File_5.parquet
│   └───Payload_Bytes_File_10.parquet
├───Network-Flows+Packet-Fields+Payload-Bytes
│   ├───Network_Flows+Packet_Fields+Payload_Bytes_File_3.parquet
│   ├───Network_Flows+Packet_Fields+Payload_Bytes_File_5.parquet
│   └───Network_Flows+Packet_Fields+Payload_Bytes_File_10.parquet
└───Payload-Bytes-2500
    ├───Payload_Bytes_File_3.parquet
    ├───Payload_Bytes_File_5.parquet
    └───Payload_Bytes_File_10.parquet
```
## Reading the Datasets

The `read()` method reads files using Hugging Face's `load_dataset` method, one subset at a time. The `dataset` and `files` parameters are optional if the same details were used to instantiate the `Dataset` class.

```python
dataset = data.read(dataset='UNSW-NB15', subset='Packet-Fields', files=[1, 2])
```

The `read()` method returns a dataset that you can convert to a pandas dataframe or save to CSV, parquet, or any other desired file format:

```python
df = dataset.to_pandas()
dataset.to_csv('file_path_to_save.csv')
dataset.to_parquet('file_path_to_save.parquet')
```

For scenarios where you want to process one packet at a time, you can use the `stream=True` parameter:

```python
dataset = data.read(dataset='UNSW-NB15', subset='Packet-Fields', files=[1, 2], stream=True)
print(next(iter(dataset)))
```
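A streaming dataset is a plain Python iterable, so standard-library tools such as `itertools.islice` can bound how many packets you pull. A sketch using a stand-in generator in place of the real streaming dataset (which requires the download first):

```python
from itertools import islice

# Stand-in for a streaming dataset: a generator yielding packet records.
# The real object returned by data.read(..., stream=True) is iterated the same way.
def packet_stream():
    for i in range(1_000_000):
        yield {'packet_id': i, 'label': 'normal'}

# Process only the first three packets without materializing the rest.
first_three = list(islice(packet_stream(), 3))
print([p['packet_id'] for p in first_three])  # [0, 1, 2]
```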
## Notes

These datasets are large, and depending on the subset(s) selected and the number of bytes extracted, operations can be resource-intensive. Make sure you have sufficient disk space and RAM before using this package.
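Before a large download, it can help to confirm free disk space from Python. A minimal stdlib sketch (the 50 GiB threshold is an arbitrary example, not a package requirement):

```python
import shutil

# Free space on the filesystem holding the target download directory.
free_bytes = shutil.disk_usage('.').free
free_gib = free_bytes / (1024 ** 3)

# Hypothetical threshold: warn early rather than fail mid-download.
if free_gib < 50:
    print(f"Only {free_gib:.1f} GiB free - consider downloading fewer subsets/files.")
else:
    print(f"{free_gib:.1f} GiB free - proceeding.")
```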