---
license: apache-2.0
language:
- en
tags:
- Cybersecurity
- Cyber Security
- Information Security
- Computer Science
- Cyber Threats
- Vulnerabilities
- Vulnerability
- Malware
- Attacks
---
# Model Card for CySecBERT


CySecBERT is a domain-adapted version of the BERT model tailored to cybersecurity tasks.
It was trained on a [Cybersecurity Dataset](https://github.com/PEASEC/cybersecurity_dataset) of 4.3 million entries: tweets, blog posts, papers, and CVEs from the cybersecurity domain.
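
The weights can be loaded directly with the Hugging Face `transformers` library. A minimal sketch, assuming the checkpoint is published under the hub ID `markusbayer/CySecBERT` (adjust the identifier if the weights are hosted elsewhere):

```python
# Minimal usage sketch: encode a cybersecurity sentence with CySecBERT.
# Assumption: the checkpoint is available under the hub ID "markusbayer/CySecBERT".
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("markusbayer/CySecBERT")
model = AutoModel.from_pretrained("markusbayer/CySecBERT")

text = "A buffer overflow in the TLS parser allows remote code execution."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# outputs.last_hidden_state has shape (batch, tokens, 768) for a BERT-base model.
print(outputs.last_hidden_state.shape)
```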

# Model Details

- **Developed by:** Markus Bayer, Philipp Kuehn, Ramin Shanehsaz, and Christian Reuter
- **Model type:** BERT-base
- **Language(s) (NLP):** English
- **Finetuned from model:** bert-base-uncased

## Model Sources


- **Repository:** https://github.com/PEASEC/CySecBERT
- **Paper:** https://dl.acm.org/doi/abs/10.1145/3652594 (preprint: https://arxiv.org/abs/2212.02974)


# Bias, Risks, Limitations, and Recommendations

We would like to emphasise that we did not explicitly analyse social biases in the training data or the resulting model.
While this may be of little consequence in most application contexts, there are certainly applications that are highly sensitive to such biases, where any kind of discrimination can have serious consequences.
As authors, we caution against using the model in such contexts.
Nonetheless, in the spirit of open source and the great impact it can have, we release the model and, drawing on the many previous discussions in the open source community, leave this consideration to its users.


# Training Details

## Training Data

See https://github.com/PEASEC/cybersecurity_dataset

## Training Procedure

We specifically trained CySecBERT so that it is not overly affected by catastrophic forgetting, i.e., so that it retains the general language knowledge of the original BERT model while adapting to the cybersecurity domain. More details can be found in the paper.
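
As an illustration only, continued masked-language-model pretraining of this kind can be sketched with the `transformers` `Trainer`; the corpus file, learning rate, and step count below are placeholder assumptions, not the values used for CySecBERT:

```python
# Illustrative sketch of continued MLM pretraining on a domain corpus.
# All hyperparameters below are placeholders; see the paper for the actual setup.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Placeholder corpus: one cybersecurity document per line in corpus.txt.
dataset = load_dataset("text", data_files={"train": "corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(
    output_dir="cysecbert-pretraining",
    learning_rate=5e-5,              # a conservative rate helps limit catastrophic forgetting
    per_device_train_batch_size=16,  # placeholder value
    max_steps=100_000,               # placeholder value
)

Trainer(model=model, args=args, data_collator=collator,
        train_dataset=dataset).train()
```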

# Evaluation

We evaluated CySecBERT on 15 tasks: domain-dependent extrinsic tasks (sequence tagging and classification), intrinsic tasks measuring the quality of the internal representations, and general tasks from the SuperGLUE benchmark. The details can be found in the paper.
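
As a hedged sketch of what such a downstream, extrinsic use looks like, the following fine-tunes the model for a toy binary classification task (security-relevant vs. not); the hub ID, examples, and hyperparameters are illustrative assumptions, not the paper's benchmarks:

```python
# Hypothetical fine-tuning sketch: classify whether a text describes a vulnerability.
# The hub ID, examples, and hyperparameters are placeholders.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("markusbayer/CySecBERT")
model = AutoModelForSequenceClassification.from_pretrained(
    "markusbayer/CySecBERT", num_labels=2)

texts = ["Heap overflow in libfoo allows privilege escalation.",
         "The conference keynote starts at 9 am."]
labels = torch.tensor([1, 0])  # 1 = security-relevant, 0 = not

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # toy training loop over one tiny batch
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```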

# Citation


**BibTeX:**

```bibtex
@article{10.1145/3652594,
  author = {Bayer, Markus and Kuehn, Philipp and Shanehsaz, Ramin and Reuter, Christian},
  title = {CySecBERT: A Domain-Adapted Language Model for the Cybersecurity Domain},
  year = {2024},
  issue_date = {May 2024},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  volume = {27},
  number = {2},
  issn = {2471-2566},
  url = {https://doi.org/10.1145/3652594},
  doi = {10.1145/3652594},
  abstract = {The field of cysec is evolving fast. Security professionals are in need of intelligence on past, current and — ideally — upcoming threats, because attacks are becoming more advanced and are increasingly targeting larger and more complex systems. Since the processing and analysis of such large amounts of information cannot be addressed manually, cysec experts rely on machine learning techniques. In the textual domain, pre-trained language models such as Bidirectional Encoder Representations from Transformers (BERT) have proven to be helpful as they provide a good baseline for further fine-tuning. However, due to the domain-knowledge and the many technical terms in cysec, general language models might miss the gist of textual information. For this reason, we create a high-quality dataset and present a language model specifically tailored to the cysec domain that can serve as a basic building block for cybersecurity systems. The model is compared on 15 tasks: Domain-dependent extrinsic tasks for measuring the performance on specific problems, intrinsic tasks for measuring the performance of the internal representations of the model, as well as general tasks from the SuperGLUE benchmark. The results of the intrinsic tasks show that our model improves the internal representation space of domain words compared with the other models. The extrinsic, domain-dependent tasks, consisting of sequence tagging and classification, show that the model performs best in cybersecurity scenarios. In addition, we pay special attention to the choice of hyperparameters against catastrophic forgetting, as pre-trained models tend to forget the original knowledge during further training.},
  journal = {ACM Trans. Priv. Secur.},
  month = {apr},
  articleno = {18},
  numpages = {20},
  keywords = {Language model, cybersecurity BERT, cybersecurity dataset}
}
```

or

```bibtex
@misc{https://doi.org/10.48550/arxiv.2212.02974,
  doi = {10.48550/ARXIV.2212.02974},
  url = {https://arxiv.org/abs/2212.02974},
  author = {Bayer, Markus and Kuehn, Philipp and Shanehsaz, Ramin and Reuter, Christian},
  keywords = {Cryptography and Security (cs.CR), Computation and Language (cs.CL), FOS: Computer and information sciences},
  title = {CySecBERT: A Domain-Adapted Language Model for the Cybersecurity Domain},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```

# Model Card Authors

Markus Bayer, Philipp Kuehn, Ramin Shanehsaz, Christian Reuter

# Model Card Contact

[email protected]