Model Card for BERT4RE
Model Details
Model Description
BERT4RE is a domain-specific language model (LM) designed to support various requirements engineering (RE) tasks, including requirements classification, detection of language issues, identification of domain concepts, and establishment of requirements traceability links. BERT4RE is retrained from the generic BERTbase model using publicly available RE-related texts.
- Developed by: Muideen Ajagbe, Liping Zhao
- Shared by: Alberto Rodriguez
- Model type: Domain-specific language model
- Language(s) (NLP): English
- License: MIT
- Finetuned from model: BERTbase
Model Sources
- Paper: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9920081
Uses
Direct Use
BERT4RE can be used directly for various RE tasks such as requirements classification, detection of language issues, and identification of domain concepts from requirements text.
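A minimal usage sketch with the Hugging Face `transformers` library is shown below. It assumes the checkpoint can be loaded through `transformers`; the repository id is a placeholder for the actual BERT4RE weights.

```python
from transformers import AutoTokenizer, AutoModel

MODEL_ID = "path/to/bert4re"  # placeholder: replace with the actual BERT4RE checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# Encode a requirement statement and extract contextual token embeddings,
# e.g. as features for domain-concept identification or similarity-based tracing.
requirement = "The system shall encrypt all user data before transmission."
inputs = tokenizer(requirement, return_tensors="pt")
outputs = model(**inputs)
embeddings = outputs.last_hidden_state  # shape: (1, sequence_length, 768)
```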
Downstream Use
BERT4RE can be fine-tuned for specific RE tasks, providing enhanced performance over generic models like BERTbase. An example is its application in a multiclass classification task to identify nine different requirements concepts.
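As an illustration of such fine-tuning, the sketch below attaches a nine-way classification head to the checkpoint with `transformers`; the repository id, toy dataset, and training hyperparameters are placeholders, not the setup reported in the paper.

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

MODEL_ID = "path/to/bert4re"  # placeholder: replace with the actual BERT4RE checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# Nine output labels, one per requirements concept in the multiclass task.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=9)

# Tiny illustrative dataset; in practice, use a labelled requirements corpus.
data = Dataset.from_dict({
    "text": ["The system shall log every failed login attempt.",
             "Users must be able to export reports as PDF."],
    "label": [0, 1],
})
data = data.map(lambda batch: tokenizer(batch["text"], truncation=True,
                                        padding="max_length", max_length=128),
                batched=True)

args = TrainingArguments(output_dir="bert4re-concepts", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=data).train()
```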
Out-of-Scope Use
The model may not perform well for tasks outside the domain of requirements engineering or on datasets with significantly different characteristics from the training data.
Bias, Risks, and Limitations
Recommendations
Users should be aware of the biases and limitations inherent in the model due to the nature of the training data. Further evaluation and fine-tuning may be necessary for specific applications.
Training Details
Training Data
The training data consists of four requirements-related datasets:
- PROMISE NFR Dataset: Contains 625 requirements from 15 software development projects.
- PURE Dataset: Contains 522,444 lexical words and 865,551 tokens, with 29,000 unique words extracted.
- App Review Datasets: Contains four million app reviews with over three million unique words.
- Google Playstore App Reviews: Contains 600,000 app reviews with more than two million unique words.
Training Procedure
The retraining process follows the procedure used to train the original BERTbase model, based on the masked language modeling (MLM) and next sentence prediction (NSP) objectives. The model was retrained on a Google Cloud TPU v3-8 with the following hyperparameters (a configuration sketch follows the list):
- Batch size: 128 sequences
- Tokens per batch: 3200 tokens (128 * 25 subword tokens)
- Total sequence blocks: 1.3M
- Epochs: 40
- Optimizer: Adam with a learning rate of 1e-4 and a weight decay of 0.01; dropout rate of 0.1
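The sketch below is not the authors' TPU pipeline (which followed the original BERT codebase), but indicates how the listed hyperparameters could map onto an MLM + NSP continued pre-training setup in `transformers`; the base checkpoint name is an assumption.

```python
import torch
from transformers import BertForPreTraining, BertTokenizerFast

# Start from a generic BERTbase checkpoint (assumed here to be the uncased variant).
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")  # MLM + NSP heads

# Hyperparameters as reported above.
BATCH_SIZE = 128    # sequences per batch
SEQ_LENGTH = 25     # subword tokens per sequence (128 * 25 = 3,200 tokens per batch)
EPOCHS = 40

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)
# A dropout rate of 0.1 is BERTbase's default (hidden and attention dropout).
```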
Evaluation
Please see the cited paper for evaluation details.
Environmental Impact
- Hardware Type: Google Cloud TPU v3-8
- Cloud Provider: Google Cloud
Technical Specifications
Model Architecture and Objective
BERT4RE is based on the BERTbase architecture with 12 Transformer blocks, 768 hidden size, and 12 self-attention heads.
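For reference, a BERTbase-sized configuration with these dimensions can be expressed in `transformers` as follows; this is a sketch with randomly initialized weights, not the trained BERT4RE checkpoint.

```python
from transformers import BertConfig, BertModel

# BERTbase-sized architecture as described above.
config = BertConfig(
    num_hidden_layers=12,    # 12 Transformer blocks
    hidden_size=768,         # hidden representation size
    num_attention_heads=12,  # self-attention heads per layer
)
model = BertModel(config)  # random initialization; load the released BERT4RE weights for actual use
```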
Compute Infrastructure
Hardware
A single Google Cloud TPU v3-8 device (8 TPU cores).
Citation
BibTeX:
@INPROCEEDINGS{9920081,
  author={Ajagbe, Muideen and Zhao, Liping},
  booktitle={2022 IEEE 30th International Requirements Engineering Conference (RE)},
  title={Retraining a BERT Model for Transfer Learning in Requirements Engineering: A Preliminary Study},
  year={2022},
  pages={309-315},
  keywords={Deep learning;Bit error rate;Transfer learning;Natural language processing;Requirements engineering;Task analysis;Requirements Engineering;Requirements Classification;Language Models;BERT;Domain-Specific Language Models;Transfer Learning;Deep Learning;Machine Learning;Natural Language Processing},
  doi={10.1109/RE54965.2022.00046}
}
Model Card Authors
Muideen Ajagbe, Liping Zhao
Model Card Contact
Alberto Rodriguez ([email protected])