# req_mod_ner_modelv2
This model is a fine-tuned version of pdelobelle/robbert-v2-dutch-ner, trained on a private dataset of 300 Dutch sentences/phrases with 1,954 token labels (IOB2 format). It extracts named entities related to software requirements in Dutch. The following labels are used (a usage sketch follows the list):
- Actor (used for all types of software users and groups of users)
- COTS (abbreviation for Commercial Off-The-Shelf Software)
- Function (used for functions, functionality, and features)
- Result (used for system results, goals, and system output)
- Entity (used for all entities stored/processed by the software)
- Attribute (used for attributes of entities)
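A minimal usage sketch with the transformers pipeline API; the Hub repo id and the example sentence below are illustrative assumptions, not taken from this card:

```python
from transformers import pipeline

# Hypothetical Hub repo id; replace with the actual path of this model.
ner = pipeline(
    "token-classification",
    model="your-username/req_mod_ner_modelv2",
    aggregation_strategy="simple",  # merge B-/I- subword pieces into entity spans
)

# Illustrative Dutch requirement sentence ("The administrator can add a new user.").
text = "De beheerder kan een nieuwe gebruiker toevoegen."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```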
Please contact me via LinkedIn if you have any questions about this model or the dataset used.
The dataset and this model were created as part of the final project assignment of the Natural Language Understanding course (XCS224U) from the Professional AI Program of the Stanford School of Engineering.
The model achieves the following results on the evaluation set:
- Loss: 0.6791
- Precision: 0.7515
- Recall: 0.7299
- F1: 0.7405
- Accuracy: 0.9253
## Metrics per named entity
| NER tag | Precision | Recall | F1 | Support |
|---|---|---|---|---|
| Actor | 0.86 | 1.00 | 0.92 | 12 |
| COTS | 0.79 | 0.79 | 0.79 | 24 |
| Function | 0.73 | 0.66 | 0.69 | 62 |
| Result | 0.29 | 0.40 | 0.33 | 10 |
| Entity | 0.78 | 0.83 | 0.81 | 35 |
| Attribute | 0.92 | 0.71 | 0.80 | 31 |
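Span-level NER metrics like these are commonly computed with the seqeval library; a minimal sketch under that assumption (the evaluation code for this model is not published, and the tag sequences below are invented):

```python
from seqeval.metrics import classification_report

# Invented IOB2 sequences, for illustration only.
y_true = [["B-Actor", "O", "B-Function", "I-Function", "O"]]
y_pred = [["B-Actor", "O", "B-Function", "O", "O"]]

# seqeval scores whole entity spans, so the truncated Function span above
# counts as both a false positive and a false negative.
print(classification_report(y_true, y_pred))
```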
## Intended uses & limitations
The model performs automated extraction of functionality concepts from source documents for which software requirements need to be derived. Its intended use is as a preprocessing step ahead of question answering.
## Training and evaluation data
The model was trained on the ReqModNer dataset. This dataset is private and contains 300 sentences/phrases with 1,954 IOB2 labels (an illustrative example follows below). The dataset is split 240/30/30 into train, validation, and test sets. The reported metrics come from evaluation on the test set; the validation set was used to evaluate the model after each epoch during training.
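For illustration, an IOB2-labelled sentence might look like this (invented example, not taken from the private ReqModNer dataset):

```python
# Invented Dutch sentence ("The administrator adds a new user"), IOB2-labelled.
tokens = ["De", "beheerder", "voegt", "een", "nieuwe", "gebruiker", "toe"]
labels = ["O", "B-Actor", "O", "O", "O", "B-Entity", "O"]

for token, label in zip(tokens, labels):
    print(f"{token}\t{label}")
```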
## Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
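A hedged sketch expressing these settings as Hugging Face TrainingArguments (the original training script is not published, so this is an approximation rather than the authors' exact code):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="req_mod_ner_modelv2",  # assumed output directory
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=16,
    evaluation_strategy="epoch",  # assumption: matches the per-epoch results below
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 corresponds to the
    # transformers defaults (adam_beta1, adam_beta2, adam_epsilon).
)
```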
## Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|---|---|---|---|---|---|---|---|
| No log | 1.0 | 270 | 0.5418 | 0.6065 | 0.5402 | 0.5714 | 0.8802 |
| 0.5551 | 2.0 | 540 | 0.4299 | 0.5481 | 0.6552 | 0.5969 | 0.8896 |
| 0.5551 | 3.0 | 810 | 0.4987 | 0.6358 | 0.5517 | 0.5908 | 0.9020 |
| 0.1935 | 4.0 | 1080 | 0.5620 | 0.6159 | 0.4885 | 0.5449 | 0.8935 |
| 0.1935 | 5.0 | 1350 | 0.4922 | 0.6786 | 0.6552 | 0.6667 | 0.9121 |
| 0.0913 | 6.0 | 1620 | 0.5406 | 0.6087 | 0.5632 | 0.5851 | 0.8950 |
| 0.0913 | 7.0 | 1890 | 0.6307 | 0.7425 | 0.7126 | 0.7273 | 0.9222 |
| 0.0702 | 8.0 | 2160 | 0.4425 | 0.6684 | 0.7414 | 0.7030 | 0.9277 |
| 0.0702 | 9.0 | 2430 | 0.6028 | 0.7158 | 0.7529 | 0.7339 | 0.9285 |
| 0.0472 | 10.0 | 2700 | 0.6491 | 0.7303 | 0.7471 | 0.7386 | 0.9246 |
| 0.0472 | 11.0 | 2970 | 0.6442 | 0.7198 | 0.7529 | 0.7360 | 0.9292 |
| 0.0305 | 12.0 | 3240 | 0.5980 | 0.7412 | 0.7241 | 0.7326 | 0.9230 |
| 0.0209 | 13.0 | 3510 | 0.6186 | 0.7232 | 0.7356 | 0.7293 | 0.9238 |
| 0.0209 | 14.0 | 3780 | 0.6791 | 0.7515 | 0.7299 | 0.7405 | 0.9253 |
| 0.0148 | 15.0 | 4050 | 0.6832 | 0.7283 | 0.7241 | 0.7262 | 0.9238 |
| 0.0148 | 16.0 | 4320 | 0.6908 | 0.7412 | 0.7241 | 0.7326 | 0.9238 |
## Framework versions
- Transformers 4.24.0
- Pytorch 2.0.0
- Datasets 2.9.0
- Tokenizers 0.11.0