---
license: mit
base_model: dathi103/bert-job-german
tags:
- generated_from_trainer
model-index:
- name: gerskill-bert-job
  results: []
---

# gerskill-bert-job

This model is a fine-tuned version of [dathi103/bert-job-german](https://huggingface.co/dathi103/bert-job-german) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1565
- Hard skills: precision 0.6837, recall 0.7720, F1 0.7252 (support: 364)
- Soft skills: precision 0.6800, recall 0.7727, F1 0.7234 (support: 66)
- Overall precision: 0.6831
- Overall recall: 0.7721
- Overall F1: 0.7249
- Overall accuracy: 0.9584

## Model description

More information needed

## Intended uses & limitations

More information needed. A minimal inference sketch is provided under "Example usage" at the end of this card.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

The same configuration is shown as a `TrainingArguments` sketch at the end of this card.

### Training results

Per-class columns report precision (P), recall (R), and F1 for Hard-skill entities (support 364) and Soft-skill entities (support 66); values are rounded to four decimal places.

| Training Loss | Epoch | Step | Validation Loss | Hard P | Hard R | Hard F1 | Soft P | Soft R | Soft F1 | Overall P | Overall R | Overall F1 | Overall Acc. |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:------:|:------:|:-------:|:---------:|:---------:|:----------:|:------------:|
| No log        | 1.0   | 178  | 0.1225          | 0.5902 | 0.6648 | 0.6253  | 0.6250 | 0.6061 | 0.6154  | 0.5949    | 0.6558    | 0.6239     | 0.9537       |
| No log        | 2.0   | 356  | 0.1176          | 0.6283 | 0.7198 | 0.6709  | 0.6351 | 0.7121 | 0.6714  | 0.6293    | 0.7186    | 0.6710     | 0.9563       |
| 0.1349        | 3.0   | 534  | 0.1361          | 0.6748 | 0.7637 | 0.7165  | 0.6203 | 0.7424 | 0.6759  | 0.6660    | 0.7605    | 0.7101     | 0.9591       |
| 0.1349        | 4.0   | 712  | 0.1499          | 0.6723 | 0.7665 | 0.7163  | 0.6944 | 0.7576 | 0.7246  | 0.6756    | 0.7651    | 0.7176     | 0.9587       |
| 0.1349        | 5.0   | 890  | 0.1565          | 0.6837 | 0.7720 | 0.7252  | 0.6800 | 0.7727 | 0.7234  | 0.6831    | 0.7721    | 0.7249     | 0.9584       |

### Framework versions

- Transformers 4.38.1
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
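
### Hyperparameters as code

For readers who want to reproduce this configuration, the hyperparameters listed above map onto a Transformers `TrainingArguments` object roughly as follows. This is a minimal sketch, not the original training script: `output_dir` is a placeholder, and the model, tokenizer, and dataset wiring are omitted.

```python
from transformers import TrainingArguments

# Sketch of the configuration listed under "Training hyperparameters".
# output_dir is a placeholder; it was not part of the reported hyperparameters.
training_args = TrainingArguments(
    output_dir="gerskill-bert-job",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    adam_beta1=0.9,      # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,   # and epsilon=1e-08
)
```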
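
## Example usage

The per-entity metrics above (Hard and Soft) indicate a token-classification model that tags hard and soft skills in German job-ad text. Below is a minimal inference sketch using the Transformers `pipeline` API; the repository id `dathi103/gerskill-bert-job` is an assumption inferred from the model name and base model, and the example sentence is illustrative only.

```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned checkpoint as a token-classification pipeline.
# The repo id is assumed from the model name; replace it with the actual checkpoint path.
skill_tagger = pipeline(
    "token-classification",
    model="dathi103/gerskill-bert-job",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

# Illustrative German sentence: "We are looking for a developer with Python
# experience and strong teamwork skills."
text = "Wir suchen einen Entwickler mit Python-Erfahrung und ausgeprägter Teamfähigkeit."

for entity in skill_tagger(text):
    # entity_group is expected to be a Hard- or Soft-skill label,
    # matching the two entity classes reported in the metrics above.
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```

With `aggregation_strategy="simple"`, sub-word tokens are merged so each printed row is a whole skill mention rather than a WordPiece fragment.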