---
tags:
- generated_from_trainer
model-index:
- name: rubert-base-srl-seqlabeling
  results: []
---

# rubert-base-srl-seqlabeling

This model is a fine-tuned version of ./ruBert-base/ on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.2417
- Predicate Precision: 0.9323
- Predicate Recall: 0.9612
- Predicate F1: 0.9466
- Predicate Number: 129
- Инструмент (Instrument) Precision: 0.0
- Инструмент (Instrument) Recall: 0.0
- Инструмент (Instrument) F1: 0.0
- Инструмент (Instrument) Number: 1
- Каузатор (Causer) Precision: 0.7667
- Каузатор (Causer) Recall: 0.6301
- Каузатор (Causer) F1: 0.6917
- Каузатор (Causer) Number: 73
- Экспериенцер (Experiencer) Precision: 0.6939
- Экспериенцер (Experiencer) Recall: 0.8293
- Экспериенцер (Experiencer) F1: 0.7556
- Экспериенцер (Experiencer) Number: 41
- Overall Precision: 0.8430
- Overall Recall: 0.8361
- Overall F1: 0.8395
- Overall Accuracy: 0.9584
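Each per-role F1 above is the harmonic mean of that role's precision and recall. As a quick sanity check against the reported numbers (the inputs are already rounded, so the last digit can differ slightly):

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (0.0 when both are zero)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Recompute from the rounded precision/recall values reported above.
print(f1(0.9323, 0.9612))  # Predicate F1, reported as 0.9466
print(f1(0.8430, 0.8361))  # Overall F1, reported as 0.8395
```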

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10.0
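Assuming the standard linear-warmup-then-cosine-decay schedule that `lr_scheduler_type: cosine` with `warmup_ratio: 0.06` usually denotes in transformers, and taking the step counts from the training log (54 steps per epoch, 540 total), the implied learning-rate curve can be sketched as:

```python
import math

def lr_at_step(step: int, max_lr: float = 5e-5,
               total_steps: int = 540, warmup_ratio: float = 0.06) -> float:
    """Linear warmup to max_lr, then cosine decay to zero (a sketch of the
    schedule implied by lr_scheduler_type=cosine, warmup_ratio=0.06)."""
    warmup_steps = int(total_steps * warmup_ratio)  # 32 steps here
    if step < warmup_steps:
        return max_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return max_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(lr_at_step(0))    # warmup starts from zero
print(lr_at_step(32))   # peak learning rate, 5e-05
print(lr_at_step(540))  # decayed to ~0 at the end of training
```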

### Training results

| Training Loss | Epoch | Step | Validation Loss | Predicate Precision | Predicate Recall | Predicate F1 | Predicate Number | Инструмент Precision | Инструмент Recall | Инструмент F1 | Инструмент Number | Каузатор Precision | Каузатор Recall | Каузатор F1 | Каузатор Number | Экспериенцер Precision | Экспериенцер Recall | Экспериенцер F1 | Экспериенцер Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|:----------------:|:------------:|:----------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:------------------:|:---------------:|:-----------:|:---------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.2462        | 1.0   | 54   | 0.1554          | 0.8897              | 1.0              | 0.9416       | 129              | 0.0                  | 0.0               | 0.0           | 1                 | 0.7708             | 0.5068          | 0.6116      | 73              | 0.6047                 | 0.6341              | 0.6190          | 41                  | 0.8136            | 0.7869         | 0.8        | 0.9486           |
| 0.1863        | 2.0   | 108  | 0.1268          | 0.9014              | 0.9922           | 0.9446       | 129              | 0.0                  | 0.0               | 0.0           | 1                 | 0.8444             | 0.5205          | 0.6441      | 73              | 0.6829                 | 0.6829              | 0.6829          | 41                  | 0.8509            | 0.7951         | 0.8220     | 0.9557           |
| 0.0668        | 3.0   | 162  | 0.1288          | 0.9338              | 0.9845           | 0.9585       | 129              | 0.0                  | 0.0               | 0.0           | 1                 | 0.8148             | 0.6027          | 0.6929      | 73              | 0.6957                 | 0.7805              | 0.7356          | 41                  | 0.8602            | 0.8320         | 0.8458     | 0.9600           |
| 0.039         | 4.0   | 216  | 0.1695          | 0.9007              | 0.9845           | 0.9407       | 129              | 0.0                  | 0.0               | 0.0           | 1                 | 0.8298             | 0.5342          | 0.6500      | 73              | 0.6441                 | 0.9268              | 0.76            | 41                  | 0.8259            | 0.8361         | 0.8310     | 0.9557           |
| 0.0187        | 5.0   | 270  | 0.1955          | 0.9323              | 0.9612           | 0.9466       | 129              | 0.0                  | 0.0               | 0.0           | 1                 | 0.75               | 0.5753          | 0.6512      | 73              | 0.7105                 | 0.6585              | 0.6835          | 41                  | 0.8502            | 0.7910         | 0.8195     | 0.9551           |
| 0.0216        | 6.0   | 324  | 0.2083          | 0.9394              | 0.9612           | 0.9502       | 129              | 0.0                  | 0.0               | 0.0           | 1                 | 0.7586             | 0.6027          | 0.6718      | 73              | 0.6829                 | 0.6829              | 0.6829          | 41                  | 0.8485            | 0.8033         | 0.8253     | 0.9562           |
| 0.0176        | 7.0   | 378  | 0.2203          | 0.9323              | 0.9612           | 0.9466       | 129              | 0.0                  | 0.0               | 0.0           | 1                 | 0.7273             | 0.6575          | 0.6906      | 73              | 0.68                   | 0.8293              | 0.7473          | 41                  | 0.8273            | 0.8443         | 0.8357     | 0.9578           |
| 0.0037        | 8.0   | 432  | 0.2313          | 0.9323              | 0.9612           | 0.9466       | 129              | 0.0                  | 0.0               | 0.0           | 1                 | 0.7541             | 0.6301          | 0.6866      | 73              | 0.6809                 | 0.7805              | 0.7273          | 41                  | 0.8382            | 0.8279         | 0.8330     | 0.9567           |
| 0.0089        | 9.0   | 486  | 0.2409          | 0.9323              | 0.9612           | 0.9466       | 129              | 0.0                  | 0.0               | 0.0           | 1                 | 0.7705             | 0.6438          | 0.7015      | 73              | 0.6939                 | 0.8293              | 0.7556          | 41                  | 0.8436            | 0.8402         | 0.8419     | 0.9589           |
| 0.0043        | 10.0  | 540  | 0.2417          | 0.9323              | 0.9612           | 0.9466       | 129              | 0.0                  | 0.0               | 0.0           | 1                 | 0.7667             | 0.6301          | 0.6917      | 73              | 0.6939                 | 0.8293              | 0.7556          | 41                  | 0.8430            | 0.8361         | 0.8395     | 0.9584           |
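The per-role precision/recall figures above are entity-level scores over labeled spans (seqeval-style): a predicted span counts as correct only if both its boundaries and its role label match a gold span exactly. A minimal sketch of that computation from BIO tags, using hypothetical tag sequences for illustration (not the model's actual output):

```python
def extract_spans(tags):
    """Collect (role, start, end) spans from a BIO-tagged sequence."""
    spans, start, role = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # "O" sentinel flushes the last span
        if tag.startswith("B-") or tag == "O":
            if role is not None:
                spans.append((role, start, i))
            role, start = (tag[2:], i) if tag.startswith("B-") else (None, None)
    return spans

def span_precision_recall(gold_tags, pred_tags):
    """Exact-match span precision/recall, as in entity-level evaluation."""
    gold = set(extract_spans(gold_tags))
    pred = set(extract_spans(pred_tags))
    correct = len(gold & pred)
    precision = correct / len(pred) if pred else 0.0
    recall = correct / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical example: the predicate span matches, the truncated
# Каузатор span does not, so both precision and recall are 0.5.
gold = ["B-Predicate", "O", "B-Каузатор", "I-Каузатор", "O"]
pred = ["B-Predicate", "O", "B-Каузатор", "O", "O"]
print(span_precision_recall(gold, pred))  # (0.5, 0.5)
```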

### Framework versions

- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3