---
tags:
- generated_from_trainer
model-index:
- name: rubert-base-srl-seqlabeling
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# rubert-base-srl-seqlabeling

+
This model is a fine-tuned version of [./ruBert-base/](https://huggingface.co/./ruBert-base/) on an unknown dataset.
|
15 |
+
It achieves the following results on the evaluation set:
|
16 |
+
- Loss: 0.2417
|
17 |
+
- Predicate Precision: 0.9323
|
18 |
+
- Predicate Recall: 0.9612
|
19 |
+
- Predicate F1: 0.9466
|
20 |
+
- Predicate Number: 129
|
21 |
+
- Инструмент Precision: 0.0
|
22 |
+
- Инструмент Recall: 0.0
|
23 |
+
- Инструмент F1: 0.0
|
24 |
+
- Инструмент Number: 1
|
25 |
+
- Каузатор Precision: 0.7667
|
26 |
+
- Каузатор Recall: 0.6301
|
27 |
+
- Каузатор F1: 0.6917
|
28 |
+
- Каузатор Number: 73
|
29 |
+
- Экспериенцер Precision: 0.6939
|
30 |
+
- Экспериенцер Recall: 0.8293
|
31 |
+
- Экспериенцер F1: 0.7556
|
32 |
+
- Экспериенцер Number: 41
|
33 |
+
- Overall Precision: 0.8430
|
34 |
+
- Overall Recall: 0.8361
|
35 |
+
- Overall F1: 0.8395
|
36 |
+
- Overall Accuracy: 0.9584
|
37 |
+
|
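The role labels are the model's Russian SRL roles: Инструмент (instrument), Каузатор (causer) and Экспериенцер (experiencer); "Number" is the support, i.e. how many gold spans of that role the evaluation set contains (Инструмент occurs only once, so its zero scores carry little signal). The per-role figures look like the entity-level precision/recall/F1 that `seqeval` reports for token classification. A minimal, hypothetical sketch of computing such scores, assuming a BIO-style tag scheme (the exact label format used by this model is not documented in this card):

```python
# Hypothetical example: the tag sequences below are illustrative, not taken from this model.
from seqeval.metrics import accuracy_score, classification_report

gold = [["B-Predicate", "O", "B-Каузатор", "I-Каузатор", "O", "B-Экспериенцер"]]
pred = [["B-Predicate", "O", "B-Каузатор", "O", "O", "B-Экспериенцер"]]

# Entity-level precision/recall/F1 per role; "support" corresponds to the "Number" values above.
print(classification_report(gold, pred, digits=4))

# Token-level accuracy, analogous to "Overall Accuracy" above.
print(accuracy_score(gold, pred))
```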
## Model description

More information needed

## Intended uses & limitations

More information needed

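As a rough, unofficial illustration of how a token-classification checkpoint like this one is usually loaded with 🤗 Transformers (the repository path and the example sentence below are placeholders, not taken from this card):

```python
# Hedged sketch: "rubert-base-srl-seqlabeling" is a placeholder for the actual
# Hub repo id or local directory, and the input sentence is arbitrary.
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_path = "rubert-base-srl-seqlabeling"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForTokenClassification.from_pretrained(model_path)

# aggregation_strategy="simple" merges sub-word tokens back into labeled spans.
srl = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(srl("Мальчик разбил окно камнем."))  # "The boy broke the window with a stone."
```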
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10.0

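For reference, these settings map approximately onto the following `TrainingArguments`; this is a sketch, not the original training script (which is not part of this card), and `output_dir` is a placeholder:

```python
# Sketch of TrainingArguments roughly matching the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="rubert-base-srl-seqlabeling",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,   # reported as train_batch_size above
    per_device_eval_batch_size=8,     # reported as eval_batch_size above
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-6,
    lr_scheduler_type="cosine",
    warmup_ratio=0.06,
    num_train_epochs=10.0,
)
```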
### Training results

| Training Loss | Epoch | Step | Validation Loss | Predicate Precision | Predicate Recall | Predicate F1 | Predicate Number | Инструмент Precision | Инструмент Recall | Инструмент F1 | Инструмент Number | Каузатор Precision | Каузатор Recall | Каузатор F1 | Каузатор Number | Экспериенцер Precision | Экспериенцер Recall | Экспериенцер F1 | Экспериенцер Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|:----------------:|:------------:|:----------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:------------------:|:---------------:|:-----------:|:---------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.2462 | 1.0 | 54 | 0.1554 | 0.8897 | 1.0 | 0.9416 | 129 | 0.0 | 0.0 | 0.0 | 1 | 0.7708 | 0.5068 | 0.6116 | 73 | 0.6047 | 0.6341 | 0.6190 | 41 | 0.8136 | 0.7869 | 0.8 | 0.9486 |
| 0.1863 | 2.0 | 108 | 0.1268 | 0.9014 | 0.9922 | 0.9446 | 129 | 0.0 | 0.0 | 0.0 | 1 | 0.8444 | 0.5205 | 0.6441 | 73 | 0.6829 | 0.6829 | 0.6829 | 41 | 0.8509 | 0.7951 | 0.8220 | 0.9557 |
| 0.0668 | 3.0 | 162 | 0.1288 | 0.9338 | 0.9845 | 0.9585 | 129 | 0.0 | 0.0 | 0.0 | 1 | 0.8148 | 0.6027 | 0.6929 | 73 | 0.6957 | 0.7805 | 0.7356 | 41 | 0.8602 | 0.8320 | 0.8458 | 0.9600 |
| 0.039 | 4.0 | 216 | 0.1695 | 0.9007 | 0.9845 | 0.9407 | 129 | 0.0 | 0.0 | 0.0 | 1 | 0.8298 | 0.5342 | 0.6500 | 73 | 0.6441 | 0.9268 | 0.76 | 41 | 0.8259 | 0.8361 | 0.8310 | 0.9557 |
| 0.0187 | 5.0 | 270 | 0.1955 | 0.9323 | 0.9612 | 0.9466 | 129 | 0.0 | 0.0 | 0.0 | 1 | 0.75 | 0.5753 | 0.6512 | 73 | 0.7105 | 0.6585 | 0.6835 | 41 | 0.8502 | 0.7910 | 0.8195 | 0.9551 |
| 0.0216 | 6.0 | 324 | 0.2083 | 0.9394 | 0.9612 | 0.9502 | 129 | 0.0 | 0.0 | 0.0 | 1 | 0.7586 | 0.6027 | 0.6718 | 73 | 0.6829 | 0.6829 | 0.6829 | 41 | 0.8485 | 0.8033 | 0.8253 | 0.9562 |
| 0.0176 | 7.0 | 378 | 0.2203 | 0.9323 | 0.9612 | 0.9466 | 129 | 0.0 | 0.0 | 0.0 | 1 | 0.7273 | 0.6575 | 0.6906 | 73 | 0.68 | 0.8293 | 0.7473 | 41 | 0.8273 | 0.8443 | 0.8357 | 0.9578 |
| 0.0037 | 8.0 | 432 | 0.2313 | 0.9323 | 0.9612 | 0.9466 | 129 | 0.0 | 0.0 | 0.0 | 1 | 0.7541 | 0.6301 | 0.6866 | 73 | 0.6809 | 0.7805 | 0.7273 | 41 | 0.8382 | 0.8279 | 0.8330 | 0.9567 |
| 0.0089 | 9.0 | 486 | 0.2409 | 0.9323 | 0.9612 | 0.9466 | 129 | 0.0 | 0.0 | 0.0 | 1 | 0.7705 | 0.6438 | 0.7015 | 73 | 0.6939 | 0.8293 | 0.7556 | 41 | 0.8436 | 0.8402 | 0.8419 | 0.9589 |
| 0.0043 | 10.0 | 540 | 0.2417 | 0.9323 | 0.9612 | 0.9466 | 129 | 0.0 | 0.0 | 0.0 | 1 | 0.7667 | 0.6301 | 0.6917 | 73 | 0.6939 | 0.8293 | 0.7556 | 41 | 0.8430 | 0.8361 | 0.8395 | 0.9584 |

### Framework versions

- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3