---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
datasets:
- essays_su_g
metrics:
- accuracy
model-index:
- name: longformer-sep_tok
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: essays_su_g
      type: essays_su_g
      config: sep_tok
      split: train[80%:100%]
      args: sep_tok
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8963474162598889
---
# longformer-sep_tok
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2715
- Accuracy: 0.8963

| Label        | Precision | Recall | F1-score | Support |
|:-------------|----------:|-------:|---------:|--------:|
| Claim        | 0.6255    | 0.6543 | 0.6395   | 4168    |
| Majorclaim   | 0.8876    | 0.8806 | 0.8841   | 2152    |
| O            | 0.9999    | 1.0000 | 1.0000   | 11312   |
| Premise      | 0.8987    | 0.8856 | 0.8921   | 12073   |
| Macro avg    | 0.8529    | 0.8551 | 0.8539   | 29705   |
| Weighted avg | 0.8981    | 0.8963 | 0.8972   | 29705   |
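The macro and weighted averages follow directly from the per-class F1 scores and supports reported above; a quick sanity check in plain Python:

```python
# Per-class (f1-score, support) pairs from the evaluation results above.
per_class = {
    "Claim":      (0.6395403377110694, 4168),
    "Majorclaim": (0.8840681128994634, 2152),
    "O":          (0.9999558011049724, 11312),
    "Premise":    (0.8921151439299123, 12073),
}

total = sum(support for _, support in per_class.values())  # 29705 tokens
macro_f1 = sum(f1 for f1, _ in per_class.values()) / len(per_class)
weighted_f1 = sum(f1 * support for f1, support in per_class.values()) / total

print(f"macro f1: {macro_f1:.4f}, weighted f1: {weighted_f1:.4f}")
# macro f1: 0.8539, weighted f1: 0.8972
```

The macro average treats all four labels equally, while the weighted average is dominated by the large `O` and `Premise` classes, which is why it sits well above the `Claim` F1.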
## Model description
A Longformer-based token classifier for argument mining: every token of an input essay is labelled as part of a `Claim`, `Majorclaim`, or `Premise` span, or as `O` (outside any argument component). The long attention window of the base model (up to 4096 tokens) allows whole essays to be processed in a single pass.
## Intended uses & limitations
The model is intended for segmenting persuasive essays into argument components (claims, major claims, and premises). It has only been evaluated on held-out essays from the same corpus it was trained on, so performance on other domains, genres, or languages is untested.
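A minimal usage sketch with the `transformers` token-classification pipeline, assuming the fine-tuned weights are available under a hub id or local path such as `longformer-sep_tok` (hypothetical; substitute the actual repository path):

```python
from transformers import pipeline

# "longformer-sep_tok" is a placeholder; point this at the actual
# repository id or local checkpoint directory of the fine-tuned model.
tagger = pipeline(
    "token-classification",
    model="longformer-sep_tok",
    aggregation_strategy="simple",  # merge sub-word tokens into labelled spans
)

text = "School uniforms should be mandatory, because they reduce peer pressure."
for span in tagger(text):
    print(span["entity_group"], repr(span["word"]), round(span["score"], 3))
```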
## Training and evaluation data
Fine-tuning uses the `sep_tok` configuration of the essays_su_g dataset; the results above are reported on its `train[80%:100%]` split (29,705 labelled tokens).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
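The hyperparameters above can be expressed as a `transformers` `TrainingArguments` object; a hedged reconstruction (the original training script and output directory are not part of this card):

```python
from transformers import TrainingArguments

# Reconstruction of the recorded hyperparameters; output_dir and the
# per-epoch evaluation cadence are assumptions, not taken from the card.
training_args = TrainingArguments(
    output_dir="longformer-sep_tok",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=7,
    lr_scheduler_type="linear",
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
    evaluation_strategy="epoch",
)
```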
### Training results
| Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
|:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
| No log | 1.0 | 41 | 0.3907 | {'precision': 0.4622047244094488, 'recall': 0.28166986564299423, 'f1-score': 0.3500298151460942, 'support': 4168.0} | {'precision': 0.6884593519044911, 'recall': 0.5627323420074349, 'f1-score': 0.6192789567885452, 'support': 2152.0} | {'precision': 0.9988159213043082, 'recall': 0.9694130127298444, 'f1-score': 0.9838948454533218, 'support': 11312.0} | {'precision': 0.8006515561100714, 'recall': 0.9567630249316657, 'f1-score': 0.8717735849056604, 'support': 12073.0} | 0.8383 | {'precision': 0.7375328884320799, 'recall': 0.6926445613279848, 'f1-score': 0.7062443005734054, 'support': 29705.0} | {'precision': 0.8204984263709232, 'recall': 0.838310048813331, 'f1-score': 0.8229710003996594, 'support': 29705.0} |
| No log | 2.0 | 82 | 0.2736 | {'precision': 0.6199246546672248, 'recall': 0.3553262955854127, 'f1-score': 0.45173097453103556, 'support': 4168.0} | {'precision': 0.7746005046257359, 'recall': 0.8559479553903345, 'f1-score': 0.8132450331125828, 'support': 2152.0} | {'precision': 0.9999114103472715, 'recall': 0.9977899575671852, 'f1-score': 0.9988495575221238, 'support': 11312.0} | {'precision': 0.8387545787545787, 'recall': 0.9483144206079682, 'f1-score': 0.8901761069859658, 'support': 12073.0} | 0.8773 | {'precision': 0.8082977870987028, 'recall': 0.7893446572877252, 'f1-score': 0.7885004180379269, 'support': 29705.0} | {'precision': 0.8647725349186985, 'recall': 0.87725972058576, 'f1-score': 0.8644672730999988, 'support': 29705.0} |
| No log | 3.0 | 123 | 0.2407 | {'precision': 0.6129186602870813, 'recall': 0.6146833013435701, 'f1-score': 0.6137997125059894, 'support': 4168.0} | {'precision': 0.7996618765849535, 'recall': 0.879182156133829, 'f1-score': 0.8375387339530764, 'support': 2152.0} | {'precision': 0.9999115904871364, 'recall': 0.9998231966053748, 'f1-score': 0.9998673915926268, 'support': 11312.0} | {'precision': 0.9024307900067522, 'recall': 0.8856125238134681, 'f1-score': 0.8939425609297271, 'support': 12073.0} | 0.8906 | {'precision': 0.8287307293414808, 'recall': 0.8448252944740605, 'f1-score': 0.836287099745355, 'support': 29705.0} | {'precision': 0.8914850757054159, 'recall': 0.890624473994277, 'f1-score': 0.8908860134318254, 'support': 29705.0} |
| No log | 4.0 | 164 | 0.2498 | {'precision': 0.6335050149091895, 'recall': 0.560700575815739, 'f1-score': 0.5948835433371515, 'support': 4168.0} | {'precision': 0.8946840521564694, 'recall': 0.828996282527881, 'f1-score': 0.8605885190545105, 'support': 2152.0} | {'precision': 0.9999116061168567, 'recall': 1.0, 'f1-score': 0.9999558011049724, 'support': 11312.0} | {'precision': 0.872137855063341, 'recall': 0.9180816698417957, 'f1-score': 0.894520216286014, 'support': 12073.0} | 0.8927 | {'precision': 0.8500596320614642, 'recall': 0.8269446320463539, 'f1-score': 0.8374870199456621, 'support': 29705.0} | {'precision': 0.8889456116800479, 'recall': 0.8926780003366437, 'f1-score': 0.890170129437975, 'support': 29705.0} |
| No log | 5.0 | 205 | 0.2543 | {'precision': 0.6193029490616622, 'recall': 0.6650671785028791, 'f1-score': 0.6413697362332255, 'support': 4168.0} | {'precision': 0.8613728129205922, 'recall': 0.8921933085501859, 'f1-score': 0.8765122118237845, 'support': 2152.0} | {'precision': 0.9999115592111082, 'recall': 0.9994695898161244, 'f1-score': 0.9996905256642645, 'support': 11312.0} | {'precision': 0.9060976652698195, 'recall': 0.8775780667605401, 'f1-score': 0.8916098628292518, 'support': 12073.0} | 0.8952 | {'precision': 0.8466712466157955, 'recall': 0.8585770359074323, 'f1-score': 0.8522955841376315, 'support': 29705.0} | {'precision': 0.8983418837129342, 'recall': 0.8952364921730348, 'f1-score': 0.8965624790680554, 'support': 29705.0} |
| No log | 6.0 | 246 | 0.2768 | {'precision': 0.6175036567528035, 'recall': 0.607725527831094, 'f1-score': 0.6125755743651752, 'support': 4168.0} | {'precision': 0.9085303186022611, 'recall': 0.8215613382899628, 'f1-score': 0.8628599316739872, 'support': 2152.0} | {'precision': 0.9999116061168567, 'recall': 1.0, 'f1-score': 0.9999558011049724, 'support': 11312.0} | {'precision': 0.8816429034348672, 'recall': 0.9014329495568624, 'f1-score': 0.8914281033706025, 'support': 12073.0} | 0.8920 | {'precision': 0.8518971212266971, 'recall': 0.8326799539194798, 'f1-score': 0.8417048526286843, 'support': 29705.0} | {'precision': 0.8915666503464329, 'recall': 0.8919710486450093, 'f1-score': 0.8915603797680256, 'support': 29705.0} |
| No log | 7.0 | 287 | 0.2715 | {'precision': 0.6254587155963303, 'recall': 0.6542706333973128, 'f1-score': 0.6395403377110694, 'support': 4168.0} | {'precision': 0.8875878220140515, 'recall': 0.8805762081784386, 'f1-score': 0.8840681128994634, 'support': 2152.0} | {'precision': 0.9999116061168567, 'recall': 1.0, 'f1-score': 0.9999558011049724, 'support': 11312.0} | {'precision': 0.8987139615028998, 'recall': 0.8856125238134681, 'f1-score': 0.8921151439299123, 'support': 12073.0} | 0.8963 | {'precision': 0.8529180263075345, 'recall': 0.8551148413473049, 'f1-score': 0.8539198489113544, 'support': 29705.0} | {'precision': 0.8981038432990452, 'recall': 0.8963474162598889, 'f1-score': 0.8971595644270212, 'support': 29705.0} |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2