---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
  - generated_from_trainer
datasets:
  - essays_su_g
metrics:
  - accuracy
model-index:
  - name: longformer-sep_tok
    results:
      - task:
          name: Token Classification
          type: token-classification
        dataset:
          name: essays_su_g
          type: essays_su_g
          config: sep_tok
          split: train[80%:100%]
          args: sep_tok
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.9037199124726477
---

# longformer-sep_tok

This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset. It achieves the following results on the evaluation set:

- Loss: 0.4627
- Accuracy: 0.9037

| Label | Precision | Recall | F1-score | Support |
|:---|:---|:---|:---|:---|
| Claim | 0.6641901931649331 | 0.6434740882917467 | 0.6536680477699245 | 4168 |
| Majorclaim | 0.9209900047596382 | 0.8991635687732342 | 0.909945920526687 | 2152 |
| O | 1.0 | 0.9999115983026874 | 0.9999557971975424 | 11312 |
| Premise | 0.8908200734394125 | 0.9042491509980949 | 0.8974843801381124 | 12073 |
| Macro avg | 0.8690000678409959 | 0.8616996015914409 | 0.8652635364080665 | 29705 |
| Weighted avg | 0.9027835705096182 | 0.9037199124726477 | 0.9031988198412557 | 29705 |
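A minimal inference sketch is shown below. The Hub repo id is inferred from this card's title and author and may differ; the `merge_spans` helper is a hypothetical post-processing step (not part of this repo) that groups consecutive tokens sharing a label into argument-component spans.

```python
# Hedged usage sketch: MODEL_ID is inferred from this card and may differ.
MODEL_ID = "Theoreticallyhugo/longformer-sep_tok"


def merge_spans(token_labels):
    """Collapse consecutive (token, label) pairs that share a label into
    (label, text) spans -- turns per-token predictions into contiguous
    Claim / MajorClaim / Premise / O segments."""
    spans = []
    for token, label in token_labels:
        if spans and spans[-1][0] == label:
            spans[-1] = (label, spans[-1][1] + " " + token)
        else:
            spans.append((label, token))
    return spans


if __name__ == "__main__":
    # Downloads the checkpoint on first run; requires `transformers` and `torch`.
    from transformers import pipeline

    tagger = pipeline("token-classification", model=MODEL_ID)
    preds = tagger("I believe school uniforms reduce bullying at school.")
    print(merge_spans([(p["word"], p["entity"]) for p in preds]))
```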

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
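The hyperparameters above correspond roughly to the following `TrainingArguments` setup. This is a hedged sketch, not the author's exact script: `output_dir` is a placeholder, and the Adam betas/epsilon listed above are already the `Trainer` optimizer defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; output_dir is an assumption.
training_args = TrainingArguments(
    output_dir="longformer-sep_tok",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=16,
)
```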

### Training results

| Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| No log | 1.0 | 41 | 0.3480 | {'precision': 0.45800144822592326, 'recall': 0.30350287907869483, 'f1-score': 0.3650793650793651, 'support': 4168.0} | {'precision': 0.6792168674698795, 'recall': 0.6287174721189591, 'f1-score': 0.652992277992278, 'support': 2152.0} | {'precision': 0.9994626063591581, 'recall': 0.986474540311174, 'f1-score': 0.9929261022378432, 'support': 11312.0} | {'precision': 0.8190918322936313, 'recall': 0.9353101963058064, 'f1-score': 0.873351637727677, 'support': 12073.0} | 0.8439 | {'precision': 0.7389431885871479, 'recall': 0.7135012719536586, 'f1-score': 0.7210873457592908, 'support': 29705.0} | {'precision': 0.8269800178224755, 'recall': 0.8439319979801381, 'f1-score': 0.8316056073620907, 'support': 29705.0} |
| No log | 2.0 | 82 | 0.2758 | {'precision': 0.6302521008403361, 'recall': 0.32389635316698656, 'f1-score': 0.427892234548336, 'support': 4168.0} | {'precision': 0.7593291404612159, 'recall': 0.841542750929368, 'f1-score': 0.7983248842847697, 'support': 2152.0} | {'precision': 0.9997344192634561, 'recall': 0.9983203677510608, 'f1-score': 0.9990268931351733, 'support': 11312.0} | {'precision': 0.833381357153148, 'recall': 0.9582539551064359, 'f1-score': 0.8914659988441533, 'support': 12073.0} | 0.8760 | {'precision': 0.805674254429539, 'recall': 0.7805033567384627, 'f1-score': 0.779177502703108, 'support': 29705.0} | {'precision': 0.8628640276786139, 'recall': 0.876047803400101, 'f1-score': 0.8606332672536217, 'support': 29705.0} |
| No log | 3.0 | 123 | 0.2410 | {'precision': 0.620671283963772, 'recall': 0.559021113243762, 'f1-score': 0.5882352941176471, 'support': 4168.0} | {'precision': 0.8549924736578023, 'recall': 0.79182156133829, 'f1-score': 0.822195416164053, 'support': 2152.0} | {'precision': 0.9999115357395613, 'recall': 0.9992043847241867, 'f1-score': 0.999557835160948, 'support': 11312.0} | {'precision': 0.8765607712976135, 'recall': 0.918744305475027, 'f1-score': 0.897156953936992, 'support': 12073.0} | 0.8897 | {'precision': 0.8380340161646872, 'recall': 0.8171978411953165, 'f1-score': 0.8267863748449099, 'support': 29705.0} | {'precision': 0.8860669651248813, 'recall': 0.8897155361050328, 'f1-score': 0.8873759763571568, 'support': 29705.0} |
| No log | 4.0 | 164 | 0.2487 | {'precision': 0.6344057431534167, 'recall': 0.5724568138195777, 'f1-score': 0.6018413419094463, 'support': 4168.0} | {'precision': 0.8230162027420025, 'recall': 0.9205390334572491, 'f1-score': 0.8690502303136654, 'support': 2152.0} | {'precision': 0.9998231027772864, 'recall': 0.9992927864214993, 'f1-score': 0.9995578742594394, 'support': 11312.0} | {'precision': 0.8858637887335459, 'recall': 0.8974571357574753, 'f1-score': 0.8916227781435153, 'support': 12073.0} | 0.8923 | {'precision': 0.8357772093515627, 'recall': 0.8474364423639503, 'f1-score': 0.8405180561565165, 'support': 29705.0} | {'precision': 0.8894248936462209, 'recall': 0.8923076923076924, 'f1-score': 0.8904302737876794, 'support': 29705.0} |
| No log | 5.0 | 205 | 0.2594 | {'precision': 0.6126252038201724, 'recall': 0.6309980806142035, 'f1-score': 0.6216759248315801, 'support': 4168.0} | {'precision': 0.8722222222222222, 'recall': 0.8754646840148699, 'f1-score': 0.8738404452690167, 'support': 2152.0} | {'precision': 1.0, 'recall': 0.9992927864214993, 'f1-score': 0.9996462681287585, 'support': 11312.0} | {'precision': 0.8930364914630063, 'recall': 0.8837902758220824, 'f1-score': 0.8883893260064111, 'support': 12073.0} | 0.8917 | {'precision': 0.8444709793763502, 'recall': 0.8473864567181638, 'f1-score': 0.8458879910589416, 'support': 29705.0} | {'precision': 0.8929161297147813, 'recall': 0.8917017337148628, 'f1-score': 0.8922798455096741, 'support': 29705.0} |
| No log | 6.0 | 246 | 0.2812 | {'precision': 0.5880121396054628, 'recall': 0.7437619961612284, 'f1-score': 0.6567796610169492, 'support': 4168.0} | {'precision': 0.8901355773726041, 'recall': 0.8847583643122676, 'f1-score': 0.8874388254486133, 'support': 2152.0} | {'precision': 1.0, 'recall': 0.999557991513437, 'f1-score': 0.9997789469030461, 'support': 11312.0} | {'precision': 0.9225448257031037, 'recall': 0.8395593473039012, 'f1-score': 0.8790980052038161, 'support': 12073.0} | 0.8903 | {'precision': 0.8501731356702926, 'recall': 0.8669094248227085, 'f1-score': 0.8557738596431061, 'support': 29705.0} | {'precision': 0.9027534099005212, 'recall': 0.8903214946978623, 'f1-score': 0.8944647582453118, 'support': 29705.0} |
| No log | 7.0 | 287 | 0.3027 | {'precision': 0.6093205574912892, 'recall': 0.6713051823416507, 'f1-score': 0.6388127853881279, 'support': 4168.0} | {'precision': 0.905252822778596, 'recall': 0.8568773234200744, 'f1-score': 0.8804010503700167, 'support': 2152.0} | {'precision': 1.0, 'recall': 0.9998231966053748, 'f1-score': 0.9999115904871364, 'support': 11312.0} | {'precision': 0.8979262281149074, 'recall': 0.8750931831359231, 'f1-score': 0.886362682998448, 'support': 12073.0} | 0.8927 | {'precision': 0.8531249020961982, 'recall': 0.8507747213757557, 'f1-score': 0.8513720273109323, 'support': 29705.0} | {'precision': 0.8968327052777144, 'recall': 0.8926780003366437, 'f1-score': 0.894437008359695, 'support': 29705.0} |
| No log | 8.0 | 328 | 0.3308 | {'precision': 0.6094457623463446, 'recall': 0.6780230326295585, 'f1-score': 0.64190800681431, 'support': 4168.0} | {'precision': 0.8877551020408163, 'recall': 0.8489776951672863, 'f1-score': 0.8679334916864608, 'support': 2152.0} | {'precision': 1.0, 'recall': 0.9993811881188119, 'f1-score': 0.9996904982977407, 'support': 11312.0} | {'precision': 0.9026911576249466, 'recall': 0.875176012590077, 'f1-score': 0.8887206661619985, 'support': 12073.0} | 0.8929 | {'precision': 0.8499730055030268, 'recall': 0.8503894821264335, 'f1-score': 0.8495631657401276, 'support': 29705.0} | {'precision': 0.8975192480409824, 'recall': 0.8929136509005218, 'f1-score': 0.8948422476293271, 'support': 29705.0} |
| No log | 9.0 | 369 | 0.3408 | {'precision': 0.651685393258427, 'recall': 0.6261996161228407, 'f1-score': 0.63868836412578, 'support': 4168.0} | {'precision': 0.9157330735509012, 'recall': 0.8736059479553904, 'f1-score': 0.8941736028537456, 'support': 2152.0} | {'precision': 1.0, 'recall': 0.9993811881188119, 'f1-score': 0.9996904982977407, 'support': 11312.0} | {'precision': 0.8864851725814292, 'recall': 0.9062370578977884, 'f1-score': 0.8962523039115298, 'support': 12073.0} | 0.9001 | {'precision': 0.8634759098476893, 'recall': 0.8513559525237079, 'f1-score': 0.857201192297199, 'support': 29705.0} | {'precision': 0.8988863080948749, 'recall': 0.9000504965494025, 'f1-score': 0.8993525560304815, 'support': 29705.0} |
| No log | 10.0 | 410 | 0.4050 | {'precision': 0.6122782446311859, 'recall': 0.6293186180422264, 'f1-score': 0.6206814955040227, 'support': 4168.0} | {'precision': 0.8178170144462279, 'recall': 0.9470260223048327, 'f1-score': 0.8776916451335055, 'support': 2152.0} | {'precision': 0.9999116061168567, 'recall': 1.0, 'f1-score': 0.9999558011049724, 'support': 11312.0} | {'precision': 0.9038395316804407, 'recall': 0.869626439161766, 'f1-score': 0.8864029718434716, 'support': 12073.0} | 0.8912 | {'precision': 0.8334615992186778, 'recall': 0.8614927698772062, 'f1-score': 0.8461829783964931, 'support': 29705.0} | {'precision': 0.8932830396594146, 'recall': 0.89116310385457, 'f1-score': 0.8917298769484514, 'support': 29705.0} |
| No log | 11.0 | 451 | 0.4124 | {'precision': 0.5987719669701461, 'recall': 0.6785028790786948, 'f1-score': 0.6361489146327746, 'support': 4168.0} | {'precision': 0.9018375241779497, 'recall': 0.866635687732342, 'f1-score': 0.8838862559241706, 'support': 2152.0} | {'precision': 1.0, 'recall': 1.0, 'f1-score': 1.0, 'support': 11312.0} | {'precision': 0.90087915876573, 'recall': 0.8657334548165327, 'f1-score': 0.8829567053854277, 'support': 12073.0} | 0.8907 | {'precision': 0.8503721624784565, 'recall': 0.8527180054068924, 'f1-score': 0.8507479689855932, 'support': 29705.0} | {'precision': 0.8963053356048198, 'recall': 0.8906581383605454, 'f1-score': 0.8929650968879477, 'support': 29705.0} |
| No log | 12.0 | 492 | 0.4421 | {'precision': 0.6705202312138728, 'recall': 0.5566218809980806, 'f1-score': 0.6082852648138437, 'support': 4168.0} | {'precision': 0.9058880308880309, 'recall': 0.8722118959107806, 'f1-score': 0.8887310606060606, 'support': 2152.0} | {'precision': 0.9999115513886432, 'recall': 0.9993811881188119, 'f1-score': 0.9996462994075515, 'support': 11312.0} | {'precision': 0.8699774617237895, 'recall': 0.9271929097987244, 'f1-score': 0.8976744186046511, 'support': 12073.0} | 0.8987 | {'precision': 0.861574318803584, 'recall': 0.8388519687065994, 'f1-score': 0.8485842608580267, 'support': 29705.0} | {'precision': 0.8940729416216161, 'recall': 0.8987039218986702, 'f1-score': 0.8952534731823099, 'support': 29705.0} |
| 0.1687 | 13.0 | 533 | 0.4406 | {'precision': 0.6625574087503021, 'recall': 0.6576295585412668, 'f1-score': 0.6600842865743528, 'support': 4168.0} | {'precision': 0.8832599118942731, 'recall': 0.9316914498141264, 'f1-score': 0.9068294889190412, 'support': 2152.0} | {'precision': 0.9999115826702034, 'recall': 0.9997347949080623, 'f1-score': 0.9998231809742728, 'support': 11312.0} | {'precision': 0.9014848181514848, 'recall': 0.8951379110411662, 'f1-score': 0.8983001537758197, 'support': 12073.0} | 0.9043 | {'precision': 0.8618034303665659, 'recall': 0.8710484285761554, 'f1-score': 0.8662592775608715, 'support': 29705.0} | {'precision': 0.9041218866445364, 'recall': 0.9042922066992088, 'f1-score': 0.9041543829763381, 'support': 29705.0} |
| 0.1687 | 14.0 | 574 | 0.4457 | {'precision': 0.6631526104417671, 'recall': 0.6338771593090211, 'f1-score': 0.6481844946025515, 'support': 4168.0} | {'precision': 0.9062062529164723, 'recall': 0.9024163568773235, 'f1-score': 0.9043073341094298, 'support': 2152.0} | {'precision': 0.9999115748518879, 'recall': 0.9996463932107497, 'f1-score': 0.9997789664471067, 'support': 11312.0} | {'precision': 0.8905371260901459, 'recall': 0.90499461608548, 'f1-score': 0.897707665762879, 'support': 12073.0} | 0.9028 | {'precision': 0.8649518910750683, 'recall': 0.8602336313706436, 'f1-score': 0.8624946152304918, 'support': 29705.0} | {'precision': 0.9014182930351262, 'recall': 0.9028109745834034, 'f1-score': 0.902044324986091, 'support': 29705.0} |
| 0.1687 | 15.0 | 615 | 0.4688 | {'precision': 0.6694429984383133, 'recall': 0.6170825335892515, 'f1-score': 0.6421972534332085, 'support': 4168.0} | {'precision': 0.925692083535697, 'recall': 0.8856877323420075, 'f1-score': 0.905248159582047, 'support': 2152.0} | {'precision': 0.9999115826702034, 'recall': 0.9997347949080623, 'f1-score': 0.9998231809742728, 'support': 11312.0} | {'precision': 0.882983832239475, 'recall': 0.9137745382257931, 'f1-score': 0.8981153580005699, 'support': 12073.0} | 0.9028 | {'precision': 0.8695076242209221, 'recall': 0.8540698997662786, 'f1-score': 0.8613459879975246, 'support': 29705.0} | {'precision': 0.9006427002542411, 'recall': 0.9028446389496718, 'f1-score': 0.9014549312254513, 'support': 29705.0} |
| 0.1687 | 16.0 | 656 | 0.4627 | {'precision': 0.6641901931649331, 'recall': 0.6434740882917467, 'f1-score': 0.6536680477699245, 'support': 4168.0} | {'precision': 0.9209900047596382, 'recall': 0.8991635687732342, 'f1-score': 0.909945920526687, 'support': 2152.0} | {'precision': 1.0, 'recall': 0.9999115983026874, 'f1-score': 0.9999557971975424, 'support': 11312.0} | {'precision': 0.8908200734394125, 'recall': 0.9042491509980949, 'f1-score': 0.8974843801381124, 'support': 12073.0} | 0.9037 | {'precision': 0.8690000678409959, 'recall': 0.8616996015914409, 'f1-score': 0.8652635364080665, 'support': 29705.0} | {'precision': 0.9027835705096182, 'recall': 0.9037199124726477, 'f1-score': 0.9031988198412557, 'support': 29705.0} |

### Framework versions

- Transformers 4.37.2
- PyTorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
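The pinned versions above can be installed with pip. Note this is a sketch: the `+cu121` PyTorch build is selected per platform by pip's extra index, so the plain `torch==2.2.0` wheel shown here is an assumption and may resolve to a CPU-only build.

```shell
# Install the framework versions reported on this card (hedged: torch wheel
# variant depends on your platform and CUDA setup).
pip install "transformers==4.37.2" "datasets==2.17.0" "tokenizers==0.15.2" "torch==2.2.0"
```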