---
tags:
- generated_from_keras_callback
- AVeriTeC
model-index:
- name: deberta-v3-large-AVeriTeC-nli
results:
- task:
type: text-classification
dataset:
name: chenxwh/AVeriTeC
type: chenxwh/AVeriTeC
metrics:
- name: dev macro F1 score
type: macro F1 score
value: 0.71
- name: dev macro recall
type: macro recall
value: 0.73
- name: dev macro precision
type: macro precision
value: 0.71
- name: dev accuracy
type: accuracy
value: 0.82
license: mit
language:
- en
library_name: transformers
pipeline_tag: text-classification
base_model: microsoft/deberta-v3-large
datasets:
- chenxwh/AVeriTeC
---
# deberta-v3-large-AVeriTeC-nli
This model was fine-tuned from microsoft/deberta-v3-large on the AVeriTeC dataset.
It achieves the following results on the evaluation (dev) set:
- Macro F1 score: 0.71
- Macro recall: 0.73
- Macro precision: 0.71
- Accuracy: 0.82
## Intended uses & limitations
This model is intended for use as the natural language inference (NLI) component of an open-domain fact-checking pipeline.
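A minimal usage sketch with the `transformers` text-classification pipeline. The repo id below is a placeholder for this model's actual Hub path, and the claim/evidence input pairing and label names are assumptions, not confirmed by this card:

```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual Hub path of this model.
nli = pipeline("text-classification", model="deberta-v3-large-AVeriTeC-nli")

# NLI-style input: pairing the claim with retrieved evidence is an
# assumed usage; check the training setup for the exact input format.
result = nli({
    "text": "The Eiffel Tower is located in Paris.",
    "text_pair": "The Eiffel Tower is a wrought-iron tower in Paris, France.",
})
print(result)
```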
## Training and evaluation data
See the chenxwh/AVeriTeC dataset on the Hugging Face Hub.
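A minimal sketch of loading the data with the `datasets` library, assuming the repo's data files are directly loadable from the Hub; consult the dataset card for the exact configs, splits, and field names:

```python
from datasets import load_dataset

# Assumes the Hub repo loads without a custom config; split and
# column names should be checked against the dataset card.
dataset = load_dataset("chenxwh/AVeriTeC")
print(dataset)
```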
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: adamw_torch
- training_precision: float16
- learning_rate: 1e-5
- per_device_train_batch_size: 32
- num_train_epochs: 10
- weight_decay: 0.01
- load_best_model_at_end: True (used together with early stopping)
- warmup_ratio: 0.06
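A minimal sketch of the corresponding `transformers` `TrainingArguments`, reconstructed from the list above; `output_dir` and the evaluation/save strategies are assumptions (the latter are required for `load_best_model_at_end`):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deberta-v3-large-AVeriTeC-nli",  # assumed
    optim="adamw_torch",
    fp16=True,                        # training_precision: float16
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    num_train_epochs=10,
    weight_decay=0.01,
    load_best_model_at_end=True,      # paired with early stopping
    warmup_ratio=0.06,
    eval_strategy="epoch",            # assumed; needed for load_best_model_at_end
    save_strategy="epoch",            # assumed; must match eval_strategy
)
```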
### Training results
### Framework versions
- Transformers 4.43.0
- TensorFlow 2.17.0
- Datasets 2.20.0
- Tokenizers 0.19.1