---
library_name: transformers
tags:
- natural-language-inference
- nli
license: mit
datasets:
- nyu-mll/multi_nli
language:
- en
base_model: microsoft/deberta-v3-base
---
# DeBERTa-v3-base - Natural Language Inference
## Model overview
This model is trained for natural language inference (NLI). It takes two sentences as input, a premise and a hypothesis, and predicts the relationship between them with one of three labels: "entailment," "neutral," or "contradiction." It is [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) fine-tuned on the [nyu-mll/multi_nli](https://huggingface.co/datasets/nyu-mll/multi_nli) dataset, and it returns a score for each of the three labels.
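For a quick check, the model can also be loaded through the `transformers` `text-classification` pipeline, which accepts a premise/hypothesis pair as a dict with `text` and `text_pair` keys (a minimal sketch):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification pipeline.
classifier = pipeline("text-classification", model="chincyk/deberta-v3-base-nli")

# top_k=None returns the scores for all three labels instead of only the best one.
scores = classifier(
    {
        "text": "The flight arrived on time at the airport.",
        "text_pair": "The flight was delayed by several hours.",
    },
    top_k=None,
)
print(scores)
```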
## Results
After fine-tuning on the dataset, the model achieved the following results:
- Loss: 0.276
- Accuracy: 0.899
- F1-Score: 0.899
These metrics were evaluated on the `validation_mismatched` split of the dataset.
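A sketch of how these numbers could be reproduced with the `datasets` and `evaluate` libraries; it assumes the model's label ids follow MultiNLI's 0=entailment / 1=neutral / 2=contradiction ordering and that the reported F1 is macro-averaged:
```python
import torch
import evaluate
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "chincyk/deberta-v3-base-nli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

dataset = load_dataset("nyu-mll/multi_nli", split="validation_mismatched")

preds, refs = [], []
for start in range(0, len(dataset), 32):
    batch = dataset[start : start + 32]  # dict of lists
    inputs = tokenizer(
        batch["premise"], batch["hypothesis"],
        padding=True, truncation=True, return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(**inputs).logits
    preds.extend(logits.argmax(dim=-1).tolist())
    refs.extend(batch["label"])

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")
print(accuracy.compute(predictions=preds, references=refs))
print(f1.compute(predictions=preds, references=refs, average="macro"))
```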
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "chincyk/deberta-v3-base-nli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "The flight arrived on time at the airport."
hypothesis = "The flight was delayed by several hours."

# Encode the sentence pair; the tokenizer joins the premise and hypothesis
# with the model's separator token.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits

# Convert logits to probabilities over the three NLI labels.
probs = torch.softmax(logits, dim=-1).squeeze()

id2label = model.config.id2label
for i, prob in enumerate(probs):
    print(f"{id2label[i]}: {prob.item():.4f}")
```
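If only the top label is needed, the argmax of the logits gives it directly; for this premise/hypothesis pair a "contradiction" prediction would be expected:
```python
# Pick the highest-scoring label for the pair above.
predicted = id2label[logits.argmax(dim=-1).item()]
print(predicted)  # "contradiction" would be expected here
```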