
DistilRoBERTa for NLI

Model description

This model can be used for Natural Language Inference (NLI) tasks. It is a version of distilroberta-base fine-tuned on multi_nli and the English split of xnli.

Model Performance

The model's performance on NLI tasks is as follows:

  • Accuracy on MNLI validation matched: TODO
  • Accuracy on MNLI validation mismatched: TODO
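The model can be loaded with the standard transformers sequence-classification API. A minimal sketch of premise/hypothesis scoring is shown below; the example premise and hypothesis are illustrative, and the label names are read from the model config rather than hard-coded (MNLI-style heads typically map to entailment / neutral / contradiction).

```python
# Sketch: pairwise NLI inference with Hugging Face transformers.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "matekadlicsko/distilroberta-nli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."

# NLI models take the premise and hypothesis as a sentence pair.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]

# Label names come from the model's own config.
label = model.config.id2label[int(probs.argmax())]
print(label, probs.tolist())
```

The same pair can also be scored through `pipeline("text-classification")` by passing the premise and hypothesis joined with the tokenizer's separator token.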
Model size: 82.1M parameters (F32, safetensors).
