Zhengping/roberta-large-unli is an Uncertain Natural Language Inference (UNLI) model fine-tuned from ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli on the UNLI dataset. Rather than predicting a categorical NLI label, it performs scalar regression: given a premise and a hypothesis, it outputs a subjective probability that the hypothesis is true. If you find this model useful, please cite the paper:

@inproceedings{chen-etal-2020-uncertain,
    title = "Uncertain Natural Language Inference",
    author = "Chen, Tongfei  and
      Jiang, Zhengping  and
      Poliak, Adam  and
      Sakaguchi, Keisuke  and
      Van Durme, Benjamin",
    editor = "Jurafsky, Dan  and
      Chai, Joyce  and
      Schluter, Natalie  and
      Tetreault, Joel",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.acl-main.774",
    doi = "10.18653/v1/2020.acl-main.774",
    pages = "8772--8779",
    abstract = "We introduce Uncertain Natural Language Inference (UNLI), a refinement of Natural Language Inference (NLI) that shifts away from categorical labels, targeting instead the direct prediction of subjective probability assessments. We demonstrate the feasibility of collecting annotations for UNLI by relabeling a portion of the SNLI dataset under a probabilistic scale, where items even with the same categorical label differ in how likely people judge them to be true given a premise. We describe a direct scalar regression modeling approach, and find that existing categorically-labeled NLI data can be used in pre-training. Our best models correlate well with humans, demonstrating models are capable of more subtle inferences than the categorical bin assignment employed in current NLI tasks.",
}
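
Below is a minimal usage sketch. It assumes the checkpoint loads as a single-output sequence-classification head (num_labels=1) and that a sigmoid maps the raw regression logit to a probability in [0, 1]; the example premise/hypothesis pair and variable names are illustrative, not taken from the UNLI data.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Zhengping/roberta-large-unli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Illustrative premise/hypothesis pair (not from the UNLI dataset).
premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 1) for a single-output regression head

# Assumption: a sigmoid converts the raw logit into the UNLI subjective probability.
score = torch.sigmoid(logits).item()
print(f"P(hypothesis is true | premise) ~= {score:.3f}")
```

Higher scores indicate that annotators would judge the hypothesis more likely to be true given the premise, in line with the probabilistic scale described in the paper.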