The following model is a PyTorch pre-trained model obtained by converting the TensorFlow checkpoint found in the official Google BERT repository. These BERT variants were introduced in the paper Well-Read Students Learn Better: On the Importance of Pre-training Compact Models. This model is fine-tuned on MNLI.

If you use the model, please consider citing the paper:
```bibtex
@misc{bhargava2021generalization,
  title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
  author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
  year={2021},
  eprint={2110.01518},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
The original implementation and more information can be found in this GitHub repository.
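A minimal usage sketch with the `transformers` library. The repository id below is a hypothetical placeholder (substitute this card's actual model name), and the tokenizer is assumed to be the standard uncased BERT vocabulary that the compact BERT variants share:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical repo id -- substitute this card's actual model name.
model_id = "prajjwal1/bert-medium-mnli"

# Assumption: these compact BERTs use the standard uncased BERT vocabulary.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# NLI takes a (premise, hypothesis) sentence pair as input.
premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 3): MNLI's three classes
probs = torch.softmax(logits, dim=-1)
```

The model returns one logit per MNLI class; softmax converts them to class probabilities.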
- MNLI: 75.86%
- MNLI-mm: 77.03%
These models are fine-tuned for 4 epochs.
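MNLI is a three-way classification task, so the classifier head emits three logits. A minimal, library-free sketch of turning logits into a predicted label; the label order here is an assumption (check the model's `config.json` `id2label` mapping), and the logits are made up for illustration:

```python
import math

# Assumed label order -- verify against the model's id2label mapping.
LABELS = ["entailment", "neutral", "contradiction"]

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.1, 0.3, -1.2]  # made-up logits for illustration
probs = softmax(logits)
pred = LABELS[max(range(len(probs)), key=probs.__getitem__)]
```

With these example logits the largest score is the first one, so `pred` resolves to the first label in the assumed ordering.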