Adapter bert-base-uncased_nli_rte_pfeiffer for bert-base-uncased
An adapter in the Pfeiffer architecture, trained on the RTE task for 20 epochs with early stopping and a learning rate of 1e-4. See https://arxiv.org/pdf/2007.07779.pdf for details.
This adapter was created for use with the Adapters library.
Usage
First, install the adapters library:
pip install -U adapters
Now, the adapter can be loaded and activated like this:
from adapters import AutoAdapterModel

# Load the base model with adapter support
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
# Download the adapter (and its prediction head) from the Hub, then activate it
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased_nli_rte_pfeiffer")
model.set_active_adapters(adapter_name)
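With the adapter active, the model can be used like any BERT sequence classifier. The snippet below is a minimal inference sketch that assumes the standard transformers tokenizer API; the example sentence pair and the mapping of output indices to entailment/not_entailment labels are illustrative assumptions, not taken from this card:

from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# RTE pairs a premise with a hypothesis (this pair is illustrative)
inputs = tokenizer(
    "A man is playing a guitar on stage.",
    "A man is performing music.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
# Index-to-label mapping (entailment vs. not_entailment) is an assumption;
# check the loaded head's config to confirm the label order
print(logits.argmax(dim=-1).item())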
Architecture & Training
- Adapter architecture: pfeiffer
- Prediction head: classification
- Dataset: RTE
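As a rough illustration of this setup, an adapter with the same configuration could be prepared for training as sketched below. This is a hedged sketch, not the authors' training script; the adapter name "rte" is an assumption, and SeqBnConfig is the Adapters library's name for the Pfeiffer bottleneck architecture:

from adapters import AutoAdapterModel, SeqBnConfig

model = AutoAdapterModel.from_pretrained("bert-base-uncased")
# SeqBnConfig corresponds to the Pfeiffer adapter architecture
model.add_adapter("rte", config=SeqBnConfig())
# RTE is a binary classification task (entailment / not_entailment)
model.add_classification_head("rte", num_labels=2)
# Freeze the base model; only the adapter and head receive gradients
model.train_adapter("rte")
# Training would then proceed with e.g. adapters.AdapterTrainer,
# using lr=1e-4 and up to 20 epochs with early stopping, as stated above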
Author Information
- Author name(s): Clifton Poth
- Author email: [email protected]
- Author links: Website, GitHub, Twitter
Citation
@article{pfeiffer2020AdapterHub,
title={AdapterHub: A Framework for Adapting Transformers},
author={Jonas Pfeiffer and
Andreas R\"uckl\'{e} and
Clifton Poth and
Aishwarya Kamath and
Ivan Vuli\'{c} and
Sebastian Ruder and
Kyunghyun Cho and
Iryna Gurevych},
journal={arXiv preprint},
year={2020},
url={https://arxiv.org/abs/2007.07779}
}
This adapter has been auto-imported from https://github.com/Adapter-Hub/Hub/blob/master/adapters/ukp/bert-base-uncased_nli_rte_pfeiffer.yaml.