Pretrained models for our paper "Continued Pretraining for Better Zero- and Few-Shot Promptability" (https://arxiv.org/abs/2210.10258). If you use these models, please cite:
```bibtex
@inproceedings{wu-etal-2022-continued,
    title = "Continued Pretraining for Better Zero- and Few-Shot Promptability",
    author = "Zhaofeng Wu and Robert L. Logan IV and Pete Walsh and Akshita Bhagia and Dirk Groeneveld and Sameer Singh and Iz Beltagy",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    publisher = "Association for Computational Linguistics",
}
```

Please see the "Files and versions" tab for the models. We release our MTL models (denoted MTL-T🔥P🔥 in the paper) and the meta-learned models, in different sizes and prompt configurations.
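
The checkpoints can be fetched programmatically from this repository rather than downloaded by hand. Below is a minimal sketch using `huggingface_hub`; the repository ID and filename are placeholders (assumptions, not the actual names), so substitute the values shown in the "Files and versions" tab, and it assumes the file is a PyTorch checkpoint.

```python
# Sketch: download and inspect one of the released checkpoints.
# NOTE: repo_id and filename below are hypothetical placeholders --
# replace them with the actual entries from the "Files and versions" tab.
from huggingface_hub import hf_hub_download
import torch

checkpoint_path = hf_hub_download(
    repo_id="allenai/continued-pretraining-promptability",  # placeholder repo ID
    filename="mtl_base.pt",                                 # placeholder filename
)

# Assuming the file is a PyTorch checkpoint, load it onto CPU for inspection.
state_dict = torch.load(checkpoint_path, map_location="cpu")
print(list(state_dict.keys())[:5])
```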