Pretrained models for our paper:

```bibtex
@inproceedings{wu-etal-2022-continued,
    title = "Continued Pretraining for Better Zero- and Few-Shot Promptability",
    author = "Wu, Zhaofeng and Logan IV, Robert L. and Walsh, Pete and Bhagia, Akshita and Groeneveld, Dirk and Singh, Sameer and Beltagy, Iz",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    publisher = "Association for Computational Linguistics",
}
```

Please see the "Files and versions" tab for the models. We release our MTL models (notated MTL-T🔥P🔥 in our paper) and the meta-learned models, with different sizes and prompt configurations.