**Official repository**: [seonghyeonye/Flipped-Learning](https://github.com/seonghyeonye/Flipped-Learning)
# Model Description
FLIPPED uses a unique meta-learning method to achieve zero-shot task generalization on classification natural language prompts, outperforming GPT-3 and T0-11B on many tasks while being 4x smaller in scale.
It is a series of encoder-decoder models trained on numerous classification datasets. We show FLIPPED the input and the corresponding output of each instance in each dataset, and train it to generate a plausible instruction. We add an unlikelihood loss so that the model does **not** generate the instruction when given the same input paired with a wrong output. To obtain FLIPPED, we fine-tune a T5 model at a given scale on a multitask mixture covering many different classification NLP tasks.
# Intended uses
You can use the models to perform inference on tasks by specifying your input-output NLP query in the form "input: {input}\noutput: {output}", and the model will predict the instruction. For example, you can try
*"input: <extra_id_0> this is the best cast iron skillet you will ever buy<extra_id_1>\noutput: Positive"*
as an input, and the model will hopefully generate *"Title: Review:"*.
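To build such a query from an arbitrary (input, output) pair, a helper along these lines is enough (a minimal sketch; `make_query` is a hypothetical name, not part of the released code):

```python
def make_query(input_text: str, output_text: str) -> str:
    # FLIPPED expects the pair in an "input: {input}\noutput: {output}" format,
    # with a literal newline between the two fields.
    return f"input: {input_text}\noutput: {output_text}"

query = make_query("this is the best cast iron skillet you will ever buy", "Positive")
```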
# How to use
An overall explanation of our models, along with ablations, can be found in our [paper](https://arxiv.org/abs/2210.02969). We recommend using the [FLIPPED-11B](https://huggingface.co/seonghyeonye/flipped_11B) checkpoint, as it leads (on average) to the best performance on a variety of NLP tasks.

|Model|Number of parameters|
|-|-|
|[Flipped_11B](https://huggingface.co/seonghyeonye/flipped_11B)|11 billion|
|[Flipped_3B](https://huggingface.co/seonghyeonye/flipped_3B)|3 billion|

Here is how to use the model in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the FLIPPED checkpoint and its tokenizer.
tokenizer = AutoTokenizer.from_pretrained("seonghyeonye/flipped_11B")
model = AutoModelForSeq2SeqLM.from_pretrained("seonghyeonye/flipped_11B")

# Encode an "input: {input}\noutput: {output}" query for the encoder.
inputs = tokenizer.encode("input: this is the best cast iron skillet you will ever buy\noutput: Positive", return_tensors="pt")

# The decoder generates the instruction that could have produced this pair.
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
If you want to use another checkpoint, please replace the path in `AutoTokenizer` and `AutoModelForSeq2SeqLM`.
**Note: the model was trained with bf16 activations. As such, we highly discourage running inference in fp16; fp32 or bf16 should be preferred.**
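For instance, you can load the weights directly in bf16 (a minimal sketch; the `torch_dtype` argument of `from_pretrained` assumes a reasonably recent `transformers` release):

```python
import torch
from transformers import AutoModelForSeq2SeqLM

# Load the weights in bfloat16 to match the training precision.
model = AutoModelForSeq2SeqLM.from_pretrained(
    "seonghyeonye/flipped_11B",
    torch_dtype=torch.bfloat16,
)
```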
# Training procedure
FLIPPED models are based on [T5](https://huggingface.co/google/t5-v1_1-large), a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective on [C4](https://huggingface.co/datasets/c4).
At a high level, the input text along with the output label is fed to the encoder, and the instruction text is produced by the decoder. The model is fine-tuned to autoregressively generate this target. We also feed the input text along with a wrong output, adding an unlikelihood loss so that the model does not produce the proper instruction in that case.
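A minimal sketch of this two-term objective (our illustration, not the authors' code; names like `flipped_objective` are hypothetical, and the loss weighting may differ from the paper):

```python
import torch
import torch.nn.functional as F

def flipped_objective(logits_correct, logits_wrong, instruction_ids, pad_id=0):
    # logits_*: (batch, seq_len, vocab) decoder outputs for the correct and
    # wrong (input, output) pairs; instruction_ids: (batch, seq_len) target.
    vocab = logits_correct.size(-1)
    mask = (instruction_ids != pad_id).float()

    # Likelihood term: learn to generate the instruction for the correct pair.
    lm_loss = F.cross_entropy(
        logits_correct.view(-1, vocab),
        instruction_ids.view(-1),
        ignore_index=pad_id,
    )

    # Unlikelihood term: given a wrong output, penalize the probability of
    # each instruction token, i.e. minimize -log(1 - p(token)).
    p = logits_wrong.softmax(-1).gather(-1, instruction_ids.unsqueeze(-1)).squeeze(-1)
    ul_loss = -(torch.log1p(-p.clamp(max=1.0 - 1e-6)) * mask).sum() / mask.sum()

    return lm_loss + ul_loss
```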
Training details:
- Fine-tuning steps: 5'000
- Input sequence length: 384 (512 for 3B)
- Target sequence length: 64
- Batch size: 1
- Optimizer: Adafactor
- Learning rate: 5e-5
- Dropout: 0.1
- Sampling strategy: proportional to the number of examples in each dataset (we randomly subsampled any dataset with over 500'000 examples down to at most 500'000 examples; we also randomly choose which instruction to generate at each training step, so ideally each instruction appears about *num_examples/num_templates* times during training, as in the sketch after this list)
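A minimal sketch of this capped, proportional sampling (our illustration; the dataset sizes below are only indicative):

```python
import random

def dataset_weights(dataset_sizes, cap=500_000):
    # Each dataset contributes proportionally to its size, capped at 500'000 examples.
    capped = {name: min(size, cap) for name, size in dataset_sizes.items()}
    total = sum(capped.values())
    return {name: size / total for name, size in capped.items()}

weights = dataset_weights({"imdb": 25_000, "ag_news": 120_000, "qqp": 364_000})
# Draw the dataset to sample the next training example from.
name = random.choices(list(weights), weights=list(weights.values()), k=1)[0]
```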
# Training data
We trained different variants of FLIPPED with different mixtures of datasets.

|Model|Training datasets|
|--|--|
|FLIPPED|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Topic Classification: AG News, DBPedia<br>- Paraphrase Identification: MRPC, PAWS, QQP|
|FLIPPED_3B|Same as FLIPPED but starting from a T5-LM (3B parameters) pre-trained model|
We only choose prompt examples that have output labels, which can be found on each dataset page.
# Evaluation data
We evaluate our models on the following datasets:

|Task category|Datasets|
|-|-|
|Natural language inference|ANLI (R1, R2, R3), CB, RTE|
|Coreference resolution|WSC, Winogrande|
|Word sense disambiguation|WiC|
|Sentence completion|COPA, HellaSwag, Story Cloze|
|QA|PIQA, ARC-Challenge, OpenbookQA|
We also evaluate FLIPPED on a subset of the [BIG-bench benchmark](https://github.com/google/BIG-bench):
- Code description task
- Conceptual combinations
- Hindu knowledge json
- Known unknowns
- Language identification
- Logic grid puzzle task
- Logical deduction
- Common misconceptions
- Movie dialog same or different
- Novel concepts
- Strategyqa
- Formal fallacies syllogisms negation
- VitaminC
- Winowhy multiple choice
# Label generalization
We evaluate the robustness of the models on the following datasets by changing their output labels. The substitute words can be found in our [paper](https://arxiv.org/abs/2210.02969).

|Task category|(Datasets, Template name)|
|-|-|
|Unseen tasks|(WSC, does the pronoun refer to), (CB, can we infer), (RTE, MNLI crowdsource)|
|Seen tasks|(IMDB, Reviewer Enjoyment Yes No), (PAWS, Meaning)|
The template names we used can be found in the [promptsource template library](https://github.com/bigscience-workshop/promptsource/tree/main/promptsource/templates).
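For example, a template can be loaded by name and applied to a raw example (a sketch following the promptsource README; the exact example fields depend on the dataset):

```python
from promptsource.templates import DatasetTemplates

# Fetch the IMDB templates and select the one listed above.
imdb_templates = DatasetTemplates("imdb")
template = imdb_templates["Reviewer Enjoyment Yes No"]

example = {"text": "this is the best cast iron skillet you will ever buy", "label": 1}
result = template.apply(example)
print(result[0])  # the prompted input
print(result[1])  # the target label as text
```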
# BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2210.02969,
  doi = {10.48550/ARXIV.2210.02969},
  url = {https://arxiv.org/abs/2210.02969},
  author = {Ye, Seonghyeon and Kim, Doyoung and Jang, Joel and Shin, Joongbo and Seo, Minjoon},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
  title = {Guess the Instruction! Making Language Models Stronger Zero-Shot Learners},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```