---
datasets:
- multi_nli
- snli
- scitail
metrics:
- accuracy
- f1
pipeline_tag: zero-shot-classification
language:
- en
model-index:
- name: AntoineBlanot/flan-t5-xxl-classif-3way
  results:
  - task:
      type: nli
      name: Natural Language Inference
    dataset:
      type: multi_nli
      name: MultiNLI
      split: validation_matched
    metrics:
    - type: accuracy
      value: 0.9230769230769231
      name: Validation matched accuracy
    - type: f1
      value: 0.9225172687920663
      name: Validation matched f1
  - task:
      type: nli
      name: Natural Language Inference
    dataset:
      type: multi_nli
      name: MultiNLI
      split: validation_mismatched
    metrics:
    - type: accuracy
      value: 0.9222945484133441
      name: Validation mismatched accuracy
    - type: f1
      value: 0.9216699467726924
      name: Validation mismatched f1
---
# T5ForSequenceClassification
T5ForSequenceClassification adapts the original T5 architecture for sequence classification tasks.
T5 was originally built for text-to-text tasks and excels at them. It can handle any NLP task that has been converted to a text-to-text format, including sequence classification! You can find here how the original T5 is used for sequence classification.
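For reference, this is roughly what the original text-to-text framing looks like with the stock transformers API, using the `mnli hypothesis: ... premise: ...` prompt format from the T5 paper. The class label is produced as generated text:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Classification as text-to-text: the model *generates* the class label.
prompt = ("mnli hypothesis: Some men are playing a sport. "
          "premise: A soccer game with multiple males playing.")
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=5)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # e.g. "entailment"
```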
Our motivation for building T5ForSequenceClassification is that the full original T5 architecture is not needed for most NLU tasks. NLU tasks generally do not require text generation, so a large decoder is unnecessary. By removing the decoder, we can halve the number of parameters (and thus the computation cost) and efficiently optimize the network for the given task.
## Table of Contents
- [Usage](#usage)
- [Why use T5ForSequenceClassification?](#why-use-t5forsequenceclassification)
- [T5ForClassification vs T5](#t5forclassification-vs-t5)
## Usage
T5ForSequenceClassification supports the task of zero-shot classification. It can be used directly for:
- topic classification
- intent recognition
- boolean question answering
- sentiment analysis
- and any other task whose goal is to classify a text...
Since the T5ForClassification class is currently not supported by the transformers library, you cannot use this model directly from the Hub. To use T5ForSequenceClassification, you will have to install additional packages and model weights. You can find instructions here.
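As an illustration only, zero-shot classification with an NLI model typically turns each candidate label into a hypothesis and keeps the label whose hypothesis is most strongly entailed. The sketch below assumes a hypothetical `T5ForSequenceClassification` class with a `from_pretrained` method and an entailment class at index 0; refer to the linked instructions for the actual package and API.

```python
import torch
from transformers import AutoTokenizer
# Hypothetical import: the real class ships in a separate package
# (see the instructions linked above), not in transformers itself.
from t5_classification import T5ForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("AntoineBlanot/flan-t5-xxl-classif-3way")
model = T5ForSequenceClassification.from_pretrained("AntoineBlanot/flan-t5-xxl-classif-3way")

# Zero-shot topic classification framed as NLI: the text is the premise,
# and each candidate label becomes a hypothesis.
text = "One day I will see the world."
candidate_labels = ["travel", "cooking", "dancing"]
hypotheses = [f"This example is about {label}." for label in candidate_labels]

inputs = tokenizer([text] * len(hypotheses), hypotheses,
                   padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (num_labels, 3) class logits

# Keep the label whose hypothesis gets the highest entailment score
# (assumption: entailment is class index 0).
print(candidate_labels[logits[:, 0].argmax().item()])
```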
## Why use T5ForSequenceClassification?
Models based on the BERT architecture, like RoBERTa and DeBERTa, have shown very strong performance on sequence classification tasks and are still widely used today. However, those models only scale up to ~1.5B parameters (DeBERTa xxlarge), resulting in limited knowledge compared to bigger models. Models based on the T5 architecture, on the other hand, scale up to ~11B parameters (t5-xxl), and innovations on this architecture are recent and keep coming (mT5, Flan-T5, UL2, Flan-UL2, and probably more...).
## T5ForClassification vs T5
T5ForClassification architecture (a rough sketch in code follows this list):
- Encoder: same as original T5
- Decoder: only the first layer (for pooling purposes)
- Classification head: simple Linear layer on top of the decoder
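A minimal sketch of that design in PyTorch, assuming one starts from a stock transformers T5 checkpoint (t5-small here for brevity). This is an approximation of the idea, not the actual implementation:

```python
import torch
import torch.nn as nn
from transformers import T5Model

class T5ForSequenceClassificationSketch(nn.Module):
    def __init__(self, name="t5-small", num_labels=3):
        super().__init__()
        t5 = T5Model.from_pretrained(name)
        # Keep the full encoder, truncate the decoder to its first layer.
        t5.decoder.block = t5.decoder.block[:1]
        self.t5 = t5
        # Classification head: a simple linear layer on top of the decoder.
        self.classifier = nn.Linear(t5.config.d_model, num_labels)

    def forward(self, input_ids, attention_mask):
        # Feed a single start token to the decoder; its output hidden state
        # serves as a pooled representation of the encoded sequence.
        start = torch.full(
            (input_ids.size(0), 1),
            self.t5.config.decoder_start_token_id,
            dtype=torch.long, device=input_ids.device,
        )
        out = self.t5(input_ids=input_ids, attention_mask=attention_mask,
                      decoder_input_ids=start, use_cache=False)
        pooled = out.last_hidden_state[:, 0]   # (batch, d_model)
        return self.classifier(pooled)         # (batch, num_labels) class logits
```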
Benefits and Drawbacks:
- (+) Keeps T5 encoding strength
- (+) Half the parameter count (and thus half the computation cost)
- (+) Interpretable outputs (class logits)
- (+) No generation mistakes and faster prediction (no generation latency)
- (-) Loses the text-to-text ability
Special thanks to philschmid for making a Flan-T5-xxl checkpoint in fp16.