Adaptive Depth Transformers
Implementation of the paper "How Many Layers and Why? An Analysis of the Model Depth in Transformers". In this study, we investigate the role of multiple layers in deep transformer models. We design a variant of ALBERT that dynamically adapts the number of layers to each input token.
Model architecture
We augment a multi-layer transformer encoder with a halting mechanism that dynamically adjusts the number of layers applied to each token. The mechanism is adapted directly from Graves (2016): at each iteration, we compute, for each token, a probability of stopping the update of its state.
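The sketch below illustrates the idea of such a halting mechanism on top of a single weight-shared transformer layer (ALBERT shares parameters across layers), following the adaptive computation time scheme of Graves (2016). The class and parameter names (ACTHaltingEncoder, max_layers, halting_threshold) are illustrative assumptions and do not reproduce the exact pre-training code:

import torch
import torch.nn as nn

class ACTHaltingEncoder(nn.Module):
    """Illustrative ACT-style halting over one weight-shared transformer layer."""

    def __init__(self, layer: nn.Module, hidden_size: int,
                 max_layers: int = 24, halting_threshold: float = 0.999):
        super().__init__()
        self.layer = layer                        # shared transformer layer (ALBERT-style)
        self.halt = nn.Linear(hidden_size, 1)     # produces a per-token halting score
        self.max_layers = max_layers
        self.threshold = halting_threshold

    def forward(self, hidden):                    # hidden: (batch, seq_len, hidden_size)
        batch, seq_len, _ = hidden.shape
        halting_prob = hidden.new_zeros(batch, seq_len)  # accumulated halting probability
        remainders = hidden.new_zeros(batch, seq_len)
        updates = hidden.new_zeros(batch, seq_len)       # number of layer updates per token
        output = torch.zeros_like(hidden)

        for _ in range(self.max_layers):
            p = torch.sigmoid(self.halt(hidden)).squeeze(-1)   # halting probability per token
            still_running = (halting_prob < self.threshold).float()
            # tokens that cross the threshold at this iteration
            new_halted = ((halting_prob + p * still_running) > self.threshold).float() * still_running
            still_running = ((halting_prob + p * still_running) <= self.threshold).float() * still_running

            halting_prob = halting_prob + p * still_running
            remainders = remainders + new_halted * (1.0 - halting_prob)
            halting_prob = halting_prob + new_halted * remainders
            updates = updates + still_running + new_halted

            hidden = self.layer(hidden)
            # mix the new state into the output, weighted by the halting probabilities
            w = (p * still_running + new_halted * remainders).unsqueeze(-1)
            output = (1.0 - w) * output + w * hidden

            if still_running.sum() == 0:          # every token has halted
                break
        return output, updates

For example, wrapping a standard PyTorch layer gives per-token depths alongside the states:

layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
encoder = ACTHaltingEncoder(layer, hidden_size=64)
states, n_updates = encoder(torch.randn(2, 16, 64))  # n_updates: layer count per token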
Model use
The architecture is not yet included in the Transformers library. The code used for pre-training is available in the following GitHub repository, so you should first install the implementation:
!pip install git+https://github.com/AntoineSimoulin/adaptive-depth-transformers
Then you can use the model directly.
from act import AlbertActConfig, AlbertActModel, TFAlbertActModel
from transformers import AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained('asi/albert-act-base')
model = AlbertActModel.from_pretrained('asi/albert-act-base')
_ = model.eval()
inputs = tokenizer("a lump in the middle of the monkeys stirred and then fell quiet .", return_tensors="pt")
outputs = model(**inputs)
outputs.updates
# tensor([[[[15., 9., 10., 7., 3., 8., 5., 7., 12., 10., 6., 8., 8., 9., 5., 8.]]]])
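Each value in updates is the number of layer iterations spent on the corresponding input token (special tokens included). A quick, illustrative way to inspect the per-token depth, assuming the shapes shown above:

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, n in zip(tokens, outputs.updates.squeeze().tolist()):
    print(f"{token}: {int(n)} layer updates")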
Citations
BibTeX entry and citation info
If you use our iterative transformer model in your scientific publications or industrial applications, please cite the following paper:
@inproceedings{simoulin-crabbe-2021-many,
title = "How Many Layers and Why? {A}n Analysis of the Model Depth in Transformers",
author = "Simoulin, Antoine and
Crabb{\'e}, Benoit",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-srw.23",
doi = "10.18653/v1/2021.acl-srw.23",
pages = "221--228",
}
References
Alex Graves. 2016. Adaptive computation time for recurrent neural networks. CoRR, abs/1603.08983.
Evaluation results (self-reported)
- CoLA (Matthews corr.): 36.7
- SST-2 (accuracy): 87.8
- MRPC (accuracy): 81.4
- MRPC (F1): 86.5
- STS-B (Spearman corr.): 83.0
- STS-B (Pearson corr.): 84.2
- QQP (F1): 68.5
- QQP (accuracy): 87.7
- MNLI-m (accuracy): 79.9
- MNLI-mm (accuracy): 79.2