arxiv:2411.07501

LAuReL: Learned Augmented Residual Layer

Published on Nov 12, 2024

Abstract

One of the core pillars of efficient deep learning is architectural improvement, such as the residual/skip connection, which has led to significantly better model convergence and quality. The residual connection has since become ubiquitous not just in convolutional neural networks but also in transformer-based architectures, the backbone of LLMs. In this paper we introduce the Learned Augmented Residual Layer (LAuReL) -- a novel generalization of the canonical residual connection -- designed as an in-situ replacement for it while outperforming it on both model quality and footprint metrics. Our experiments show that using LAuReL can boost performance for both vision and language models. For example, on ResNet-50 with ImageNet-1K, it achieves 60% of the gains from adding an extra layer while adding only 0.003% more parameters, and matches those gains while adding 2.6× fewer parameters.
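For a concrete picture of the idea, below is a minimal PyTorch sketch of a learned augmented residual connection in the spirit of the abstract: the plain update x + f(x) is replaced by alpha * f(x) + g(x), where alpha is a learned scalar and g is a low-rank learned transform of the skip path initialized to the identity. The class name, the rank, and this exact parameterization are illustrative assumptions rather than the paper's formulation; see the paper for the actual method.

```python
# Hypothetical sketch of a learned augmented residual connection
# (illustration based on the abstract, not the paper's exact formulation).
# Plain residual:      x_{i+1} = f(x_i) + x_i
# Augmented residual:  x_{i+1} = alpha * f(x_i) + g(x_i),
# with g(x) = x + up(down(x)) as a low-rank learned skip transform.
import torch
import torch.nn as nn


class LearnedAugmentedResidual(nn.Module):
    def __init__(self, block: nn.Module, dim: int, rank: int = 4):
        super().__init__()
        self.block = block                         # f(.): any sub-layer, e.g. attention or MLP
        self.alpha = nn.Parameter(torch.ones(1))   # learned weight on the block output
        # Low-rank learned map on the skip path.
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)             # start as a plain residual connection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        skip = x + self.up(self.down(x))           # g(x): augmented skip connection
        return self.alpha * self.block(x) + skip


# Usage: wrap an existing sub-layer in place of "x + block(x)".
layer = LearnedAugmentedResidual(nn.Sequential(nn.Linear(64, 64), nn.GELU()), dim=64)
out = layer(torch.randn(2, 64))
print(out.shape)  # torch.Size([2, 64])
```

With this parameterization the augmentation adds only 2·dim·rank weights and one scalar per layer, which is in line with the abstract's claim of a negligible parameter footprint relative to adding a full extra layer.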

Community

Hi, this was a wonderful read. We summarised this paper and a few others in our biweekly blog.

  1. nGPT: Normalized Transformer with Representation Learning on the Hypersphere
  2. LAuReL: Learned Augmented Residual Layer
  3. TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters

Please give it a read and share your thoughts/feedback.

