arxiv:2303.15647
Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning
Published on Mar 28, 2023
Abstract
This paper presents a systematic overview and comparison of parameter-efficient fine-tuning methods covering over 40 papers published between February 2019 and February 2023. These methods aim to resolve the infeasibility and impracticality of fine-tuning large language models by training only a small set of parameters. We provide a taxonomy that covers a broad range of methods and present a detailed method comparison with a specific focus on real-life efficiency and fine-tuning multi-billion-scale language models.
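As a concrete illustration of the abstract's core idea, training only a small set of parameters while the pretrained weights stay frozen, here is a minimal low-rank (LoRA-style) sketch in PyTorch. The class name, rank, and scaling hyperparameters are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a small trainable low-rank update (LoRA-style)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # Only these two small factor matrices are trained.
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen pretrained path plus the trainable low-rank correction.
        return self.base(x) + (x @ self.lora_a @ self.lora_b) * self.scaling

# Toy usage: only the low-rank factors receive gradients.
layer = LoRALinear(nn.Linear(512, 512))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")  # a small fraction of the layer
```

The same pattern extends to a full transformer by wrapping selected projection layers, which is how low-rank reparameterization methods keep the number of updated parameters small relative to the base model.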