Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning
Abstract
We introduce Adapters, an open-source library that unifies parameter-efficient and modular transfer learning in large language models. By integrating 10 diverse adapter methods into a unified interface, Adapters offers ease of use and flexible configuration. Our library allows researchers and practitioners to leverage adapter modularity through composition blocks, enabling the design of complex adapter setups. We demonstrate the library's efficacy by evaluating its performance against full fine-tuning on various NLP tasks. Adapters provides a powerful tool for addressing the challenges of conventional fine-tuning paradigms and promoting more efficient and modular transfer learning. The library is available via https://adapterhub.ml/adapters.
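To illustrate the unified interface and composition blocks the abstract describes, here is a minimal sketch based on the library's documented usage. The calls (`adapters.init`, `add_adapter`, `train_adapter`, `active_adapters`, `Stack`) follow the AdapterHub documentation as I recall it and may differ between releases; the checkpoint `roberta-base`, the adapter names `lang`/`task`, and the config strings are placeholder choices, not prescriptions from the paper.

```python
# Sketch: attaching and composing adapters on a standard Hugging Face model
# (names follow the AdapterHub docs; verify against the installed release).
from transformers import AutoModelForSequenceClassification
import adapters
from adapters.composition import Stack

# Load a vanilla Transformers model and add adapter support to it.
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2
)
adapters.init(model)

# Add two adapters through the unified config interface:
# a sequential bottleneck adapter and a LoRA module.
model.add_adapter("lang", config="seq_bn")
model.add_adapter("task", config="lora")

# Freeze the pretrained weights and make only the task adapter trainable,
# then activate both adapters as a composed stack for the forward pass.
model.train_adapter("task")
model.active_adapters = Stack("lang", "task")
```

From here the model can, in principle, be trained with a standard Trainer loop; only the task adapter's parameters receive gradients, which is the parameter-efficient setup the abstract refers to.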
Community
I firmly believe this adapter approach is the future of smart workflows; in parallel, I had forgotten how great RoBERTa and BERT models are at certain classification and Q&A tasks.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Non-Intrusive Adaptation: Input-Centric Parameter-efficient Fine-Tuning for Versatile Multimodal Modeling (2023)
- ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale (2023)
- Prototype-based HyperAdapter for Sample-Efficient Multi-task Tuning (2023)
- BLIP-Adapter: Parameter-Efficient Transfer Learning for Mobile Screenshot Captioning (2023)
- Audio-AdapterFusion: A Task-ID-free Approach for Efficient and Non-Destructive Multi-task Speech Recognition (2023)