Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design
Abstract
The proliferation of large language models (LLMs) has led to the adoption of Mixture-of-Experts (MoE) architectures that dynamically leverage specialized subnetworks for improved efficiency and performance. Despite their benefits, MoE models face significant challenges during inference, including inefficient memory management and suboptimal batching, due to design choices that are misaligned between the model architecture and the system policies. Furthermore, the conventional approach of training MoEs from scratch is increasingly cost-prohibitive. In this paper, we propose Read-ME, a novel framework that transforms pre-trained dense LLMs into smaller MoE models (in contrast to "upcycling" generalist MoEs), avoiding the high costs of ground-up training. Our approach employs activation sparsity to extract experts. To compose the experts, we examine the widely adopted layer-wise router design, show its redundancy, and introduce a pre-gating router, decoupled from the MoE backbone, that facilitates system-friendly pre-computing and lookahead scheduling, enhancing expert-aware batching and caching. Our co-design therefore addresses critical gaps on both the algorithmic and system fronts, establishing a scalable and efficient alternative for LLM inference in resource-constrained settings. Read-ME outperforms other popular open-source dense models of similar scale, achieving improvements of up to 10.1% on MMLU and reducing mean end-to-end latency by up to 6.1%. Code is available at: https://github.com/VITA-Group/READ-ME.
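To make the decoupled pre-gating idea concrete, here is a minimal PyTorch sketch of the design described in the abstract: a single router computes expert assignments for all tokens before the MoE backbone runs, so routing decisions are known ahead of time and the serving system can batch tokens by expert and pre-fetch (cache) expert weights. The module names, shapes, single shared router, and top-k choice below are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
# Illustrative sketch only: a decoupled pre-gating router plus an MoE FFN that
# consumes precomputed routing, instead of owning a per-layer router.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PreGatingRouter(nn.Module):
    """Computes expert assignments for every token *before* the MoE backbone runs,
    enabling lookahead scheduling and expert-aware batching/caching."""

    def __init__(self, hidden_dim: int, num_experts: int, top_k: int = 1):
        super().__init__()
        self.gate = nn.Linear(hidden_dim, num_experts, bias=False)
        self.top_k = top_k

    def forward(self, token_embeddings: torch.Tensor):
        # token_embeddings: [batch, seq_len, hidden_dim], e.g. input embeddings
        logits = self.gate(token_embeddings)
        weights, expert_ids = torch.topk(F.softmax(logits, dim=-1), self.top_k, dim=-1)
        return expert_ids, weights  # known ahead of the backbone's execution


class MoEFFN(nn.Module):
    """An MoE feed-forward block that consumes *precomputed* routing decisions."""

    def __init__(self, hidden_dim: int, ffn_dim: int, num_experts: int):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden_dim, ffn_dim), nn.GELU(), nn.Linear(ffn_dim, hidden_dim))
            for _ in range(num_experts)
        ])

    def forward(self, x, expert_ids, weights):
        # x: [batch, seq_len, hidden_dim]; expert_ids/weights: [batch, seq_len, top_k]
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            for k in range(expert_ids.shape[-1]):
                mask = expert_ids[..., k] == e  # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out


if __name__ == "__main__":
    router = PreGatingRouter(hidden_dim=64, num_experts=4, top_k=1)
    moe = MoEFFN(hidden_dim=64, ffn_dim=128, num_experts=4)
    tokens = torch.randn(2, 8, 64)
    ids, w = router(tokens)            # routing available up front
    print(moe(tokens, ids, w).shape)   # torch.Size([2, 8, 64])
```

Because the router is detached from every layer, the scheduler can inspect `ids` for a whole batch before decoding starts, which is the system-level hook for the expert-aware batching and caching mentioned above.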
Community
Convert a pre-trained LLM into a system-optimized MoE with a decoupled router through system co-design. Read-ME surpasses other popular open-source dense models of similar scale, achieving up to a 10.1% improvement on MMLU and reducing mean end-to-end latency by up to 6.1%.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- ExpertFlow: Optimized Expert Activation and Token Allocation for Efficient Mixture-of-Experts Inference (2024)
- MoE-Pruner: Pruning Mixture-of-Experts Large Language Model using the Hints from Its Router (2024)
- Revisiting SMoE Language Models by Evaluating Inefficiencies with Task Specific Expert Pruning (2024)
- MC-MoE: Mixture Compressor for Mixture-of-Experts LLMs Gains More (2024)
- Upcycling Large Language Models into Mixture of Experts (2024)