arxiv:2406.14909

MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression

Published on Jun 21 · Submitted by Foxfi on Jun 28

Abstract

Sparse attention can effectively mitigate the significant memory and throughput demands of Large Language Models (LLMs) in long contexts. Existing methods typically employ a uniform sparse attention mask, applying the same sparse pattern across different attention heads and input lengths. However, this uniform approach fails to capture the diverse attention patterns inherent in LLMs, ignoring their distinct accuracy-latency trade-offs. To address this challenge, we propose the Mixture of Attention (MoA), which automatically tailors distinct sparse attention configurations to different heads and layers. MoA constructs and navigates a search space of various attention patterns and their scaling rules relative to input sequence lengths. It profiles the model, evaluates potential configurations, and pinpoints the optimal sparse attention compression plan. MoA adapts to varying input sizes, revealing that some attention heads expand their focus to accommodate longer sequences, while other heads consistently concentrate on fixed-length local contexts. Experiments show that MoA increases the effective context length by 3.9× with the same average attention span, boosting retrieval accuracy by 1.5-7.1× over the uniform-attention baseline across Vicuna-7B, Vicuna-13B, and Llama3-8B models. Moreover, MoA narrows the capability gaps between sparse and dense models, reducing the maximum relative performance drop from 9%-36% to within 5% across two long-context understanding benchmarks. MoA achieves a 1.2-1.4× GPU memory reduction and boosts decode throughput by 5.5-6.7× for 7B and 13B dense models on a single GPU, with minimal impact on performance.
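
To illustrate the core idea of per-head heterogeneous sparse masks (as opposed to one uniform pattern), here is a minimal PyTorch sketch. It is not the paper's implementation; the mask builder, the causal sliding-window pattern, and the example window sizes are assumptions chosen for clarity.

```python
# Minimal sketch (not the paper's code): build per-head causal sliding-window masks,
# where each attention head is assigned its own local window size.
import torch

def per_head_local_masks(seq_len: int, window_sizes: list[int]) -> torch.Tensor:
    """Return a (num_heads, seq_len, seq_len) boolean mask; True = may attend."""
    idx = torch.arange(seq_len)
    causal = idx[None, :] <= idx[:, None]            # query i attends keys j <= i
    masks = []
    for w in window_sizes:                           # a distinct window per head
        local = (idx[:, None] - idx[None, :]) < w    # keep only the last w keys
        masks.append(causal & local)
    return torch.stack(masks)                        # (num_heads, seq_len, seq_len)

# Four heads with heterogeneous spans instead of one uniform window.
mask = per_head_local_masks(seq_len=8, window_sizes=[2, 4, 6, 8])
print(mask.shape)  # torch.Size([4, 8, 8])
```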

Community

Paper author · Paper submitter

Compressing the attention operation is crucial for efficiently processing long inputs. Existing sparse attention methods (more specifically, local attention methods) adopt uniform, fixed attention masks across different attention heads. However, some heads need to attend to more distant information than others, and as the input sequence gets longer, some heads may need to increase their span more than others. In this work, we propose MoA, which overcomes the drawbacks of uniform sparse attention by searching for heterogeneous elastic rules for each attention head with an automatic pipeline.
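
To make "heterogeneous elastic rules" concrete, below is a hedged sketch of one way such a rule could look: each head's attention span scales with the input length N. The linear form alpha + beta * N and the example coefficients are illustrative assumptions, not values taken from the paper.

```python
# Hedged sketch of a per-head "elastic rule": the attention span of a head
# grows with the input length N. Coefficients here are illustrative only.
from dataclasses import dataclass

@dataclass
class ElasticRule:
    alpha: float  # base span (tokens)
    beta: float   # how strongly the span grows with input length

    def span(self, input_len: int) -> int:
        # Span can never exceed the input length itself.
        return min(input_len, int(self.alpha + self.beta * input_len))

# A head with beta ~ 0 keeps a fixed local window; a head with larger beta
# widens its focus as the sequence gets longer.
rules = [ElasticRule(alpha=128, beta=0.0),   # fixed-length local context
         ElasticRule(alpha=64,  beta=0.5)]   # expands with sequence length

for n in (4096, 16384):
    print(n, [r.span(n) for r in rules])
# 4096  [128, 2112]
# 16384 [128, 8256]
```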

Nice 👌

Does the 5-6-fold increase in throughput hold when using a batch size of 1, i.e. local inference?

Paper author · edited 9 days ago

Hey, thanks for your attention and question! Our throughput improvement comes from three factors: (1) a static-size KV cache; (2) reduced attention computation; (3) a reduced KV cache, which lets us use a larger batch size. In our experiments, these three factors contribute improvements of around 3.0x, 1.5x, and 1.4x, respectively, for the 7B/13B models on an A100.

Table 7 in our paper shows the throughput improvements when using the same batch size. For example, when processing 16k inputs with a batch size of 8, factors (1) and (2) contribute an overall 5.4x improvement in throughput (from 32.9 tokens/s to 178.3 tokens/s for the 7B model). We didn't experiment with batch size = 1.
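
As a quick arithmetic check using only the figures quoted in this thread (illustrative Python, no project code assumed): the three per-factor gains multiply to roughly the 5.5-6.7x end-to-end range from the abstract, and the Table 7 example works out to about 5.4x.

```python
# Arithmetic check using only the numbers quoted above (no project code).
factors = {
    "static-size KV cache": 3.0,
    "reduced attention computation": 1.5,
    "larger batch size from smaller KV cache": 1.4,
}
combined = 1.0
for name, gain in factors.items():
    combined *= gain
print(f"product of the three factors: ~{combined:.1f}x")  # ~6.3x

# Table 7 example: 7B model, 16k inputs, batch size 8 (factors (1) and (2) only)
print(f"measured speedup: {178.3 / 32.9:.1f}x")            # ~5.4x
```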


