MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression Paper • 2406.14909 • Published Jun 21, 2024
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding Paper • 2307.15337 • Published Jul 28, 2023