RecycleGPT: An Autoregressive Language Model with Recyclable Module
Abstract
Existing large language models must run their full forward pass K times to generate a sequence of K tokens. In this paper, we present RecycleGPT, a generative language model that achieves fast decoding by recycling pre-generated model states rather than running the whole model at every step. Our approach relies on the observation that adjacent tokens in a sequence are usually strongly correlated, so the next token can often be reasonably inferred from the preceding ones. Through theoretical analysis and experiments on downstream text generation tasks, we demonstrate that our approach lowers inference latency, achieving up to 1.4x speedup while preserving high performance.
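The decoding loop below is a minimal sketch of the recycling idea described in the abstract, not the paper's actual implementation. A lightweight "recyclable" module reuses the full model's cached hidden state to guess an extra token cheaply, and a single full forward pass then verifies the guess, so every accepted guess yields two tokens per full pass. The class names, the toy GRU backbone, and the greedy accept/reject rule are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code): recycle the full model's last hidden
# state through a cheap head to guess one extra token, then verify the guess
# with a single full forward pass.
import torch
import torch.nn as nn

VOCAB, HIDDEN = 100, 64  # toy sizes, purely illustrative

class FullModel(nn.Module):
    """Stand-in for the full LM: returns logits and hidden states per position."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.body = nn.GRU(HIDDEN, HIDDEN, batch_first=True)  # toy backbone
        self.head = nn.Linear(HIDDEN, VOCAB)

    def forward(self, tokens):
        h, _ = self.body(self.embed(tokens))
        return self.head(h), h

class RecycleModule(nn.Module):
    """Cheap module: predicts the *next* token directly from a recycled state."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(HIDDEN, VOCAB)

    def forward(self, state):
        return self.proj(state)

@torch.no_grad()
def recycle_decode(full, recycler, prompt, max_new=8):
    tokens = prompt.clone()
    logits, states = full(tokens)
    while tokens.shape[1] - prompt.shape[1] < max_new:
        t1 = logits[:, -1].argmax(-1, keepdim=True)               # full-model token
        guess = recycler(states[:, -1]).argmax(-1, keepdim=True)  # recycled guess
        candidate = torch.cat([tokens, t1, guess], dim=1)
        logits, states = full(candidate)  # one full pass covers both new tokens
        # The logits at t1's position are what the full model would emit after
        # t1; accept the recycled guess only if it agrees (greedy verification).
        if logits[:, -2].argmax(-1, keepdim=True).equal(guess):
            tokens = candidate                                    # 2 tokens / pass
        else:
            tokens = torch.cat([tokens, t1], dim=1)               # keep t1 only
            logits, states = logits[:, :-1], states[:, :-1]       # drop the guess

    return tokens

prompt = torch.randint(0, VOCAB, (1, 4))
print(recycle_decode(FullModel(), RecycleModule(), prompt).shape)
```

With greedy verification as sketched, the output matches what plain greedy autoregressive decoding would produce; the speedup comes entirely from the fraction of recycled guesses that are accepted, and the up-to-1.4x figure in the abstract would correspond to a substantial acceptance rate in practice.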
Community
Librarian Bot found the following similar papers, recommended via the Semantic Scholar API:
- Amphista: Accelerate LLM Inference with Bi-directional Multiple Drafting Heads in a Non-autoregressive Style (2024)
- Towards Fast Multilingual LLM Inference: Speculative Decoding and Specialized Drafters (2024)
- When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models (2024)
- Breaking the Attention Bottleneck (2024)
- Make Some Noise: Unlocking Language Model Parallel Inference Capability through Noisy Training (2024)