Compute-efficient-inference

FlashDecoding++: Faster Large Language Model Inference on GPUs
Paper • 2311.01282 • Published Nov 2, 2023 • 35

Exponentially Faster Language Modelling
Paper • 2311.10770 • Published Nov 15, 2023 • 118

Neural Network Diffusion
Paper • 2402.13144 • Published Feb 20 • 94