Igor Molybog
igormolybog
AI & ML interests
Optimization, Machine Learning
Organizations
None yet
Collections
17
- FlashDecoding++: Faster Large Language Model Inference on GPUs
  Paper • 2311.01282 • Published • 35
- Co-training and Co-distillation for Quality Improvement and Compression of Language Models
  Paper • 2311.02849 • Published • 3
- Prompt Cache: Modular Attention Reuse for Low-Latency Inference
  Paper • 2311.04934 • Published • 28
- Exponentially Faster Language Modelling
  Paper • 2311.10770 • Published • 118
Models
None public yet
Datasets
None public yet