DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models • arXiv:2401.06066 • Published Jan 11, 2024
GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding • arXiv:2006.16668 • Published Jun 30, 2020
OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models • arXiv:2402.01739 • Published Jan 29, 2024
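The papers above all build on sparse, conditionally computed Mixture-of-Experts layers. As a rough illustration of the shared idea (not any single paper's method), the sketch below shows generic top-k expert routing: a learned gate scores experts per token, only the k highest-scoring experts run, and their outputs are combined with the renormalized gate weights. All names, shapes, and the choice of k=2 are illustrative assumptions.

```python
# Minimal sketch of top-k MoE routing (generic; hyperparameters are assumptions).
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class TopKMoE:
    def __init__(self, d_model=16, d_hidden=32, n_experts=8, k=2):
        self.k = k
        # Router: one linear projection producing a score per expert.
        self.w_gate = rng.normal(0, 0.02, (d_model, n_experts))
        # Each expert is a tiny two-layer MLP (weights only, for brevity).
        self.experts = [
            (rng.normal(0, 0.02, (d_model, d_hidden)),
             rng.normal(0, 0.02, (d_hidden, d_model)))
            for _ in range(n_experts)
        ]

    def __call__(self, x):
        # x: (n_tokens, d_model)
        logits = x @ self.w_gate                        # (n_tokens, n_experts)
        probs = softmax(logits)
        topk = np.argsort(-probs, axis=-1)[:, :self.k]  # chosen experts per token
        out = np.zeros_like(x)
        for t in range(x.shape[0]):
            # Renormalize gate weights over the selected experts only.
            w = probs[t, topk[t]]
            w = w / w.sum()
            for weight, e in zip(w, topk[t]):
                w1, w2 = self.experts[e]
                out[t] += weight * (np.maximum(x[t] @ w1, 0.0) @ w2)
        return out

tokens = rng.normal(size=(4, 16))
print(TopKMoE()(tokens).shape)  # (4, 16): each token activates only k experts
```

The papers differ mainly in how this routing is refined: GShard introduced top-2 gating with automatic sharding at scale, while DeepSeekMoE argues for finer-grained experts plus shared always-on experts to improve specialization.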