{"paper_url": "https://huggingface.co/papers/1701.06538", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [MoNDE: Mixture of Near-Data Experts for Large-Scale Sparse Models](https://huggingface.co/papers/2405.18832) (2024)\n* [U2++ MoE: Scaling 4.7x parameters with minimal impact on RTF](https://huggingface.co/papers/2404.16407) (2024)\n* [A Provably Effective Method for Pruning Experts in Fine-tuned Sparse Mixture-of-Experts](https://huggingface.co/papers/2405.16646) (2024)\n* [Turbo Sparse: Achieving LLM SOTA Performance with Minimal Activated Parameters](https://huggingface.co/papers/2406.05955) (2024)\n* [Graph Knowledge Distillation to Mixture of Experts](https://huggingface.co/papers/2406.11919) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"} |