Qwen2.5-Coder (Collection): Code-specific model series based on Qwen2.5. • 40 items • Updated 5 days ago • 198
LayerSkip (Collection): Models continually pretrained using LayerSkip (https://arxiv.org/abs/2404.16710). • 8 items • Updated 22 days ago • 43
Llama-3.1-Nemotron-70B (Collection): SOTA models on Arena Hard and RewardBench as of 1 Oct 2024. • 6 items • Updated Oct 15 • 136
Whisper Release (Collection): Whisper includes both English-only and multilingual checkpoints for ASR and ST, ranging from 38M params for the tiny models to 1.5B params for large. • 12 items • Updated Sep 13, 2023 • 87
Llama 3.2 Evals (Collection): This collection provides detailed information on how we derived the reported benchmark metrics for the Llama 3.2 models, including the configurations • 4 items • Updated Sep 25 • 20
Llama 3.2 (Collection): This collection hosts the transformers and original repos of the Llama 3.2 and Llama Guard 3 models. • 15 items • Updated 24 days ago • 468
🪐 SmolLM (Collection): A series of smol LLMs (135M, 360M and 1.7B). We release base and Instruct models as well as the training corpus and some WebGPU demos. • 12 items • Updated Aug 18 • 197
Parler-TTS: Expresso ☕️ (Collection): Parler-TTS v0.1 fine-tuned on the Expresso dataset, for expressive, voice-consistent generations. • 3 items • Updated Aug 7 • 6
Parler-TTS: fully open-source high-quality TTS (Collection): If you want to find out more about how these models were trained, or even fine-tune them yourself, check out the Parler-TTS repository on GitHub. • 7 items • Updated Aug 8 • 46
Llama 3.1 (Collection): This collection hosts the transformers and original repos of the Llama 3.1, Llama Guard 3 and Prompt Guard models. • 11 items • Updated Sep 25 • 618
PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications (Paper): arXiv:1701.05517 • Published Jan 19, 2017 • 3
BigCodeBench: Benchmarking Large Language Models on Solving Practical and Challenging Programming Tasks (Article) • Jun 18 • 41
LLM Compiler (Collection): Meta LLM Compiler is a state-of-the-art LLM that builds upon Code Llama with improved performance for code optimization and compiler reasoning. • 4 items • Updated Jun 27 • 148