Article: Accelerating LLM Inference: Fast Sampling with Gumbel-Max Trick • By cxdu • 26 days ago • 8
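The Gumbel-Max trick referenced in this article's title samples from a categorical distribution by adding independent Gumbel(0, 1) noise to the raw logits and taking the argmax, avoiding the softmax normalization and cumulative-sum passes of standard sampling. A minimal sketch (the function name and structure are illustrative, not taken from the article):

```python
import math
import random

def gumbel_max_sample(logits, rng=random):
    """Draw one index distributed as softmax(logits) via the Gumbel-Max trick."""
    # g_i = -log(-log(u_i)) with u_i ~ Uniform(0, 1) is Gumbel(0, 1) noise.
    # argmax_i (logit_i + g_i) then follows the softmax distribution exactly,
    # so no normalization constant or CDF scan is needed.
    noisy = [l - math.log(-math.log(rng.random())) for l in logits]
    return max(range(len(noisy)), key=noisy.__getitem__)
```

Over many draws, the empirical frequencies converge to softmax(logits); per-token cost is one uniform draw, two logs, and one comparison per vocabulary entry.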
A Unified View of Delta Parameter Editing in Post-Trained Large-Scale Models Paper • 2410.13841 • Published Oct 17 • 14
I-SHEEP: Self-Alignment of LLM from Scratch through an Iterative Self-Enhancement Paradigm Paper • 2408.08072 • Published Aug 15 • 32
Nemotron 4 340B Collection Nemotron-4: open models for Synthetic Data Generation (SDG). Includes Base, Instruct, and Reward models. • 4 items • Updated 16 days ago • 158
Eurus Collection Advancing LLM Reasoning Generalists with Preference Trees • 11 items • Updated 28 days ago • 24
Weak-to-Strong Extrapolation Expedites Alignment Collection Better aligned models obtained by weak-to-strong model extrapolation (ExPO) • 25 items • Updated 23 days ago • 16
Qwen1.5 Collection Qwen1.5 is the improved version of Qwen, the large language model series developed by Alibaba Cloud. • 55 items • Updated Sep 18 • 206
Purple Llama CyberSecEval: A Secure Coding Benchmark for Language Models Paper • 2312.04724 • Published Dec 7, 2023 • 20
Pythia Scaling Suite Collection Pythia is the first LLM suite designed specifically to enable scientific research on LLMs. To learn more, see https://github.com/EleutherAI/pythia • 18 items • Updated Nov 21, 2023 • 24