---
license: other
license_name: tongyi-qianwen-license-agreement
license_link: https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
base_model: moreh/MoMo-72B-lora-1.8.7-DPO
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/pf4d6FA7DriRtVq5HCkxd.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/e4u8VYfDBh11u60rFYJHF.png)

Smaug arrives!

We recently released Smaug-72B-v0.1, which has taken first place on HuggingFace's Open LLM Leaderboard. It is the first open-source model to achieve an average score above 80.

Smaug-72B is finetuned directly from [moreh/MoMo-72B-lora-1.8.7-DPO](https://huggingface.co/moreh/MoMo-72B-lora-1.8.7-DPO) and is ultimately based on [Qwen-72B](https://huggingface.co/Qwen/Qwen-72B).

To build it, we drew on techniques and datasets used in our previous model efforts, adding some new datasets and a new approach. We believe this new approach is broadly useful for training across a wide range of model types and downstream use cases, and it powers both our Smaug-34B and Smaug-72B models.

We are currently writing up this new technique as a technical report, which we aim to release on arXiv soon (we may also release a new member of the Smaug lineup at that time!). We are excited to share the details with the open-source community so they can build on them, improve Smaug, and spawn more dragons to dominate the LLM space. Keep watching this space for our announcements!

### Evaluation Results

| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- |
| 80.48 | 76.02 | 89.27 | 77.15 | 76.67 | 85.08 | 78.70 |

### Contamination Results

We generate our contamination numbers using https://github.com/swj0419/detect-pretrain-code-contamination/tree/master, with Llama-7B as our reference model. Smaug-72B has the following results:

| ARC | TruthfulQA | GSM8K |
| --- | --- | --- |
| 0.20 | 0.45 | 1.00 |

By comparison, MoMo-72B-lora-1.8.7-DPO has the following results:

| ARC | TruthfulQA | GSM8K |
| --- | --- | --- |
| 0.20 | 0.39 | 1.00 |

Note that GSM8K often scores very highly on this contamination suite; we verified this by also running Llama-2-70B:

| ARC | TruthfulQA | GSM8K |
| --- | --- | --- |
| 0.22 | 0.51 | 0.89 |
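To make these numbers easier to reproduce, below is a minimal sketch of how one might drive the contamination checker over the three benchmarks. The entry-point path, flag names, and dataset identifiers are assumptions based on our reading of the repository's README, not a verified interface; consult the repo before running.

```python
# A minimal sketch of driving the contamination checker for each benchmark.
# Assumes the detect-pretrain-code-contamination repo is checked out locally;
# the script path, flags, and dataset identifiers below are assumptions.
import subprocess

TARGET_MODEL = "abacusai/Smaug-72B-v0.1"  # model under test
REF_MODEL = "huggyllama/llama-7b"         # the Llama-7B reference model

for dataset in ("ai2_arc", "truthful_qa", "gsm8k"):  # assumed identifiers
    subprocess.run(
        [
            "python", "src/run.py",        # assumed entry point
            "--target_model", TARGET_MODEL,
            "--ref_model", REF_MODEL,
            "--data", dataset,
            "--output_dir", f"out/{dataset}",
        ],
        check=True,  # stop early if any run fails
    )
```

As the tables above suggest, scores closer to 1.00 indicate a benchmark is more likely to have been seen during pretraining, which is why GSM8K reads high even for Llama-2-70B.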
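Similarly, the leaderboard numbers in the Evaluation Results table above can be approximated with EleutherAI's lm-evaluation-harness. The sketch below is an approximation under stated assumptions, not the exact leaderboard pipeline: the leaderboard pins a specific harness commit, and the task names shown (e.g. `truthfulqa_mc2`) follow recent harness releases, so scores may differ slightly. The few-shot counts match the leaderboard's published settings.

```python
# A minimal sketch of leaderboard-style evaluation with lm-evaluation-harness.
# Task names and API details follow recent harness releases (assumption); the
# leaderboard itself pins an older commit, so results may not match exactly.
import lm_eval

# Few-shot counts follow the Open LLM Leaderboard's published settings.
TASKS = {
    "arc_challenge": 25,
    "hellaswag": 10,
    "mmlu": 5,
    "truthfulqa_mc2": 0,
    "winogrande": 5,
    "gsm8k": 5,
}

for task, shots in TASKS.items():
    out = lm_eval.simple_evaluate(
        model="hf",
        # parallelize=True shards the 72B weights across available GPUs
        model_args="pretrained=abacusai/Smaug-72B-v0.1,dtype=bfloat16,parallelize=True",
        tasks=[task],
        num_fewshot=shots,
    )
    # Per-task metrics; MMLU reports one entry per subject.
    print(task, out["results"])
```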