---
license: other
library_name: transformers
base_model:
- Qwen/Qwen2.5-72B-Instruct
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
model-index:
- name: Replete-LLM-V2.5-Qwen-72b_Duplicated
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 71.55
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rombodawg/Replete-LLM-V2.5-Qwen-72b_Duplicated
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 61.27
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rombodawg/Replete-LLM-V2.5-Qwen-72b_Duplicated
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 47.58
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rombodawg/Replete-LLM-V2.5-Qwen-72b_Duplicated
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 19.8
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rombodawg/Replete-LLM-V2.5-Qwen-72b_Duplicated
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 17.32
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rombodawg/Replete-LLM-V2.5-Qwen-72b_Duplicated
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 54.83
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rombodawg/Replete-LLM-V2.5-Qwen-72b_Duplicated
      name: Open LLM Leaderboard
---

# Rombos-LLM-V2.5-Qwen-72b

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/wp9qOi2K2WGzkey0I3SgH.jpeg)

Rombos-LLM-V2.5-Qwen-72b is a continuously finetuned version of Qwen2.5-72B. I recently noticed that the Qwen team had not adopted my continuous finetuning method, despite its clear benefits and lack of downsides, so I took it upon myself to merge the instruct model with the base model using the *TIES* merge method. The resulting model shows higher performance than both the original instruct and base models.
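The exact merge configuration is not published in this card, but the recipe described above (re-applying the instruct model onto the base model with TIES) maps directly onto a mergekit config. Below is a minimal sketch, assuming `Qwen/Qwen2.5-72B` as the base checkpoint and illustrative weight/density values; these parameters are assumptions, not the actual recipe used for this model:

```yaml
# Hypothetical mergekit config (e.g. ties-merge.yml); the values below are
# illustrative assumptions, not the published recipe for this model.
merge_method: ties
base_model: Qwen/Qwen2.5-72B            # assumed base checkpoint
models:
  - model: Qwen/Qwen2.5-72B-Instruct    # supplies the instruct task vector
    parameters:
      weight: 1.0                       # assumed: full contribution of the instruct deltas
      density: 1.0                      # assumed: keep all delta parameters (no sparsification)
parameters:
  normalize: true                       # rescale the merged task vectors
dtype: bfloat16
```

With mergekit installed, a config like this runs via `mergekit-yaml ties-merge.yml ./output-model`.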
Quants:

GGUF: https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-72b-GGUF

EXL2: (Coming soon)

Benchmarks: see the Open LLM Leaderboard results below.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_rombodawg__Replete-LLM-V2.5-Qwen-72b_Duplicated)

| Metric              | Value |
|---------------------|------:|
| Avg.                | 45.39 |
| IFEval (0-Shot)     | 71.55 |
| BBH (3-Shot)        | 61.27 |
| MATH Lvl 5 (4-Shot) | 47.58 |
| GPQA (0-shot)       | 19.80 |
| MuSR (0-shot)       | 17.32 |
| MMLU-PRO (5-shot)   | 54.83 |
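For reference, here is a minimal generation example using transformers, the library declared in the card metadata. The repo id below is inferred from the card title, and it assumes the merged model inherits Qwen2.5's chat template; both are assumptions, not details stated above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, inferred from the card title.
model_name = "rombodawg/Rombos-LLM-V2.5-Qwen-72b"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",  # load in the checkpoint's native precision
    device_map="auto",   # shard the 72B weights across available GPUs
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain the TIES merge method in one paragraph."},
]
# Assumes the merged model keeps Qwen2.5's chat template.
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
# Strip the prompt tokens before decoding the reply.
response = tokenizer.decode(
    output_ids[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True
)
print(response)
```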