---
base_model:
- mistralai/Mistral-Large-Instruct-2411
- anthracite-org/magnum-v4-123b
tags:
- exl2
- mistral-large
- writing
---

# Magnum-V4-2411-Merge-exl2-4.5bpw

EXL2 quant of [gghfez/Magnum-V4-2411-Merge](https://huggingface.co/gghfez/Magnum-V4-2411-Merge), a merge of [anthracite-org/magnum-v4-123b](https://huggingface.co/anthracite-org/magnum-v4-123b) with the new [mistralai/Mistral-Large-Instruct-2411](https://huggingface.co/mistralai/Mistral-Large-Instruct-2411) via LoRA extraction.

## Model Details

- **Base Model**: [mistralai/Mistral-Large-Instruct-2411](https://huggingface.co/mistralai/Mistral-Large-Instruct-2411)
- **Influence Model**: [anthracite-org/magnum-v4-123b](https://huggingface.co/anthracite-org/magnum-v4-123b)
- **Method**: A LoRA was extracted from magnum-v4-123b and then applied to Mistral-Large-Instruct-2411

## Prompting

A typical input looks like this:

```
[INST] SYSTEM MESSAGE\nUSER MESSAGE[/INST] ASSISTANT MESSAGE[INST] USER MESSAGE[/INST]
```

## Results

I haven't tested it extensively, but I don't see much difference between this model and [anthracite-org/magnum-v4-123b](https://huggingface.co/anthracite-org/magnum-v4-123b).
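The prompt template above can be assembled by hand if your frontend doesn't ship a Mistral preset. A minimal sketch, assuming the exact spacing shown in the template (the function name `build_prompt` is mine, not part of the model):

```python
def build_prompt(system, turns):
    """Build a Mistral-style prompt string.

    system: optional system message, folded into the first [INST] block.
    turns:  list of (user, assistant) tuples; the final turn's assistant
            entry should be None to leave the prompt open for generation.
    """
    prompt = ""
    for i, (user, assistant) in enumerate(turns):
        if i == 0 and system:
            # System message is prepended to the first user message,
            # separated by a newline, inside the same [INST] block.
            prompt += f"[INST] {system}\n{user}[/INST]"
        else:
            prompt += f"[INST] {user}[/INST]"
        if assistant is not None:
            # Assistant reply follows the closing [/INST] after one space.
            prompt += f" {assistant}"
    return prompt
```

Note that, per the template, subsequent `[INST]` blocks follow the previous assistant message with no separator.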