magnum-v2.5-12b-kto-exl2
Original model: magnum-v2.5-12b-kto
Creator: anthracite-org
Quants
- 4bpw h6 (main)
- 4.5bpw h6
- 5bpw h6
- 6bpw h6
- 8bpw h8
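A given quant can also be fetched programmatically with huggingface_hub. The snippet below is a minimal sketch; "main" holds the 4bpw h6 quant, and the branch names for the other quants are an assumption here, so check the repo's branch list before relying on them.

```python
from huggingface_hub import snapshot_download

# Download one quant. The revision selects the branch holding that bitrate;
# "main" is the 4bpw h6 quant. Other branch names (e.g. "6bpw-h6") are
# assumed and should be verified on the repo page.
snapshot_download(
    repo_id="cgus/magnum-v2.5-12b-kto-exl2",
    revision="main",
    local_dir="magnum-v2.5-12b-kto-exl2",
)
```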
Quantization notes
Made with exllamav2 0.2.2 using its default calibration dataset.
These quants are intended for NVIDIA RTX cards on Windows/Linux or AMD cards on Linux.
Use with Text-Generation-WebUI, TabbyAPI, etc.
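Outside those frontends, the quant can also be loaded directly through the exllamav2 Python API. This is a minimal sketch against the 0.2.x dynamic generator; the model path is assumed to be the local download directory from the snippet above.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

config = ExLlamaV2Config("magnum-v2.5-12b-kto-exl2")  # path to the downloaded quant
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # KV cache, allocated as layers load
model.load_autosplit(cache)               # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Hi there!", max_new_tokens=128))
```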
Original model card
v2.5 KTO is an experimental release; we are testing a hybrid reinforcement learning strategy of KTO + DPOP, using samples from the original model as "rejected" data and data from the original finetuning dataset as "chosen" data. This was done on a limited portion of primarily instruction-following data; we plan to scale up to a larger KTO dataset in the future for better generalization.
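The card does not spell out how the two objectives are combined, so purely to illustrate the ingredients: below is a rough sketch of a KTO loss with a DPOP-style penalty on chosen samples. Everything here (the function name, the lam weight, the additive combination) is an assumption for illustration, not Anthracite's actual training recipe.

```python
import torch

def kto_dpop_loss(policy_logps, ref_logps, is_chosen, kl_ref, beta=0.1, lam=5.0):
    """Illustrative only. policy_logps/ref_logps: completion log-probs under the
    policy and a frozen reference model; is_chosen: True for "chosen" samples,
    False for "rejected"; kl_ref: scalar KL(policy || reference) estimate used
    as the KTO reference point."""
    logratio = policy_logps - ref_logps
    # KTO term: push chosen rewards above the KL reference point, rejected below it
    kto = torch.where(
        is_chosen,
        1 - torch.sigmoid(beta * (logratio - kl_ref)),
        1 - torch.sigmoid(beta * (kl_ref - logratio)),
    )
    # DPOP-style penalty: keep the policy's log-prob of chosen data from
    # falling below the reference model's
    dpop = torch.where(
        is_chosen,
        lam * torch.clamp(ref_logps - policy_logps, min=0.0),
        torch.zeros_like(policy_logps),
    )
    return (kto + dpop).mean()
```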
This is the 5th in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of anthracite-org/magnum-12b-v2.
Prompting
The model has been instruct-tuned with ChatML formatting. A typical input would look like this:
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
Credits
- Stheno dataset (filtered)
- kalomaze/Opus_Instruct_25k
- Nopm/Opus_WritingStruct
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned (a ~16k-row subset)
- kalomaze/Opus_Instruct_3k
This model has been a team effort, and the credit goes to all members of Anthracite.
Safety
...
Base model: mistralai/Mistral-Nemo-Base-2407