Mistral 7B fine-tuned for 0.1 epoch on https://huggingface.co/datasets/Magpie-Align/Magpie-Qwen2.5-Pro-1M-v0.1 using LoRA+ in Unsloth.