---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
## MiquMaid v2 DPO
Check out our blogpost about this model series [Here!](https://ikaridevgit.github.io/index.html?blog=blogid-6&bo=true#Miqu-base) - Join our Discord server [Here!](https://discord.gg/Bb8pRUXy3Z)
[V2-70B - V2-70B-DPO - V2-2x70B - V2-2x70B-DPO]
This model uses the Alpaca **prompting format**.
It was trained for RP conversation on Miqu-70B with our magic sauce, then further trained with DPO for uncensoring.
## Credits:
- Undi
- IkariDev
## Description
This repo contains FP16 files of MiquMaid-v2-70B-DPO.
Switch: [FP16](https://huggingface.co/NeverSleep/MiquMaid-v2-70B-DPO) - [GGUF](https://huggingface.co/NeverSleep/MiquMaid-v2-70B-DPO-GGUF)
## Training data used:
- [Aesir datasets](https://huggingface.co/MinervaAI)
- [NoRobots](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt)
- [limarp](https://huggingface.co/datasets/lemonilia/LimaRP)
- [toxic-dpo-v0.1-sharegpt](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt)
- [ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal)
## DPO training data used:
- [ToxicDPOqa](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicDPOqa)
- [toxic-dpo-v0.1-NoWarning](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-NoWarning)
### Custom format:
```
### Instruction:
{system prompt}
### Input:
{input}
### Response:
{reply}
```
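A minimal sketch of assembling a prompt in the Alpaca-style format above. The helper name and example strings are illustrative, not part of the model card:

```python
def build_prompt(system_prompt: str, user_input: str) -> str:
    """Return a single prompt string in the Alpaca-style format above.

    The model's reply is expected to be generated after the final
    "### Response:" header.
    """
    return (
        "### Instruction:\n"
        f"{system_prompt}\n"
        "### Input:\n"
        f"{user_input}\n"
        "### Response:\n"
    )

prompt = build_prompt(
    "You are a character in a fantasy roleplay.",
    "Describe the tavern we just entered.",
)
print(prompt)
```

The resulting string can be passed directly to most text-generation frontends (or tokenized and fed to the model) as the full prompt.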
## Others
Undi: If you want to support us, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek