more info
is it for RP?
The intent was for RP/RPG use on my laptop when I'm bored on planes, or similar situations. It seems a little brain damaged though -- I'm still working on improving it.
thanks for replying, what prompt format do you recommend?
Suggestion: try the PIPPA dataset or PIPPA-based datasets if you'd like; they have RP and ERP in them.
I considered PIPPA, but wanted to minimize the number of GPT-isms, and limarp seems to include the best of most human datasets.
It uses the ChatML format, although it seems it might not like system prompts, so if you get really bad results by default you might want to try starting with <|im_start|>user instead of system.
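To make the suggestion above concrete, here is a minimal sketch of building a ChatML-style prompt that skips the system turn and opens directly with the user role. The helper name and the sample message are illustrative, not from the model card:

```python
# Sketch: a ChatML prompt without a system turn, per the suggestion above.
# ChatML wraps each turn as <|im_start|>{role}\n...<|im_end|>.
def build_chatml_prompt(user_message: str) -> str:
    return (
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("You are a wandering bard. Describe the tavern.")
print(prompt)
```

If the model still misbehaves, a system turn can be added back by prepending an `<|im_start|>system` block in the same shape.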
Hey, I did a merge here: https://huggingface.co/Aryanne/testo3km
using your models (see test.yml for more info), if you wanna test it. If you find it good, I'll keep it public.
ChatML doesn't seem to work very well on it; the Vicuna format seems to be OK. In the next merge I'll put in less Zephyr and more Echo.
Did another: https://huggingface.co/Aryanne/Astrea-RP-v1-3B, also quantized (in the gguf branch). It has more Echo and seems to be better at RP.
Awesome.
Soon I'm going to be making a version that does rank finetuning (something sorta like DPO or PRO) over preference data from r/WritingPrompts. It will probably also include posts from r/ShortStories, and will use oasst2 as well.
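For readers unfamiliar with the DPO-style objective mentioned above, here is an illustrative sketch (not the author's actual training code) of the core preference loss: it rewards the policy for widening its log-probability margin between a chosen and a rejected completion relative to a frozen reference model. All values and the function name are hypothetical:

```python
import math

def dpo_loss(policy_logp_chosen: float, policy_logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    # Margin of the policy over the reference: how much more the policy
    # prefers the chosen completion than the reference model does.
    margin = (policy_logp_chosen - ref_logp_chosen) \
           - (policy_logp_rejected - ref_logp_rejected)
    # Negative log-sigmoid of the scaled margin; lower is better.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# A positive margin in favor of the chosen completion lowers the loss.
print(dpo_loss(-10.0, -20.0, -12.0, -18.0))
```

In real training this would be computed over batched token log-probabilities from two model forward passes, with `beta` controlling how hard the policy is pushed away from the reference.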