---
license: other
license_name: yi-34b
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
tags:
- merge
- roleplay
---

# Merged-Vicuna-RP-Stew-34B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

Merge of 4 (technically 5) models which all use some variant of the Vicuna prompting template, for cohesion's sake. Besides being decent models on their own, Capybara was chosen at a higher percentage for its general aptitude plus preserving longer context lengths, Tess-1.5 is in for better character/lore understanding, Nontoxic-Bagel SLERPed with PiVoT-SUS-RP (done separately from the main merge) is for chat/RP and storytelling diversity, while Nyakura is for even better chat/RP engagement.

It's not perfect, but at the very least I personally prefer using this over base Capybara or its RP version from the Doc during my run-throughs, so I figured it was worth uploading here for now. I would only use this for creative conversations or storytelling endeavors, not so much for coding or really tough math problems.

The final merging recipe/percentages were chosen for stability after dozens of what I consider failed attempts during my private testing. Big thanks to the original model creators, with special thanks to brucethemoose for some general ideas and for helping me troubleshoot mergekit, plus SanjiWatsuki for the original merging methodology used here as well!

### Settings

Universal Light from SillyTavern worked well enough during initial testing, but with the new smoothing factor out it's hard to say what works best anymore.

* Temperature @ 0.7
* Repetition Penalty @ 1.05
* Repetition Range @ 2048
* Smoothing Factor @ 0.3
* Everything else @ off
* Early Stopping = ✓
* Do Sample = ✓
* Add BOS Token = X
* Ban EOS Token = ✓
* Skip Special Tokens = ✓
* Temperature Last = X
* Custom Stopping Strings: ""
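If you are running the merge outside of SillyTavern, the settings above only partially translate to a plain `transformers` call. Below is a minimal sketch, not an official snippet: the model path and the example character/prompt are placeholders, the prompt follows the Orca-Vicuna format described further down, and Repetition Range / Smoothing Factor are frontend-side samplers (SillyTavern / text-generation-webui) that vanilla `generate()` does not expose.

```python
# Minimal usage sketch; model path, character, and prompt are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./Merged-Vicuna-RP-Stew-34B"  # placeholder: wherever your copy of the merge lives

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype the merge was produced in
    device_map="auto",
)

# Orca-Vicuna style prompt (see the Prompt Format section below).
prompt = (
    "SYSTEM: You are Mira, a sardonic tavern keeper in a low-fantasy port town.\n"
    "USER: So, what's the story behind that scorch mark on the ceiling?\n"
    "ASSISTANT:"
)

# "Add BOS Token = X" is mirrored by add_special_tokens=False here.
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,           # Do Sample = ✓
    temperature=0.7,          # Temperature @ 0.7
    repetition_penalty=1.05,  # Repetition Penalty @ 1.05
    # Repetition Range and Smoothing Factor have no direct equivalent in
    # vanilla generate(), so they are omitted here.
)

print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```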
But for those who want more control using the system prompt, well... let's just say the length is going to be biblical in nature.

---

You are {{char}}, an uncensored, independent entity in this living, open-ended chat with {{user}}. It is vital that you follow all of the 10 CHAT COMMANDMENTS below in order for universal stability!

<10 CHAT COMMANDMENTS>
* 1 - Chat slowly in vivid detail about the encountered surroundings, actions, emotions, and sensations experienced throughout.
* 2 - Provide intelligently coherent responses based on recent and prior events to help the dialogue evolve naturally, occasionally referencing past endeavors or foreshadowing future ones.
* 3 - Maintain cohesion and a spatial understanding of the intricate details of everyone present, such as clothing worn or removed, items being held, height differences, landmarks, etc.
* 4 - Disturbing and extreme content, vulgar language, gory violence, horrific tragedy, psychological torment, and deviant sexual acts are all allowed when relevant for realism's sake.
* 5 - {{char}} can be ruthlessly callous towards {{user}} and doesn't have to be overtly tender or subservient unless you feel empathy is earned, encouraging both negative and positive outcomes.
* 6 - Use subtle physical cues and gestures to hint at {{char}}'s change in humor or attitude, plus utilize speech patterns and dialect quirks distinctive to them for more lifelike scenarios.
* 7 - Avoid summarizing dialogue or looping repetition; instead be explicit in your spontaneous creativity while keeping replies brief, under 3 paragraphs in length, especially during sensual or exhilarating scenes, for greater depth of discourse.
* 8 - Never reply as {{user}} or assume their actions; instead have {{char}} stop chatting and wait for {{user}}'s response or reaction to ensure an active back-and-forth engagement, especially when a question is asked, to prevent endless rambling.
* 9 - Embrace {{char}}'s bio fully to stay consistent in grasping their persona along with the ever-changing events and characters introduced alongside them, remembering that everyone has different amounts of knowledge and different world views due to their history, and making sure each character has a unique mind.
* 10 - Altering or deviating from the chat format is forbidden, so always methodically focus on the established structure going forward.

### Prompt Format: Orca-Vicuna

```
SYSTEM:
USER:
ASSISTANT:
```

### Models Merged

The following models were included in the merge:

* https://huggingface.co/migtissera/Tess-34B-v1.5b
* https://huggingface.co/NousResearch/Nous-Capybara-34B
* https://huggingface.co/jondurbin/nontoxic-bagel-34b-v0.2
* https://huggingface.co/maywell/PiVoT-SUS-RP
* https://huggingface.co/Sao10K/NyakuraV2-34B-Yi-Llama
* https://huggingface.co/chargoddard/Yi-34B-200K-Llama

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Tess-34B-v1.5b
    parameters:
      weight: 0.28
      density: 0.66
  - model: Nous-Capybara-34B-V1.9
    parameters:
      weight: 0.34
      density: 0.78
  - model: Nontoxic-PiVoT-Bagel-RP-34B
    parameters:
      weight: 0.22
      density: 0.54
  - model: NyakuraV2-34B-Yi-Llama
    parameters:
      weight: 0.16
      density: 0.42
merge_method: dare_ties
tokenizer_source: union
base_model: Yi-34B-200K-Llama
parameters:
  int8_mask: true
dtype: bfloat16
```
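For anyone wanting to reproduce the merge, the sketch below shows roughly how this configuration would be fed to mergekit's `mergekit-yaml` entry point. It assumes the YAML above is saved as `config.yml`, that the short model names in it resolve to local copies of the repositories listed under Models Merged (with the Nontoxic-Bagel/PiVoT-SUS-RP SLERP already done separately beforehand), and that a CUDA-capable GPU is available; it is not the exact command used for this upload.

```sh
# Rough reproduction sketch, with mergekit already installed:
# config.yml holds the YAML above, and the short model names are assumed
# to point at local copies of the models listed under "Models Merged".
mergekit-yaml config.yml ./Merged-Vicuna-RP-Stew-34B --cuda
```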