
Ah yes, Vulca's more mature and rebellious older brother: Uchtave. While testing Vulca, I noticed it had a slight positivity bias. Not a bad thing, but I wanted to try to flip that bias around while using the same formula, with LoRAs this time around.

Compiled a few LoRAs of my own from existing models for this merge. Had fun fooling around with mergekit (and fighting llama.cpp to let me quantize), and this didn't take nearly as long as Vulca in terms of testing and troubleshooting. I didn't have as many people stress test this model, so there might be things I haven't discovered yet, bugs or features. So if anyone has feedback, that'd be much appreciated.

Quants

All available quants will be under the model tree next to 'Quantizations' on the right-hand side of your screen.

Details & Recommended Settings

(Still Testing; details subject to change)

Another storytelling-forward roleplay model, whoop. As said above, Uchtave is skewed towards darker themes and is able to handle darker material better than its brethren. It's not nearly as dark as some of its components, but rather a happy medium, so it can do bittersweet moments.

More unique vocabulary is present than in its peers, and the overall writing feels more sophisticated unprompted; prompted outputs can be taken in wider directions. Has an affinity for spicy scenes and graphic depictions in general, so you've been warned.

It can run away with a scene if the temp is too high (curse you, Blackroot). It seems to have mildly selective adherence to instructs; I don't know how that happened, but it's not too bad. Like all of my models, choose your words (instructions) carefully.

Rec. Settings:

Template: L3
Temperature: 1
Min P: 0.1
Repeat Penalty: 1.05
Repeat Penalty Tokens: 256
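
To make the Temperature and Min P settings above concrete, here's a minimal pure-Python sketch of what those two samplers do to raw logits. This is illustrative only, not this model's (or any backend's) actual inference code, and the logit values are made up:

```python
import math

def sample_filter(logits, temperature=1.0, min_p=0.1):
    """Apply temperature scaling, then Min P filtering.

    Min P keeps only tokens whose probability is at least
    min_p * (probability of the most likely token)."""
    scaled = [l / temperature for l in logits]
    # Softmax over the temperature-scaled logits.
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Cutoff scales with the top token's probability.
    cutoff = min_p * max(probs)
    kept = {i: p for i, p in enumerate(probs) if p >= cutoff}
    # Renormalize the surviving tokens.
    z = sum(kept.values())
    return {i: p / z for i, p in kept.items()}

# Toy vocabulary of four tokens; the two weakest get filtered out.
filtered = sample_filter([2.0, 1.0, -1.0, -3.0], temperature=1.0, min_p=0.1)
```

Raising the temperature flattens the distribution, so more tokens survive the cutoff and the model rambles more, which is why temp 1 with Min P 0.1 is the recommended middle ground here.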

Merge Theory

Same formula as Vulca, but a few swaps were made plus LoRA applications. I made two LoRAs from other models; you can check their repos out for details. BlueSerp was for writing style, and AblDWP was for content and writing style.

'Sinful' was retooled with darker models and more intelligence, slapping personality-heavy LoRAs on it to influence overall style. 'Trials' includes Fantasy Writer instead of Badger Writer for a wider pool of knowledge for non-human RP. 'Tainted' is practically the same, but the Blackroot RP LoRA added more human interactions into the overall model.

The final gradient merge is done with DELLA TIES instead of DARE Linear. Originally, Breadcrumbs TIES was my go-to for the final merge, but it backfired on me one too many times. DARE Linear worked on Vulca, but it was more 'cracked' than I liked, too volatile per se. TIES helped meld things together, and combining that with DELLA and how it reduces interference keeps hallucinations at a minimum.
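
For anyone new to mergekit gradients: a two-element weight like `[0.2, 0.8]` isn't two options, it's interpolated across the model's layers, so 'tainted' dominates the later layers while 'trials' dominates the earlier ones. A rough sketch of that interpolation (assumed to be linear over Llama's 32 layers; `gradient_weights` is a hypothetical helper, not mergekit's actual code):

```python
def gradient_weights(endpoints, n_layers=32):
    """Linearly interpolate a mergekit-style weight gradient
    (e.g. [0.2, 0.8]) across n_layers layers."""
    start, end = endpoints
    if n_layers == 1:
        return [start]
    step = (end - start) / (n_layers - 1)
    return [start + i * step for i in range(n_layers)]

# 'tainted' ramps 0.2 -> 0.8 while 'trials' ramps 0.8 -> 0.2,
# so the two contributions sum to 1.0 at every layer.
tainted_w = gradient_weights([0.2, 0.8])
trials_w = gradient_weights([0.8, 0.2])
```

With `normalize: false`, that per-layer sum of 1.0 comes from the mirrored gradients themselves rather than from mergekit renormalizing the weights.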

Config

models:
  - model: Locutusque/Hercules-6.0-Llama-3.1-8B+kromcomp/L3-BlueSerp-LoRA
  - model: maldv/llama-3-fantasy-writer-8b
  - model: ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0
base_model: arcee-ai/Llama-3.1-SuperNova-Lite
parameters:
  int8_mask: true
merge_method: model_stock
dtype: float32
tokenizer_source: base
name: trials
---
models:
  - model: Sao10K/L3-8B-Tamamo-v1+kromcomp/AblDWP-LoRA
  - model: jeiku/Aura_Revived_Base+Blackroot/Llama-3-8B-Abomination-LORA
  - model: SicariusSicariiStuff/Dusk_Rainbow
base_model: v000000/L3-Umbral-Storm-8B-t0.0001
parameters:
  int8_mask: true
merge_method: model_stock
dtype: float32
tokenizer_source: base
name: sinful
---
models:
  - model: sinful
  - model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1+Blackroot/Llama3-RP-Lora
  - model: maximalists/BRAG-Llama-3.1-8b-v0.1
base_model: tohur/natsumura-storytelling-rp-1.0-llama-3.1-8b
parameters:
  int8_mask: true
merge_method: model_stock
dtype: float32
tokenizer_source: base
name: tainted
---
models: 
  - model: tainted
    parameters:
      weight: [0.2, 0.8]
      density: 0.8
      epsilon: 0.1
  - model: trials
    parameters:
      weight: [0.8, 0.2]
      density: 0.8
      epsilon: 0.1
base_model: trials
tokenizer_source: tohur/natsumura-storytelling-rp-1.0-llama-3.1-8b
parameters:
  normalize: false
  int8_mask: true
merge_method: della
dtype: float32
name: uchtave
Model tree for kromeurus/L3.1-Clouded-Uchtave-v0.1-8B

Collection including kromeurus/L3.1-Clouded-Uchtave-v0.1-8B