
Schisandra

Many thanks to the authors of the models used!

RPMax v1.1 | Pantheon-RP | Cydonia v1.2 | Magnum V4 | ChatWaifu v2.0 | SorcererLM | Acolyte | NovusKyver


Overview

Main uses: RP, Storywriting

A merge of 8 Mistral Small finetunes in total, which were then merged back into the original model to make it less stupid. Worked somehow? Definitely smarter than my previous MS merge and maybe some finetunes. It adheres strongly to the writing style of the previous output, so you'll need either a good character card or an existing chat to get better replies.


Quants

Static

Imatrix


Settings

Prompt format: Mistral-V3 Tekken

Samplers: These or These
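For reference, a minimal sketch of how a Mistral V3-Tekken prompt is laid out. The exact spacing is an assumption (V3-Tekken is generally described as putting no spaces around the `[INST]` tags, unlike older Mistral formats); prefer your frontend's built-in preset if it has one:

```python
def build_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """Assemble a Mistral V3-Tekken-style prompt (assumed layout).

    turns is a list of (user, assistant) pairs; leave the last
    assistant reply empty to prompt the model for a completion.
    """
    prompt = "<s>"
    first = True
    for user, assistant in turns:
        # The system prompt is conventionally folded into the first user turn.
        content = f"{system}\n\n{user}" if first and system else user
        first = False
        prompt += f"[INST]{content}[/INST]{assistant}"
        if assistant:
            prompt += "</s>"
    return prompt

print(build_prompt("You are Schisandra.", [("Hi!", "")]))
```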


Merge Details

Merging steps

QCmix

```yaml
base_model: InferenceIllusionist/SorcererLM-22B
parameters:
  int8_mask: true
  rescale: true
  normalize: false
dtype: bfloat16
tokenizer_source: base
merge_method: della
models:
  - model: Envoid/Mistral-Small-NovusKyver
    parameters:
      density: [0.35, 0.65, 0.5, 0.65, 0.35]
      epsilon: [0.1, 0.1, 0.25, 0.1, 0.1]
      lambda: 0.85
      weight: [-0.01891, 0.01554, -0.01325, 0.01791, -0.01458]
  - model: rAIfle/Acolyte-22B
    parameters:
      density: [0.6, 0.4, 0.5, 0.4, 0.6]
      epsilon: [0.15, 0.15, 0.25, 0.15, 0.15]
      lambda: 0.85
      weight: [0.01768, -0.01675, 0.01285, -0.01696, 0.01421]
```
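The bracketed lists above are layer gradients: mergekit interpolates the anchor values linearly across the model's layers, so a density of [0.35, 0.65, 0.5, 0.65, 0.35] starts at 0.35 in the first layer, rises to 0.65 a quarter of the way in, dips to 0.5 at the middle, and returns to 0.35 at the end. A minimal sketch of that interpolation (pure Python; mergekit's own implementation differs in details):

```python
def expand_gradient(anchors: list[float], num_layers: int) -> list[float]:
    """Linearly interpolate anchor values across layer indices 0..num_layers-1."""
    if num_layers == 1:
        return [anchors[0]]
    out = []
    for layer in range(num_layers):
        # Map this layer to a fractional position along the anchor list.
        pos = layer / (num_layers - 1) * (len(anchors) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(anchors) - 1)
        frac = pos - lo
        out.append(anchors[lo] * (1 - frac) + anchors[hi] * frac)
    return out

density = expand_gradient([0.35, 0.65, 0.5, 0.65, 0.35], 9)
```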

Schisandra-vA

```yaml
merge_method: della_linear
dtype: bfloat16
parameters:
  normalize: true
  int8_mask: true
tokenizer_source: union
base_model: TheDrummer/Cydonia-22B-v1.2
models:
    - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
      parameters:
        density: 0.55
        weight: 1
    - model: Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
      parameters:
        density: 0.55
        weight: 1
    - model: spow12/ChatWaifu_v2.0_22B
      parameters:
        density: 0.55
        weight: 1
    - model: anthracite-org/magnum-v4-22b
      parameters:
        density: 0.55
        weight: 1
    - model: QCmix
      parameters:
        density: 0.55
        weight: 1
```
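With normalize: true and equal weights, della_linear effectively averages the retained task vectors (each finetune's delta from the base) and adds the result back onto Cydonia. A toy sketch of that arithmetic on single parameter values (the real merge also applies della's magnitude-based pruning at density 0.55, omitted here):

```python
def della_linear_merge(base: float, finetuned: list[float],
                       weights: list[float]) -> float:
    """Weighted, normalized average of task vectors (finetune minus base),
    added back onto the base parameter. Pruning/rescaling omitted for brevity."""
    deltas = [f - base for f in finetuned]
    total = sum(weights)
    avg_delta = sum(w * d for w, d in zip(weights, deltas)) / total
    return base + avg_delta

# Five finetunes with equal weight 1, as in the Schisandra-vA config above:
merged = della_linear_merge(0.5, [0.6, 0.4, 0.7, 0.5, 0.55], [1, 1, 1, 1, 1])
```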

Schisandra

```yaml
dtype: bfloat16
tokenizer_source: base
merge_method: della_linear
parameters:
  density: 0.5
base_model: Schisandra
models:
  - model: unsloth/Mistral-Small-Instruct-2409
    parameters:
      weight:
        - filter: v_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: o_proj
          value: [1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1]
        - filter: up_proj
          value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
        - filter: gate_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: down_proj
          value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        - value: 0
  - model: Schisandra
    parameters:
      weight:
        - filter: v_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: o_proj
          value: [0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0]
        - filter: up_proj
          value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        - filter: gate_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: down_proj
          value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
        - value: 1
```

Model tree for async0x42/MS-Schisandra-22B-v0.1-exl2_4.5bpw