---
base_model:
  - ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1
  - NeverSleep/Lumimaid-v0.2-12B
  - nbeerbower/Lyra-Gutenberg-mistral-nemo-12B
  - v000000/NM-12B-Lyris-dev-3
  - elinas/Chronos-Gold-12B-1.0
library_name: transformers
tags:
  - mergekit
  - merge
---

# Siskin 0.2

Heh-heh, siskin

The v0.1 version's writing is more "human"; check it out here

## Overview

A somewhat experimental merge of several Nemo models. It consists mostly of models trained on human data, so there should be very few GPT-isms/Claude-isms. The models work at around 28k context and could maybe be stretched further.

Note: both v0.1 and v0.2 can slip into first person regardless of the initial message. If you don't like that, specify the narrative style somewhere in the prompt.
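
For reference, a minimal loading sketch with transformers. The repo id `Nohobby/MN-12B-Siskin-v0.2` is an assumption based on this card's title, so substitute the actual path if it differs:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Nohobby/MN-12B-Siskin-v0.2"  # assumed repo id; adjust if needed

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
    device_map="auto",
)
```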

## Prompt template: Mistral

```
<s>[INST] {input} [/INST] {output}</s>
```
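
If your frontend doesn't ship a Mistral preset, here is a small sketch of building a prompt in this format by hand. The turn structure follows the template above; BOS/EOS handling varies between Mistral-family templates, so treat this as an approximation:

```python
def build_mistral_prompt(history, user_message):
    """Format chat turns as <s>[INST] input [/INST] output</s>, per the template above."""
    prompt = ""
    for user_turn, assistant_turn in history:
        prompt += f"<s>[INST] {user_turn} [/INST] {assistant_turn}</s>"
    # Open the final turn and leave it for the model to complete
    prompt += f"<s>[INST] {user_message} [/INST]"
    return prompt

print(build_mistral_prompt([("Hi!", "Hello there.")], "Tell me about siskins."))
```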

## Quants

- Static
- Imatrix

## Merge Details

### Merge Method

This model was merged using the della_linear merge method, with ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1 as the base.
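
Roughly, della_linear drops task-vector deltas stochastically, with keep probabilities spread around each model's `density` according to delta magnitude, rescales the survivors, and takes a weighted linear sum (without the sign election step of regular DELLA/TIES). Here is a toy 1-D sketch of that idea, not mergekit's actual implementation:

```python
import numpy as np

def della_linear_toy(base, finetunes, weights, densities, epsilon, lam, seed=0):
    """Toy 1-D illustration of della_linear-style merging; not mergekit's code."""
    rng = np.random.default_rng(seed)
    merged_delta = np.zeros_like(base)
    for ft, w, density in zip(finetunes, weights, densities):
        delta = ft - base  # task vector relative to the base model
        # Rank parameters by |delta|: larger deltas get keep probabilities
        # up to density + epsilon, smaller ones down to density - epsilon.
        ranks = np.argsort(np.argsort(np.abs(delta)))
        keep_p = density + epsilon * (2 * ranks / max(len(delta) - 1, 1) - 1)
        kept = rng.random(delta.shape) < keep_p
        # Rescale survivors to preserve the expected delta, then weight.
        merged_delta += w * np.where(kept, delta / keep_p, 0.0)
    return base + lam * merged_delta
```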

### Models Merged

The following models were included in the merge:

- NeverSleep/Lumimaid-v0.2-12B
- nbeerbower/Lyra-Gutenberg-mistral-nemo-12B
- v000000/NM-12B-Lyris-dev-3
- elinas/Chronos-Gold-12B-1.0

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1
    parameters:
      weight: [0.2, 0.3, 0.2, 0.3, 0.2]
      density: [0.45, 0.55, 0.45, 0.55, 0.45]
  - model: NeverSleep/Lumimaid-v0.2-12B
    parameters:
      weight: [0.165, 0.295, 0.295, 0.165, 0.165, 0.295, 0.295, 0.165]
      density: [0.5]
  - model: elinas/Chronos-Gold-12B-1.0
    parameters:
      weight: [0.01768, -0.01675, 0.01285, -0.01696, 0.01421]
      density: [0.6, 0.4, 0.5, 0.4, 0.6]
  - model: v000000/NM-12B-Lyris-dev-3
    parameters:
      weight: [0.208, 0.139, 0.139, 0.139, 0.208]
      density: [0.7]
  - model: nbeerbower/Lyra-Gutenberg-mistral-nemo-12B
    parameters:
      weight: [0.33]
      density: [0.45, 0.55, 0.45, 0.55, 0.45]
merge_method: della_linear
base_model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1
parameters:
  epsilon: 0.04
  lambda: 1.05
  int8_mask: true
  rescale: true
  normalize: false
dtype: bfloat16
tokenizer_source: base
```
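
Note that list-valued `weight`/`density` entries are mergekit gradients, interpolated across the layer stack, so e.g. `[0.2, 0.3, 0.2, 0.3, 0.2]` oscillates the weight from the front layers to the back. To reproduce the merge, you could save the YAML above and run it through mergekit. Below is a sketch using mergekit's Python API (the `mergekit-yaml` CLI is equivalent); the file paths are hypothetical, and option names may vary across mergekit versions:

```python
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("siskin-v0.2.yml", "r", encoding="utf-8") as fp:  # hypothetical path
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    out_path="./MN-12B-Siskin-v0.2",  # hypothetical output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,  # copy the base tokenizer into the output
        lazy_unpickle=True,   # lower peak memory while loading checkpoints
    ),
)
```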