
Attempt number three (well, five) at fixing the overly chatty, flowery language of v0.2. An updated version is available here.

## Quants

A few GGUFs by me.

## Details & Recommended Settings

Middling; it has the same properties as v0.1 but suffers at long context.

Recommended settings:

- Template: Llama 3
- Temperature: 1.3
- Min-P: 0.1
- Repeat penalty: 1.05
- Repeat penalty tokens: 256
- Dynamic temperature: 0.9–1.05 at 0.1
- Smooth sampling: 0.18
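The dynamic-temperature range above is given as a low–high span, while some backends (e.g. llama.cpp) express it as a base temperature plus a symmetric ± range. A minimal sketch of the conversion, assuming that midpoint-plus-range convention; the helper and dictionary names are illustrative, not part of any API:

```python
# Sketch: collect the card's sampler settings and convert the
# dynamic-temperature span (0.9–1.05) into a base temperature plus
# symmetric range, as llama.cpp-style backends expect. Names are mine.

def dynatemp_to_range(low: float, high: float) -> dict:
    """Convert a [low, high] dynamic-temperature span into a base
    temperature and a symmetric +/- range around it."""
    base = (low + high) / 2
    return {"temp": round(base, 4), "dynatemp_range": round(high - base, 4)}

settings = {
    "min_p": 0.1,
    "repeat_penalty": 1.05,
    "repeat_last_n": 256,
    **dynatemp_to_range(0.9, 1.05),  # -> temp 0.975, dynatemp_range 0.075
}
print(settings["temp"], settings["dynatemp_range"])  # 0.975 0.075
```

Smooth sampling (0.18) has no llama.cpp flag as far as I know; set it in a frontend that supports a smoothing factor.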

## Merge Theory

Can't be arsed right now.

```yaml
slices:
- sources:
  - layer_range: [0, 16]
    model: ArliAI/ArliAI-Llama-3-8B-Formax-v1.0
- sources:
  - layer_range: [16, 32]
    model: gradientai/Llama-3-8B-Instruct-Gradient-1048k
parameters:
  int8_mask: true
merge_method: passthrough
dtype: float32
out_dtype: bfloat16
name: formax.ext
---
models:
  - model: formax.ext
    parameters:
      weight: 1
base_model: ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0
parameters:
  normalize: false
  int8_mask: true
merge_method: dare_linear
dtype: float32
out_dtype: bfloat16
tokenizer_source: base
name: formaxext.3.1
---
models:
  - model: Sao10K/L3-8B-Niitama-v1
    parameters:
      weight: 0.6
  - model: Sao10K/L3-8B-Stheno-v3.3-32K
    parameters:
      weight: 0.5
base_model: tohur/natsumura-storytelling-rp-1.0-llama-3.1-8b
parameters:
  normalize: false
  int8_mask: true
merge_method: dare_linear
dtype: float32
out_dtype: bfloat16
tokenizer_source: base
name: siith.3.1
---
models:
  - model: Sao10K/L3-8B-Tamamo-v1
  - model: siith.3.1
base_model: vicgalle/Roleplay-Hermes-3-Llama-3.1-8B
parameters:
  normalize: false
  int8_mask: true
merge_method: model_stock
dtype: float32
out_dtype: bfloat16
name: siithamol3.1
---
models:
  - model: siithamol3.1
    parameters:
      weight: [0.5, 0.8, 0.9, 1]
      density: 0.9
      gamma: 0.01
  - model: formaxext.3.1
    parameters:
      weight: [0.5, 0.2, 0.1, 0]
      density: 0.9
      gamma: 0.01
base_model: siithamol3.1
parameters:
  normalize: false
  int8_mask: true
merge_method: breadcrumbs
dtype: float32
out_dtype: bfloat16
name: siithamov3
```
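A multi-document config like the one above, where each stage is given a `name:` and referenced by later stages, can be built with mergekit. A sketch, assuming a recent mergekit install and a config saved as `config.yaml` (the output directory name is a placeholder):

```shell
# Sketch: install mergekit and run the staged merge. mergekit-mega
# processes multi-document YAML, building each named intermediate
# merge before the final one. Drop --cuda to merge on CPU.
pip install mergekit
mergekit-mega config.yaml ./siithamov3 --cuda
```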
Model tree for kromvault/L3.1-Siithamo-v0.3-8B