
stablerp

This model, published as kromcomp/L3-Ceto-Epith-StableRP-v0.1-8B, is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the linear DARE (dare_linear) merge method, with Undi95/Meta-Llama-3.1-8B-Claude as the base model.
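
For intuition: DARE (Drop And REscale) treats each contributing model as a task vector (its parameter delta from the base), randomly drops a fraction of each delta's entries, rescales the survivors to preserve the expected magnitude, and adds the weighted sum of the sparsified deltas back onto the base weights. Below is a minimal Python sketch of the idea, not mergekit's actual implementation; the density value is illustrative, since this configuration does not set one.

import torch

def dare_linear(base, deltas, weights, density=0.9):
    # Sketch of DARE-linear: drop-and-rescale each task vector,
    # then add the weighted sum back onto the base tensor.
    merged = base.clone()
    for delta, w in zip(deltas, weights):
        mask = torch.bernoulli(torch.full_like(delta, density))  # keep ~density of entries
        merged += w * (delta * mask / density)                   # rescale survivors, weight, accumulate
    return merged

# Toy usage with made-up tensors:
base = torch.zeros(4)
deltas = [torch.ones(4), torch.full((4,), 2.0)]
print(dare_linear(base, deltas, weights=[0.5, 0.25]))

With normalize: 0.0 in the configuration below, the per-model weights are applied as given rather than being renormalized to sum to 1.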

Models Merged

The following models were included in the merge:

  • merge/stablerp.c
  • merge/stablerp.a
  • merge/stablerp.b

Configuration

The following YAML configuration was used to produce this model:

base_model: Undi95/Meta-Llama-3.1-8B-Claude
dtype: float32
merge_method: dare_linear
out_dtype: bfloat16
parameters:
  int8_mask: 1.0
  normalize: 0.0
slices:
- sources:
  - layer_range: [0, 32]
    model: merge/stablerp.b
    parameters:
      weight: [1.0, 0.2]
  - layer_range: [0, 32]
    model: merge/stablerp.a
    parameters:
      weight: [0.1, 0.5]
  - layer_range: [0, 32]
    model: merge/stablerp.c
    parameters:
      weight: [0.1, 0.5]
  - layer_range: [0, 32]
    model: Undi95/Meta-Llama-3.1-8B-Claude
tokenizer_source: base
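
A per-model weight written as a list, such as [1.0, 0.2], is a gradient: mergekit interpolates the values across the layer range, so merge/stablerp.b contributes strongly in the early layers and tapers off toward the top, while stablerp.a and stablerp.c ramp up from 0.1 to 0.5. A small sketch of that interpolation as I understand it (illustrative, not mergekit's code):

def gradient_weights(endpoints, num_layers=32):
    # Linearly interpolate a two-point weight gradient across the layer range.
    start, end = endpoints
    return [start + (end - start) * i / (num_layers - 1) for i in range(num_layers)]

w = gradient_weights([1.0, 0.2])
print(round(w[0], 3), round(w[16], 3), round(w[31], 3))  # 1.0 0.587 0.2

The merge itself can be reproduced by saving the YAML above to a file and running mergekit's mergekit-yaml entry point, e.g. mergekit-yaml stablerp.yml ./output-model (file and output paths are illustrative).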