This model is intended for role-playing and storywriting purposes.
This is a merge of pre-trained language models created using mergekit.
This model was merged using the SLERP (spherical linear interpolation) merge method, with localfultonextractor/Erosumika-7B-v2 as the base model.
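For intuition, the sketch below shows spherical linear interpolation between two flattened weight tensors. It is illustrative only, not mergekit's actual implementation, which works tensor by tensor and applies the per-filter `t` schedule from the configuration further down.

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors (illustrative)."""
    # Angle between the two (normalized) weight vectors.
    v0_n = v0 / (np.linalg.norm(v0) + eps)
    v1_n = v1 / (np.linalg.norm(v1) + eps)
    omega = np.arccos(np.clip(np.dot(v0_n, v1_n), -1.0, 1.0))
    if np.sin(omega) < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1.0 - t) * v0 + t * v1
    # Interpolate along the great circle; t = 0.5 blends both models equally.
    return (np.sin((1.0 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)
```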
The following models were included in the merge:

* NeverSleep/Noromaid-7B-0.4-DPO
* localfultonextractor/Erosumika-7B-v2
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: NeverSleep/Noromaid-7B-0.4-DPO
        layer_range: [0, 32]
      - model: localfultonextractor/Erosumika-7B-v2
        layer_range: [0, 32]
merge_method: slerp
base_model: localfultonextractor/Erosumika-7B-v2
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.5, 0.5, 0.5, 0.5]
    - filter: mlp
      value: [0.5, 0.5, 0.5, 0.5, 0.5]
    - value: 0.5
dtype: bfloat16
```
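With every `t` value set to 0.5, the attention, MLP, and remaining weights end up as an even blend of the two source models. Below is a minimal sketch of loading and prompting the merged model with transformers; the repository id is a placeholder, so substitute the actual model id.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository id; replace with the merged model's actual Hugging Face id.
model_id = "your-namespace/your-7b-merge"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge configuration
    device_map="auto",
)

prompt = "Write the opening scene of a cozy fantasy story."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```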