# maid-yuzu-v7
This is a merge of pre-trained language models created using mergekit.
I don't know much about merging, so this may be a naive method, but I was curious to see how the models would turn out if I took this approach.
## Merge Details

### Merge Method
This model was merged using the SLERP merge method.
This model was built in two steps: the Orochi model was first merged with the dolphin model using SLERP with t = 0.15, and the resulting model was then merged with the BagelMIsteryTour model using SLERP with t = 0.2.
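SLERP (spherical linear interpolation) blends two weight tensors along the arc between them rather than along a straight line, so the interpolated weights keep roughly the same norm as the originals. A minimal pure-Python sketch of the idea follows; this is an illustration only, not mergekit's actual implementation, which operates on whole PyTorch tensors per layer.

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherically interpolate between vectors v0 and v1 at fraction t."""
    # Cosine of the angle between the two vectors.
    dot = sum(a * b for a, b in zip(v0, v1))
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    cos_omega = max(-1.0, min(1.0, dot / (n0 * n1)))
    # Nearly parallel vectors: fall back to plain linear interpolation.
    if 1.0 - abs(cos_omega) < eps:
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    omega = math.acos(cos_omega)
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# With t = 0.2 the result stays close to the first (base) model's weights.
print(slerp(0.2, [1.0, 0.0], [0.0, 1.0]))  # ≈ [0.951, 0.309]
```

A small t such as 0.15 or 0.2 therefore biases each merged layer heavily toward the base model, pulling in only a fraction of the second model's direction.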
### Models Merged
The following models were included in the merge:
- ycros/BagelMIsteryTour-v2-8x7B
- ../maid-yuzu-v7-base
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
  model:
    path: ../maid-yuzu-v7-base
dtype: bfloat16
merge_method: slerp
parameters:
  t:
  - value: 0.2
slices:
- sources:
  - layer_range: [0, 32]
    model:
      model:
        path: ../maid-yuzu-v7-base
  - layer_range: [0, 32]
    model:
      model:
        path: ycros/BagelMIsteryTour-v2-8x7B
```