
VerB-Etheria-55b


An attempt at a functional Goliath-style merge: [Etheria], a 55b-200k model built from two Yi-34B-200K models. This is Version B (VerB), a double-model passthrough merge with a 50/50 split between two high-performing models.

Roadmap:

Depending on quality, I might make the other version private, then generate a sacrificial 55b and perform a 55b DARE-TIES or SLERP merge.

1: If the dual-model merge performs well, I will create a direct inverse of the config and then merge.

2: If the single-model version performs well, I will generate a 55b of the most performant model, then perform either a SLERP or DARE-TIES merge.

3: If both models perform well, I will complete both 1 and 2, then change the naming scheme to match each of the new models.
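For reference, a SLERP follow-up merge of the kind outlined in the roadmap might look like the sketch below. This is hypothetical: the sibling model name (`VerA-Etheria-55b`), the layer count, and the interpolation parameter `t` are placeholders, not a finalized config.

```yaml
# Hypothetical SLERP follow-up merge (sketch only).
# "VerA-Etheria-55b", the layer range, and t are illustrative placeholders.
merge_method: slerp
base_model: SteelStorage/VerB-Etheria-55b
slices:
  - sources:
      - model: SteelStorage/VerB-Etheria-55b
        layer_range: [0, 98]
      - model: SteelStorage/VerA-Etheria-55b
        layer_range: [0, 98]
parameters:
  t: 0.5  # equal interpolation between the two versions
dtype: bfloat16
```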

Configuration

The following YAML configuration was used to produce this model:


```yaml
dtype: bfloat16
slices:
- sources:
    - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
      layer_range: [0, 14]
- sources:
    - model: one-man-army/UNA-34Beagles-32K-bf16-v1
      layer_range: [7, 21]
- sources:
    - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
      layer_range: [15, 29]
- sources:
    - model: one-man-army/UNA-34Beagles-32K-bf16-v1
      layer_range: [22, 36]
- sources:
    - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
      layer_range: [30, 44]
- sources:
    - model: one-man-army/UNA-34Beagles-32K-bf16-v1
      layer_range: [37, 51]
- sources:
    - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
      layer_range: [45, 59]
merge_method: passthrough
```
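The passthrough stack interleaves slices of the two donor models. Assuming mergekit's half-open `layer_range` convention (i.e. `[0, 14]` selects layers 0 through 13), the merged depth can be checked with a quick script:

```python
# Layer ranges from the merge config above, in stack order.
# Assumes the half-open convention: [start, end) selects
# layers start .. end-1 from the donor model.
slices = [
    ("megamerge-v8", 0, 14),
    ("UNA-34Beagles", 7, 21),
    ("megamerge-v8", 15, 29),
    ("UNA-34Beagles", 22, 36),
    ("megamerge-v8", 30, 44),
    ("UNA-34Beagles", 37, 51),
    ("megamerge-v8", 45, 59),
]

total_layers = sum(end - start for _, start, end in slices)
print(total_layers)  # 98 layers in the merged stack
```

Stacking 98 layers of a ~60-layer Yi-34B architecture is what pushes the parameter count up to roughly 55B.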

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
| --------------------------------- | ----- |
| Avg.                              | 63.83 |
| AI2 Reasoning Challenge (25-Shot) | 65.96 |
| HellaSwag (10-Shot)               | 81.48 |
| MMLU (5-Shot)                     | 73.78 |
| TruthfulQA (0-shot)               | 57.52 |
| Winogrande (5-shot)               | 75.45 |
| GSM8k (5-shot)                    | 28.81 |
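As a sanity check, the reported average is the plain mean of the six benchmark scores:

```python
# Open LLM Leaderboard scores from the table above.
scores = {
    "ARC (25-shot)": 65.96,
    "HellaSwag (10-shot)": 81.48,
    "MMLU (5-shot)": 73.78,
    "TruthfulQA (0-shot)": 57.52,
    "Winogrande (5-shot)": 75.45,
    "GSM8k (5-shot)": 28.81,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 63.83, matching the reported Avg.
```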
Model size: 55.6B params · Tensor type: BF16 (Safetensors)
