ExLlamaV2 quantization of ChaoticNeutrals/Bepis_9B

All credits go to ChaoticNeutrals.

Original model information:

Bepis

A new 9B model from jeiku. This one is smart, proficient at markdown, knows when to stop talking, and is quite soulful. The merge was an equal three-way split between https://huggingface.co/ChaoticNeutrals/Prodigy_7B, https://huggingface.co/Test157t/Prima-LelantaclesV6-7b, and https://huggingface.co/cgato/Thespis-CurtainCall-7b-v0.2.1.

If there's any 7B to 11B merge or finetune you'd like to see, feel free to leave a message.

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: primathespis
        layer_range: [0, 20]
  - sources:
      - model: prodigalthespis
        layer_range: [12, 32]
merge_method: passthrough
dtype: float16
```
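A quick sanity check on the size: each Mistral-7B-class donor has 32 transformer layers, and the passthrough merge stacks 20 layers from one donor on top of 20 from the other, for 40 layers total, which is roughly where the ~9B parameter count comes from. A minimal sketch, assuming mergekit's `layer_range: [a, b]` is end-exclusive (layers `a` through `b-1`):

```python
# Rough sanity check of the passthrough merge config above.
# Assumption: layer_range [a, b] is end-exclusive (covers layers a..b-1).
slices = [
    ("primathespis", (0, 20)),      # layers 0-19 of the first donor
    ("prodigalthespis", (12, 32)),  # layers 12-31 of the second donor
]

total_layers = sum(end - start for _, (start, end) in slices)
print(total_layers)  # 40 layers, vs. 32 in a stock Mistral-7B

# The ranges overlap: layer indices 12-19 appear once from each donor.
# Duplicating layers like this is how a passthrough "frankenmerge"
# grows the parameter count beyond either donor's size.
overlap = max(0, min(20, 32) - max(0, 12))
print(overlap)  # 8 duplicated layer indices
```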

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                             | Value |
|------------------------------------|-------|
| Avg.                               | 62.40 |
| AI2 Reasoning Challenge (25-Shot)  | 62.54 |
| HellaSwag (10-Shot)                | 80.12 |
| MMLU (5-Shot)                      | 62.84 |
| TruthfulQA (0-shot)                | 53.30 |
| Winogrande (5-shot)                | 76.48 |
| GSM8k (5-shot)                     | 39.12 |