
Helion-4x34B

This is the model card for Helion-4x34B. I used this repo to make this MoE (mixture of experts) model.

Prompt Template(s):

Since bagel-dpo-34b-v0.2 uses many prompt templates, you can use the prompt templates provided by bagel as well as the other experts' prompt templates.

Note: I currently do not know which prompt template is best.

ChatML:

<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
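
A minimal sketch of using the ChatML template with transformers (this assumes a machine with enough GPU memory for the bf16 weights, and builds the prompt by hand rather than relying on a tokenizer chat template):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/Helion-4x34B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build the ChatML prompt by hand from the template above.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is photosynthesis?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))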

Human Assistant

Human: {user}

### Assistant: {assistant}

Alpaca (sort of)

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{system}
{instruction}

### Response:

Vicuna

{system}
USER: {instruction}
ASSISTANT: 

Visit bagel-dpo-34b-v0.2 to try more prompt templates.

YAML config to reproduce

base_model: nontoxic-bagel-34b-v0.2
gate_mode: hidden
dtype: bfloat16

experts:
  - source_model: bagel-dpo-34b-v0.2
    positive_prompts: ["question answering", "Q:", science", "biology", "chemistry", "physics"]
    negative_prompts: ["math", "reason", "mathematics", "solve", "count", "code", "python", "javascript", "programming", "algorithm"]

  - source_model: Nous-Hermes-2-Yi-34B
    positive_prompts: ["chat", "math", "reason", "mathematics", "solve", "count", "python", "javascript", "programming", "algorithm", "tell me", "assistant"]

  - source_model: SUS-Chat-34B
    positive_prompts: ["math", "reason", "mathematics", "solve", "count", "assistant"]

  - source_model: platypus-yi-34b
    positive_prompts: [""]
    negative_prompts: ["math", "reason", "mathematics", "solve", "count"]

Quantized versions

Quantized versions of this model are available thanks to TheBloke.

GPTQ
GGUF
AWQ
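
For example, a minimal sketch of loading the GPTQ variant with transformers; the repo id below assumes TheBloke's usual naming convention, and optimum plus auto-gptq must be installed:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id, assuming TheBloke's usual "<model>-GPTQ" naming.
quant_id = "TheBloke/Helion-4x34B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(quant_id)
# transformers loads GPTQ checkpoints directly when optimum and auto-gptq are installed.
model = AutoModelForCausalLM.from_pretrained(quant_id, device_map="auto")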

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

Metric                            | Value
Avg.                              | 75.48
AI2 Reasoning Challenge (25-Shot) | 69.71
HellaSwag (10-Shot)               | 85.28
MMLU (5-Shot)                     | 77.33
TruthfulQA (0-shot)               | 63.91
Winogrande (5-shot)               | 84.37
GSM8k (5-shot)                    | 72.25
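
For reference, the Avg. value is the arithmetic mean of the six benchmark scores: (69.71 + 85.28 + 77.33 + 63.91 + 84.37 + 72.25) / 6 ≈ 75.48.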

If you would like to support me:

☕ Buy Me a Coffee
