
Oracle 14B

Oracle 14B is a Mixture-of-Experts (MoE) model created by merging NousResearch/Hermes-3-Llama-3.1-8B and nvidia/OpenMath2-Llama3.1-8B on top of the unsloth/Meta-Llama-3.1-8B base model.

Model Details

The merge config used for this model is:

base_model: unsloth/Meta-Llama-3.1-8B
experts:
  - source_model: NousResearch/Hermes-3-Llama-3.1-8B
    positive_prompts:
    - "explain"
    - "tell me"
    - "writing"
    - "creativity"
    - "assistant"
  - source_model: nvidia/OpenMath2-Llama3.1-8B
    positive_prompts:
    - "reason"
    - "math"
    - "formula"
    - "solve"
    - "count"
tokenizer_source: union
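
The config above routes prompts between the two experts based on the positive prompts. A minimal sketch of loading the merged checkpoint with the Hugging Face transformers library is shown below; the model id qingy2019/Oracle-14B is taken from this card, while the dtype, device placement, and the example prompt are illustrative assumptions.

```python
# Minimal sketch: load the merged MoE checkpoint with transformers.
# Assumes the model id from this card and enough memory for ~13.7B params in BF16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "qingy2019/Oracle-14B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 tensors listed on this card
    device_map="auto",           # spread layers across available devices
)

# Example prompt aimed at the math expert (illustrative only).
prompt = "Solve: what is 17 * 23?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```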
Model size: 13.7B parameters · Tensor type: BF16 (Safetensors)