---
base_model:
- nothingiisreal/L3.1-8B-Celeste-V1.5
- Sao10K/Llama-3.1-8B-Stheno-v3.4
- Sao10K/L3.1-8B-Niitama-v1.1
- arcee-ai/Llama-3.1-SuperNova-Lite
- akjindal53244/Llama-3.1-Storm-8B
- arcee-ai/Llama-Spark
- grimjim/Llama-3-Instruct-abliteration-LoRA-8B
- crestf411/sunfall-peft
tags:
- llama
- merge
- llama3
- mixtral
library_name: transformers
---
# Llama-3.1-Celestial-Stone-2x8B (BF16)
*Mixture of Experts (14B total parameters).*

Both experts are active for every generated token.

------------------------------------------------------------------------------
*The first expert* is an Instruct 405B distillation/RP vector merge <b>(SuperNova-Lite, Niitama 1.1, Storm)</b>.

*The second expert* is an ERP/Reddit-data merge <b>(Celeste 1.5, Stheno 3.4, Storm)</b>.

-------------------------------------------------------------------------------
*The base model* is <b>Sao10K/L3.1-8B-Stheno-v3.4</b> with the <b>Sunfall LoRA 0.6.1</b> applied, which improves its handling of SillyTavern prompts and storywriting.

-------------------------------------------------------------------------------
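# Example Usage:

A minimal loading sketch with the `transformers` library. The repository id below is a placeholder, not the confirmed Hub path; loading in bfloat16 matches the BF16 weights this card describes.

```python
# Sketch only: "your-namespace/..." is a placeholder repository id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-namespace/Llama-3.1-Celestial-Stone-2x8B"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # weights are published in BF16
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a creative storywriting assistant."},
    {"role": "user", "content": "Write the opening line of a space western."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```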
# Prompt Template:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
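
For reference, a sketch of filling this template by hand; `tokenizer.apply_chat_template` produces the same layout for Llama-3.1 tokenizers (the double newline after each `<|end_header_id|>` is part of the standard Llama-3.1 template):

```python
# Illustrative helper that fills the Llama-3.1 prompt template manually.
def build_prompt(system_prompt: str, user_input: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_prompt("You are a helpful assistant.", "Hello!"))
```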