Gen Settings & Prompting

https://rentry.org/tsukasamodel

GGUF

The GGUF files in this repository are little endian.
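A minimal sketch of loading one of the quantized files with llama-cpp-python is shown below. The filename and sampling values are placeholders, not part of this repo; follow the rentry link above for the actual generation settings.

```python
# Minimal sketch: running a quantized GGUF of this model with llama-cpp-python.
# The filename is an assumption -- substitute whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="tsukasa-8x7b.Q4_K_M.gguf",  # assumed filename
    n_ctx=4096,        # context window; adjust to available memory
    n_gpu_layers=-1,   # offload all layers if built with GPU support
)

out = llm(
    "<|system|>Enter RP mode.<|user|>Hello!<|model|>",  # metharme-style prompt (see Training)
    max_tokens=256,
    temperature=0.7,   # placeholder; use the settings from the rentry page
    stop=["<|user|>"],
)
print(out["choices"][0]["text"])
```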

Training

Axolotl was used for training on a 4x NVIDIA A100 GPU cluster.

The A100 GPU cluster was graciously provided by lloorree.

Rank-16 QLoRA (all modules) tune.

The base model mistralai/Mixtral-8x7B-v0.1 was first tuned on koishi commit 6e675d1 for one epoch,

then tuned on PIPPA commit 6412b0c for one epoch (metharme completion format),

then tuned on LimaRP version 2023-10-19 for two epochs in metharme completion format, with limit_data_length set to 32768 in dataprepare-templates.py. A sketch of the prompt format follows below.
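Since the later stages use the metharme completion format, here is a small sketch of assembling a prompt in that format. The role tokens (<|system|>, <|user|>, <|model|>) follow the usual metharme convention; the example text is purely illustrative.

```python
# Sketch of building a metharme-format completion prompt.
def build_metharme_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """turns is a list of (role, text) pairs, where role is 'user' or 'model'."""
    prompt = f"<|system|>{system}"
    for role, text in turns:
        prompt += f"<|{role}|>{text}"
    # End with the model token so the completion continues as the character.
    return prompt + "<|model|>"

prompt = build_metharme_prompt(
    "Enter RP mode. You are playing a helpful assistant.",
    [("user", "Hi, who are you?")],
)
```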

Model size: 46.7B params
Architecture: llama
Available GGUF quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit

