---
base_model:
  - mistralai/Mixtral-8x7B-v0.1
  - Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
  - mistralai/Mixtral-8x7B-v0.1
  - LoneStriker/Air-Striker-Mixtral-8x7B-ZLoss-LoRA
  - rombodawg/Open_Gpt4_8x7B_v0.2
  - mistralai/Mixtral-8x7B-Instruct-v0.1
  - mistralai/Mixtral-8x7B-v0.1
  - Sao10K/Typhon-Mixtral-v1
tags:
  - mergekit
  - merge
---

# Mega-Destroyer-8x7B

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
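The result is a standard Mixtral-architecture checkpoint, so it loads like any other Mixtral model. A minimal loading sketch with 🤗 Transformers, assuming the merged weights are published under this repository's id:

```python
# Minimal loading sketch. The repo id below is an assumption based on this
# card's location; substitute whatever id the merged weights are hosted under.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "MrDragonFox/Mega-Destroyer-8x7B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype below
    device_map="auto",
)

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```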

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) as the base.
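In outline, DARE TIES sparsifies each model's delta from the base (randomly dropping a `1 - density` fraction of entries and rescaling the survivors by `1/density`), then resolves sign conflicts TIES-style before adding the merged delta back to the base. A minimal sketch of that update for a single weight tensor, in plain PyTorch with toy tensors and a simplified normalization; this is an illustration of the idea, not mergekit's actual implementation:

```python
# Illustrative DARE + TIES update for one tensor (not mergekit's code).
import torch

def dare_ties(base, finetuned, densities, weights, normalize=True):
    deltas = []
    for ft, density, w in zip(finetuned, densities, weights):
        delta = ft - base                       # task vector vs. the base
        # DARE: drop a (1 - density) fraction at random, rescale survivors
        mask = torch.rand_like(delta) < density
        deltas.append(w * mask * delta / density)
    stacked = torch.stack(deltas)
    # TIES: elect a per-parameter sign by weighted majority, then keep
    # only the contributions that agree with the elected sign
    elected = torch.sign(stacked.sum(dim=0))
    agree = torch.sign(stacked) == elected
    merged = (stacked * agree).sum(dim=0)
    if normalize:
        # simplified `normalize: true`: divide by total agreeing weight
        w_col = torch.tensor(weights).view(-1, *([1] * base.dim()))
        merged = merged / (w_col * agree).sum(dim=0).clamp(min=1e-8)
    return base + merged

# toy usage
base = torch.zeros(4)
fts = [torch.randn(4) for _ in range(3)]
out = dare_ties(base, fts, densities=[0.6, 0.5, 0.5], weights=[1.0, 0.8, 0.6])
```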

### Models Merged

The following models were included in the merge:

* [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
* [rombodawg/Open_Gpt4_8x7B_v0.2](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2)
* [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) + [LoneStriker/Air-Striker-Mixtral-8x7B-ZLoss-LoRA](https://huggingface.co/LoneStriker/Air-Striker-Mixtral-8x7B-ZLoss-LoRA)
* [Sao10K/Typhon-Mixtral-v1](https://huggingface.co/Sao10K/Typhon-Mixtral-v1)
* [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) + [Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: mistralai/Mixtral-8x7B-Instruct-v0.1
    parameters:
      density: 0.6
      weight: 1.0
  - model: rombodawg/Open_Gpt4_8x7B_v0.2
    parameters:
      density: 0.5
      weight: 0.8
  - model: mistralai/Mixtral-8x7B-v0.1+LoneStriker/Air-Striker-Mixtral-8x7B-ZLoss-LoRA
    parameters:
      density: 0.5
      weight: 0.6
  - model: Sao10K/Typhon-Mixtral-v1
    parameters:
      density: 0.5
      weight: 0.7
  - model: mistralai/Mixtral-8x7B-v0.1+Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
    parameters:
      density: 0.5
      weight: 0.4
merge_method: dare_ties
base_model: mistralai/Mixtral-8x7B-v0.1
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
name: Mega-Destroyer-8x7B
```
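
To reproduce the merge, the config above can be fed to mergekit. A sketch using mergekit's Python entry point, assuming the YAML is saved as `config.yml` and using a placeholder output directory (exact option names may vary across mergekit versions):

```python
# Sketch: running the merge with mergekit's Python API.
# "config.yml" holds the YAML above; the output path is a placeholder.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./Mega-Destroyer-8x7B",                 # output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),      # use a GPU if one is present
        copy_tokenizer=True,                 # carry the tokenizer over
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

Equivalently, the `mergekit-yaml` CLI does the same thing in one step: `mergekit-yaml config.yml ./Mega-Destroyer-8x7B --cuda`.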