
Open_Gpt4


VERSION 0.2 OUT NOW:

Fp16:

q8_0.gguf:



This model is a TIES merge of notux-8x7b-v1 and UNAversal-8x7B-v1beta, with MixtralOrochi8x7B as the base model.

I was very impressed with MixtralOrochi8x7B's performance and multifaceted use cases, as it is already a merge of many useful Mixtral models such as Mixtral instruct,

Noromaid-v0.1-mixtral, openbuddy-mixtral, and possibly other models that were not named. My goal was to expand the model's capabilities and make it an even more useful model, maybe even competitive with closed-source models like GPT-4, but more testing is required for that. I hope the community can help me determine whether it deserves its name. 😊

Base model: MixtralOrochi8x7B

Merged models: notux-8x7b-v1, UNAversal-8x7B-v1beta

Instruct template: Alpaca
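The card names Alpaca as the instruct template but does not reproduce it. Below is a small Python helper showing the standard Alpaca prompt format; the function name `alpaca_prompt` is my own illustration, not part of the model repo.

```python
def alpaca_prompt(instruction: str, context: str = "") -> str:
    """Format a prompt in the standard Alpaca instruct style."""
    if context:
        # Variant used when the task comes with additional input/context.
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{context}\n\n"
            "### Response:\n"
        )
    # Variant used for instruction-only prompts.
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```

The model's reply is expected to follow the final `### Response:` marker.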

Merger config:

```yaml
models:
  - model: notux-8x7b-v1
    parameters:
      density: .5
      weight: 1
  - model: UNAversal-8x7B-v1beta
    parameters:
      density: .5
      weight: 1

merge_method: ties
base_model: MixtralOrochi8x7B
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
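For readers unfamiliar with the `ties` merge method named in the config, here is a minimal NumPy sketch of the TIES procedure on flat parameter arrays: trim each task vector to the top `density` fraction by magnitude, elect a per-parameter sign, then merge only the agreeing values, normalizing by the contributing weight (as with `normalize: true`). This is an illustrative sketch, not mergekit's actual implementation; the function name and shapes are my own assumptions.

```python
import numpy as np

def ties_merge(base, finetuned, density=0.5, weights=None):
    """Illustrative TIES merge over flat parameter arrays."""
    if weights is None:
        weights = [1.0] * len(finetuned)
    # Task vectors: per-model deltas from the base model.
    deltas = [m - base for m in finetuned]
    # Trim: zero out all but the top `density` fraction by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(np.ceil(density * d.size)))
        thresh = np.sort(np.abs(d).ravel())[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))
    # Elect sign: sign of the weighted sum, per parameter.
    elected = np.sign(sum(w * d for w, d in zip(weights, trimmed)))
    # Disjoint merge: keep only values agreeing with the elected sign,
    # then normalize by the total weight that actually contributed.
    num = np.zeros_like(base, dtype=float)
    den = np.zeros_like(base, dtype=float)
    for w, d in zip(weights, trimmed):
        mask = (np.sign(d) == elected) & (d != 0)
        num += np.where(mask, w * d, 0.0)
        den += np.where(mask, w, 0.0)
    return base + np.where(den > 0, num / np.where(den > 0, den, 1.0), 0.0)
```

With `density: .5` and equal weights of 1 (as in the config above), each model contributes its largest half of parameter changes, and conflicting-sign updates are dropped rather than averaged away.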
Model size: 46.7B params · Tensor type: FP16 (Safetensors)

Model: rombodawg/Open_Gpt4_8x7B_v0.1
