---
tags:
  - quantized
  - 4-bit
  - AWQ
  - autotrain_compatible
  - endpoints_compatible
  - text-generation-inference
license: apache-2.0
language:
  - en
base_model: Vezora/Mistral-22B-v0.2
model_creator: Vezora
model_name: Mistral-22B-v0.2
model_type: mistral
pipeline_tag: text-generation
inference: false
---

# Vezora/Mistral-22B-v0.2 AWQ

## Model Summary

- Just two days after the release of Mistral-22b-v0.1, we are excited to introduce our handcrafted experimental model, Mistral-22b-v0.2. This model distills knowledge from all experts equally into a single, dense 22B model. It is not a single trained expert; rather, it is a compressed MoE model converted into a dense 22B model. This is the first working MoE-to-dense model conversion.
- v0.2 was trained on 8x more data than v0.1!

## How to use

**GUANACO PROMPT FORMAT:** you must use the Guanaco prompt format shown below. Not using this prompt format will lead to suboptimal results.

- This model requires a specific chat template; the training format was Guanaco, and it looks like this:
- `"### System: You are a helpful assistant. ### Human###: Give me the best chili recipe you can ###Assistant: Here is the best chili recipe..."`
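The template above can be assembled with a small helper before passing the string to the tokenizer. This is a minimal sketch; the function name `build_guanaco_prompt` is illustrative and not part of the model card, and the marker spellings (`### Human###:`, `###Assistant:`) are copied verbatim from the format shown above.

```python
def build_guanaco_prompt(system: str, human: str) -> str:
    """Format a single-turn prompt in the Guanaco style shown above.

    Note: the marker spellings ("### Human###:", "###Assistant:") are
    reproduced exactly as they appear in this model card.
    """
    return f"### System: {system} ### Human###: {human} ###Assistant:"


prompt = build_guanaco_prompt(
    "You are a helpful assistant.",
    "Give me the best chili recipe you can",
)
print(prompt)
```

The resulting string ends with `###Assistant:`, leaving the model to generate the assistant's reply. It can then be fed to any AWQ-capable runtime (for example, loading the repository with `transformers`' `AutoModelForCausalLM.from_pretrained`, which supports AWQ checkpoints, or with the AutoAWQ library).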