---
license: apache-2.0
---
# Mixtral-8x7B-v0.1: Model 2
## Model Description
This model is the second standalone model extracted from mistralai/Mixtral-8x7B-v0.1, using the Mixtral Model Expert Extractor tool I made. It is constructed by selecting the second expert from each Mixture of Experts (MoE) layer. This extraction is experimental, and the resulting model is expected to perform worse than Mistral-7B.
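For reference, below is a minimal sketch of how such an extraction could be implemented, assuming the Hugging Face `transformers` weight layout of `MixtralForCausalLM` and `MistralForCausalLM` (where each expert's `w1`, `w3`, and `w2` play the roles of Mistral's `gate_proj`, `up_proj`, and `down_proj`). The actual Mixtral Model Expert Extractor tool may work differently; the `EXPERT` index and output path here are illustrative.

```python
# Minimal single-expert extraction sketch (not the actual extractor tool).
import torch
from transformers import AutoModelForCausalLM, MistralConfig, MistralForCausalLM

EXPERT = 1  # zero-based index: expert 1 is the second expert in each MoE layer

# Loading the full Mixtral checkpoint requires roughly 90 GB of memory.
mixtral = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1", torch_dtype=torch.bfloat16
)
src = mixtral.state_dict()

# Build a dense Mistral-style config with Mixtral's dimensions.
mx = mixtral.config
cfg = MistralConfig(
    vocab_size=mx.vocab_size,
    hidden_size=mx.hidden_size,
    intermediate_size=mx.intermediate_size,
    num_hidden_layers=mx.num_hidden_layers,
    num_attention_heads=mx.num_attention_heads,
    num_key_value_heads=mx.num_key_value_heads,
    max_position_embeddings=mx.max_position_embeddings,
    rms_norm_eps=mx.rms_norm_eps,
    rope_theta=mx.rope_theta,
    sliding_window=mx.sliding_window,
)
dense = MistralForCausalLM(cfg).to(torch.bfloat16)

# Copy shared weights verbatim; map the chosen expert's feed-forward weights
# onto Mistral's MLP (w1 -> gate_proj, w3 -> up_proj, w2 -> down_proj).
dst = {}
for name in dense.state_dict():
    for mlp, w in (("gate_proj", "w1"), ("up_proj", "w3"), ("down_proj", "w2")):
        if f".mlp.{mlp}." in name:
            dst[name] = src[name.replace(
                f"mlp.{mlp}", f"block_sparse_moe.experts.{EXPERT}.{w}"
            )]
            break
    else:
        dst[name] = src[name]  # embeddings, attention, and norms are shared

dense.load_state_dict(dst)
dense.save_pretrained("Mistral-2-from-Mixtral-8x7B-v0.1")
```

Because the MoE router is discarded, every token flows through the single retained expert, which is part of why quality below Mistral-7B is expected.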
## Model Architecture
The architecture of this model includes:
- Multi-head attention layers derived from the base Mixtral model.
- The second expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.
- Additional layers and components as required to ensure the model's functionality outside the MoE framework.
## Example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "DrNicefellow/Mistral-2-from-Mixtral-8x7B-v0.1"

# Load the tokenizer and the extracted standalone model.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Encode a prompt and generate a continuation.
text = "Today is a pleasant"
input_ids = tokenizer.encode(text, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=50)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```
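Note that `generate()` decodes greedily by default; pass `do_sample=True` (optionally with `temperature` or `top_p`) for more varied continuations, and adjust `max_new_tokens` to control the output length.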
## License

This model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details.
## Discord Server
Join our Discord server here.