We just released a new MoE model (meraGPT/mera-mix-4x7B) that is half the size of Mixtral-8x7B while still being competitive with it across different benchmarks. mera-mix-4x7B achieves 76.37 on the Open LLM Leaderboard eval.
You can check out mera-mix-4x7B on HF here - meraGPT/mera-mix-4x7B
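If you want to try it locally, here is a minimal sketch using the transformers library. It assumes the checkpoint loads through the standard AutoModelForCausalLM interface (as other Mixtral-style MoE checkpoints do); the prompt, dtype, and device settings are illustrative and should be adjusted to your hardware.

```python
# Minimal sketch for trying out mera-mix-4x7B with transformers.
# Assumes the model loads via the standard AutoModelForCausalLM interface;
# dtype/device settings below are just one reasonable configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meraGPT/mera-mix-4x7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit the MoE weights in memory
    device_map="auto",           # spread layers across available GPUs/CPU
)

prompt = "Explain what a mixture-of-experts model is in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```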