---
base_model: mistralai/Mistral-7B-v0.1
tags:
- mistral-7b
- instruct
- finetune
- gpt4
- synthetic data
- distillation
model-index:
- name: Mistral-Trismegistus-7B
results: []
license: apache-2.0
language:
- en
---
**Mistral Trismegistus 7B**
<div style="display: flex; justify-content: center;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/3VJvztFDB1XOWfShuHnb6.png" alt="Mistral Trismegistus" width="50%" style="display: block; margin: 0 auto;">
</div>
## Model Description:
Transcendence is All You Need! Mistral Trismegistus is a model made for people interested in the esoteric, occult, and spiritual.
Here are some outputs:
Answer questions about occult artifacts:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/WeLd-zbZVwRe6HjxyRYMh.png)
Play the role of a hypnotist:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/fdYQacOpD3GrLCdgCrMLu.png)
## Special Features:
- **The first powerful occult expert model**: ~10,000 high-quality, deep, rich instructions on the occult, esoteric, and spiritual.
- **Fast**: Trained on Mistral, a state-of-the-art 7B-parameter model, it runs fast even on a CPU.
- **Not a positivity-nazi**: This model was trained on all forms of esoteric tasks and knowledge, and is not burdened by the flowery tone of many other models, which chose positivity over creativity.
## Acknowledgements:
Special thanks to @a16z.
## Dataset:
This model was trained on a 100% synthetic, GPT-4-generated dataset of ~10,000 examples covering a wide and diverse set of tasks and knowledge about the esoteric, occult, and spiritual.
The dataset will be released soon!
## Usage:
Prompt Format:
```
USER: <prompt>
ASSISTANT:
```
OR
```
<system message>
USER: <prompt>
ASSISTANT:
```
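The two prompt templates above can be assembled with a small helper. This is a minimal sketch, not part of the model's tooling; the function name `format_prompt` is our own, and the optional system message is simply prepended as the second template shows.

```python
def format_prompt(prompt, system=""):
    """Build a Mistral-Trismegistus prompt string in the card's format.

    If a system message is given, it is prepended on its own line,
    matching the second template; otherwise the bare USER/ASSISTANT
    form is used.
    """
    header = f"{system}\n" if system else ""
    return f"{header}USER: {prompt}\nASSISTANT:"


# Example usage (the question text is illustrative only):
print(format_prompt("Explain the Emerald Tablet."))
```

The resulting string can then be passed as-is to whatever generation interface you use (e.g. a `transformers` text-generation pipeline loading this checkpoint).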
## Benchmarks:
No benchmark can capture the nature and essence of quality in spiritual and esoteric knowledge and tasks. You will have to test it yourself!
Training run on wandb here: https://wandb.ai/teknium1/occult-expert-mistral-7b/runs/coccult-expert-mistral-6/overview
## Licensing:
Apache 2.0
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_teknium__Mistral-Trismegistus-7B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 46.17 |
| ARC (25-shot) | 54.1 |
| HellaSwag (10-shot) | 77.91 |
| MMLU (5-shot) | 54.49 |
| TruthfulQA (0-shot) | 49.36 |
| Winogrande (5-shot) | 70.17 |
| GSM8K (5-shot) | 9.93 |
| DROP (3-shot) | 7.24 |