---
language:
- en
license: mit
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: MistralHermes-CodePro-7B-v1
  results: []
---
# MistralHermes-CodePro-7B-v1

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b566ab04fa6584c03b5247/3XN-7iD-eUxIYJwZDa-qq.png)

*In the digital pantheon of artificial intelligence, "MistralHermes-CodePro-7B-v1" stands as the architect of algorithms, a sovereign of syntax who weaves the fabric of code with unparalleled skill. This model, christened in recognition of its dual lineage—Mistral's foundational breadth and Hermes' agile conveyance—commands the binary ballet with the precision of a seasoned maestro, orchestrating the dance of data with a grace that blurs the line between the silicon and the cerebral.*
## Model description

MistralHermes-CodePro-7B-v1 is a fine-tuned iteration of the renowned [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) model.

This version was fine-tuned on a dataset of over 200,000 code samples spanning a wide range of programming languages.

It is specifically tailored to serve as a coding assistant, so it performs best on coding-related tasks rather than general-purpose use.
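As a quick sanity check, the sketch below shows one way to run the model as a coding assistant with the Hugging Face `transformers` library. The repository id, system prompt, and generation settings are illustrative assumptions rather than official recommendations.

```python
# Minimal sketch (assumptions: the weights live at "beowolx/MistralHermes-CodePro-7B-v1"
# and the tokenizer ships the ChatML chat template inherited from OpenHermes-2.5;
# generation settings are illustrative, not tuned recommendations).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beowolx/MistralHermes-CodePro-7B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
]

# Build the ChatML prompt and generate a completion.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.2,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens (skip the prompt).
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The `apply_chat_template` call relies on the ChatML template described in the Prompt Format section below.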
# Prompt Format

MistralHermes-CodePro uses the same prompt format as [OpenHermes 2.5](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B#prompt-format), i.e. ChatML.
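For illustration, the sketch below assembles a single-turn ChatML prompt by hand; the system prompt and user message are placeholders, not values shipped with the model.

```python
# Sketch of the ChatML prompt layout (system prompt and user message are placeholders).
system = "You are a helpful coding assistant."
user = "Write a SQL query that returns the ten most recent orders."

prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{user}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)
# The model's reply is everything generated up to the next <|im_end|> token.
```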
You should use [LM Studio](https://lmstudio.ai/) for chatting with the model.

# Quantized Models
GGUF: [beowolx/MistralHermes-CodePro-7B-v1-GGUF](https://huggingface.co/beowolx/MistralHermes-CodePro-7B-v1-GGUF)
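As one possible way to run the GGUF build locally outside LM Studio, here is a sketch using `llama-cpp-python`; the quant file name and all parameters are assumptions, not values published with the model.

```python
# Minimal sketch using llama-cpp-python with a GGUF quant downloaded from the
# repository above. The file name (quantization variant) is hypothetical; point
# model_path at whichever quant you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistralhermes-codepro-7b-v1.Q4_K_M.gguf",  # hypothetical file name
    n_ctx=4096,           # context window
    n_gpu_layers=-1,      # offload all layers to GPU if available
    chat_format="chatml",  # the model uses the ChatML prompt format
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Rust function that reverses a linked list."},
    ],
    max_tokens=512,
    temperature=0.2,
)
print(response["choices"][0]["message"]["content"])
```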
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_beowolx__MistralHermes-CodePro-7B-v1).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 66.17 |
| AI2 Reasoning Challenge (25-Shot) | 62.46 |
| HellaSwag (10-Shot)               | 82.68 |
| MMLU (5-Shot)                     | 63.44 |
| TruthfulQA (0-shot)               | 49.67 |
| Winogrande (5-shot)               | 77.90 |
| GSM8k (5-shot)                    | 60.88 |