---
license: apache-2.0
datasets:
- abacusai/MetaMathFewshot
- shahules786/orca-chat
- anon8231489123/ShareGPT_Vicuna_unfiltered
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/pf4d6FA7DriRtVq5HCkxd.png)

Fine-tuned from base Mistral on the [MetaMathFewshot](https://huggingface.co/datasets/abacusai/MetaMathFewshot) dataset, together with the [ShareGPT Vicuna](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) and [OrcaChat](https://huggingface.co/datasets/shahules786/orca-chat) datasets.
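A minimal loading-and-generation sketch with the `transformers` library is shown below. The model id is a placeholder for this repository's Hub path, and the plain-string prompt is an assumption, since this card does not specify a chat template:

```python
# Minimal inference sketch; replace the placeholder with this model's Hub id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-repo-id>"  # placeholder: this model's Hugging Face repo path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # H100-friendly dtype; float16 also works
    device_map="auto",
)

prompt = "Natalia sold clips to 48 of her friends in April, and then half as many in May. How many clips did she sell altogether?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```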

Instruction tuned with the following parameters (see the configuration sketch after this list):

- LoRA: rank 8, alpha 16, dropout 0.05, all modules (QKV and MLP)
- 3 epochs
- Micro batch size 32 over 4x H100, gradient accumulation steps = 1
- AdamW with learning rate 5e-5
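
These hyperparameters map roughly onto the following `peft`/`transformers` configuration. This is a sketch, not the training script used for this model: the `target_modules` names assume Mistral's standard layer naming, and including the attention output projection (`o_proj`) is an assumption read into "all modules":

```python
# Hedged sketch of the described LoRA fine-tuning setup.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=8,                # LoRA rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention (o_proj assumed)
        "gate_proj", "up_proj", "down_proj",     # MLP
    ],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="output",
    num_train_epochs=3,
    per_device_train_batch_size=32,  # micro batch size 32 on each of 4 H100s
    gradient_accumulation_steps=1,
    learning_rate=5e-5,
    optim="adamw_torch",
)
```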

# Evaluation Results

### HuggingFace Leaderboard

| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- |
| 67.33    | 59.64 | 81.82 | 61.69 | 53.23 | 78.45 | 69.14 |

For comparison, the original `metamath/MetaMath-Mistral-7B` scored 68.84 on GSM8K, with an average score of 65.78.

### MT-Bench

- First Turn: 6.9
- Second Turn: 6.51875
- **Average: 6.709375**