---
license: mit
---

| Task          | Version | Metric   |  Value |  Stderr |
|---------------|--------:|----------|-------:|--------:|
| arc_challenge |       0 | acc      | 0.6527 | ±0.0139 |
|               |         | acc_norm | 0.6869 | ±0.0136 |

This is a passthrough merge of OpenHermes-2.5-neural-chat-7b-v3-1 and Bruins-V2. To be updated.

Prompt template: ChatML
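
For reference, a ChatML prompt for this model is assembled as in the sketch below; the system and user messages are placeholders, not part of this card.

```python
# Minimal sketch of the ChatML prompt layout (placeholder messages).
system = "You are a helpful assistant."
user = "Summarize what a passthrough merge is."

prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{user}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```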

My settings:

- Temperature: 0.7-0.8
- Min_p: 0.12
- Top_K: 0
- Repetition Penalty: 1.16
- Mirostat Tau: 2.5-3
- Mirostat Eta: 0.12
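
As a rough illustration of how these values plug into a generation call, here is a sketch using llama-cpp-python; the GGUF file name, context size, and Mirostat mode 2 are assumptions, and exact parameter names and availability depend on your front end and version.

```python
# Hedged sketch: the settings above applied via llama-cpp-python.
# The model path, n_ctx, and mirostat_mode=2 are assumptions, not from this card.
from llama_cpp import Llama

llm = Llama(model_path="model.Q5_K_M.gguf", n_ctx=4096)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a short haiku about merged models.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(
    prompt,
    temperature=0.75,     # card recommends 0.7-0.8
    min_p=0.12,
    top_k=0,              # 0 disables top-k filtering
    repeat_penalty=1.16,
    mirostat_mode=2,      # assumption: Mirostat 2.0; the card only gives tau/eta
    mirostat_tau=3.0,     # card recommends 2.5-3
    mirostat_eta=0.12,
    max_tokens=256,
)
print(out["choices"][0]["text"])
```

Note that when Mirostat is enabled, llama.cpp's sampler largely bypasses the Top_K stage, which lines up with Top_K being set to 0 here.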

Personal Thoughts:

- The model sometimes emits stray tags; you can add those to "Custom stopping strings" in Oobabooga (a code fallback is sketched after this list).
- Output with Mirostat consistently felt smarter than a fixed Top_K setting.
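
Outside of Oobabooga, a rough equivalent is to truncate the completion yourself at the first leaked tag; this is only a fallback sketch, not a feature of the model or of Oobabooga.

```python
import re

# Hedged fallback: cut the completion at the first leaked ChatML tag,
# mirroring what "Custom stopping strings" does in the Oobabooga UI.
STRAY_TAG = re.compile(r"<\|im_(?:start|end)\|>.*", re.DOTALL)

def truncate_at_stray_tag(text: str) -> str:
    """Drop everything from the first stray ChatML tag onward."""
    return STRAY_TAG.sub("", text).rstrip()

print(truncate_at_stray_tag("A fine answer.<|im_end|>\n<|im_start|>user\nmore..."))
# -> "A fine answer."
```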

Note: The model hallucinates hard in chat mode for me in some instances, e.g. writing ad-blocker messages. Kind of funny.

I am not sure which of the datasets involved was poisoned.