---
license: apache-2.0
model-index:
- name: Chupacabra-7B-v2.01
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 68.86
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-7B-v2.01
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 86.12
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-7B-v2.01
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.9
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-7B-v2.01
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 63.5
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-7B-v2.01
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 80.51
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-7B-v2.01
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 59.67
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-7B-v2.01
      name: Open LLM Leaderboard
tags:
- quantized
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- chatml
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---
# perlthoughts/Chupacabra-7B-v2.01 AWQ

- Model creator: [perlthoughts](https://huggingface.co/perlthoughts)
- Original model: [Chupacabra-7B-v2.01](https://huggingface.co/perlthoughts/Chupacabra-7B-v2.01)

<p><img src="https://huggingface.co/perlthoughts/Chupacabra-7B/resolve/main/chupacabra7b%202.png" width=330></p>
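
The snippet below is not part of the original card; it is a minimal loading sketch that assumes the AutoAWQ package (`pip install autoawq`) and uses a placeholder repository id that should be replaced with this repo's actual id.

```python
# Minimal sketch (assumption): load the 4-bit AWQ weights with AutoAWQ.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_id = "Chupacabra-7B-v2.01-AWQ"  # placeholder: replace with this repository's id

# fuse_layers=True enables AutoAWQ's fused kernels for faster inference
model = AutoAWQForCausalLM.from_quantized(model_id, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```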

## Model Summary

Built with the DARE-TIES merge method.

A list of all source models and the merge path is coming soon.

## Purpose

Merging the "thickest" model weights from Mistral models, using training methods such as direct preference optimization (DPO) and reinforcement learning.

I have spent countless hours studying the latest research papers, attending conferences, and networking with experts in the field. I experimented with different algorithms and tactics, fine-tuned hyperparameters and optimizers, and optimized code until I achieved the best possible results.

Thank you, OpenChat 3.5, for showing me the way.

Here is my contribution.

## Prompt Template

Replace `{system}` with your system prompt, and `{prompt}` with your prompt instruction.

```
### System:
{system}

### User:
{prompt}

### Assistant:
```
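
As a usage illustration (not from the original card), here is a sketch of filling in that template and generating with the quantized weights through Transformers; it assumes `transformers`, `autoawq`, and `accelerate` are installed, the repository id is a placeholder, and the sampling settings are arbitrary.

```python
# Sketch (assumptions noted above): build the prompt per the template and generate.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Chupacabra-7B-v2.01-AWQ"  # placeholder: replace with this repository's id

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Transformers loads AWQ checkpoints via the autoawq backend; accelerate handles device placement.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system = "You are a helpful assistant."
prompt = "Explain what AWQ quantization does in one paragraph."

# Fill the template exactly as shown above.
text = f"### System:\n{system}\n\n### User:\n{prompt}\n\n### Assistant:"

inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens (the assistant's reply).
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```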