---
datasets:
- c-s-ale/alpaca-gpt4-data
- Open-Orca/OpenOrca
- Intel/orca_dpo_pairs
- allenai/ultrafeedback_binarized_cleaned
language:
- en
license: cc-by-nc-4.0
inference: false
base_model:
- CallComply/SOLAR-10.7B-Instruct-v1.0-128k
---
## SOLAR-10.7B-Instruct-v1.0-128k-GGUF
Model: [SOLAR-10.7B-Instruct-v1.0-128k](https://huggingface.co/CallComply/SOLAR-10.7B-Instruct-v1.0-128k)
Made by: [CallComply](https://huggingface.co/CallComply)

Based on original model: [SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0)
Created by: [upstage](https://huggingface.co/upstage)

## All quants made with iMatrix:
[IQ2_XS](https://huggingface.co/cgus/SOLAR-10.7B-Instruct-v1.0-128k-iMat-GGUF/resolve/main/SOLAR-10.7B-Instruct-v1.0-128k-IQ2_XS.gguf?download=true)
[IQ3_M](https://huggingface.co/cgus/SOLAR-10.7B-Instruct-v1.0-128k-iMat-GGUF/resolve/main/SOLAR-10.7B-Instruct-v1.0-128k-IQ3_M_imat.gguf?download=true)
[Q4_K_M](https://huggingface.co/cgus/SOLAR-10.7B-Instruct-v1.0-128k-iMat-GGUF/resolve/main/SOLAR-10.7B-Instruct-v1.0-128k-Q4_K_M_imat.gguf?download=true)
[Q5_K_M](https://huggingface.co/cgus/SOLAR-10.7B-Instruct-v1.0-128k-iMat-GGUF/resolve/main/SOLAR-10.7B-Instruct-v1.0-128k-Q5_K_M_imat.gguf?download=true)
[Q6_K](https://huggingface.co/cgus/SOLAR-10.7B-Instruct-v1.0-128k-iMat-GGUF/resolve/main/SOLAR-10.7B-Instruct-v1.0-128k-Q6_K_imat.gguf?download=true)
[Q8_0](https://huggingface.co/cgus/SOLAR-10.7B-Instruct-v1.0-128k-iMat-GGUF/resolve/main/SOLAR-10.7B-Instruct-v1.0-128k-Q8_0_imat.gguf?download=true)
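
Individual quant files can also be fetched from the command line with `huggingface-cli` (from the `huggingface_hub` package); the filename below is the Q4_K_M quant from the list above:

```sh
# Downloads a single GGUF file from this repo into the current directory.
huggingface-cli download cgus/SOLAR-10.7B-Instruct-v1.0-128k-iMat-GGUF \
  SOLAR-10.7B-Instruct-v1.0-128k-Q4_K_M_imat.gguf --local-dir .
```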

If someone wants to make their own quants, here's my iMatrix file:
[imatrix.dat](https://huggingface.co/cgus/SOLAR-10.7B-Instruct-v1.0-128k-iMat-GGUF/resolve/main/imatrix.dat)
As well as the original FP16 GGUF (in another repo):
[FP16](https://huggingface.co/cgus/SOLAR-10.7B-Instruct-v1.0-128k-GGUF/resolve/main/SOLAR-10.7B-Instruct-v1.0-128k-FP16.gguf?download=true)

## Quantization notes
This repo is an alternative version of [SOLAR-10.7B-Instruct-v1.0-128k-GGUF](https://huggingface.co/cgus/SOLAR-10.7B-Instruct-v1.0-128k-GGUF) with additional iMatrix calibration.
This is a quantized model whose base 8k context is expanded 16x with YaRN scaling, for a potential 128k max context.
All quants were made with llama.cpp b2700 and calibrated with an iMatrix file for higher-quality quants.
I used a copy of the default Exllamav2 calibration dataset, which contains diverse data, as the iMatrix dataset.
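
If you want to roll your own quants from the FP16 GGUF above, here is a minimal sketch using llama.cpp's quantization tool (named `quantize` in builds around b2700 and `llama-quantize` in newer ones; the output filename and quant type are just examples):

```sh
# Quantize the FP16 GGUF to Q4_K_M, using imatrix.dat for calibration.
./quantize --imatrix imatrix.dat \
  SOLAR-10.7B-Instruct-v1.0-128k-FP16.gguf \
  SOLAR-10.7B-Instruct-v1.0-128k-Q4_K_M_imat.gguf \
  Q4_K_M
```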

## How to run

These quants should be compatible with any app that supports the GGUF format:

[llama.cpp](https://github.com/ggerganov/llama.cpp)
[Text Generation Webui](https://github.com/oobabooga/text-generation-webui)
[KoboldCPP](https://github.com/LostRuins/koboldcpp)
[LM Studio](https://lmstudio.ai/)
[Jan](https://jan.ai/)
And many others.
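
For instance, a minimal llama.cpp run, assuming the Q4_K_M quant from this repo (the CLI binary is `main` in builds around b2700 and `llama-cli` in current ones; the GGUF carries its YaRN rope-scaling metadata, so you mostly just pick a context size with `-c`):

```sh
# -c sets the context window (anything up to 131072 here), -ngl offloads layers to GPU,
# -e unescapes \n in the prompt; the prompt follows the model's "### User:" template.
./main -m SOLAR-10.7B-Instruct-v1.0-128k-Q4_K_M_imat.gguf \
  -c 32768 -ngl 99 -e \
  -p "### User:\nHello?\n\n### Assistant:\n"
```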

## Original model card:
# **Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling!**

# **With 128k Context!**

**(This model is the fine-tuned version of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) for single-turn conversation.)**


# **Introduction**
We introduce SOLAR-10.7B, an advanced large language model (LLM) with 10.7 billion parameters that demonstrates superior performance in various natural language processing (NLP) tasks. It's compact yet remarkably powerful, and it achieves state-of-the-art performance among models with under 30B parameters.

We present a methodology for scaling LLMs called depth up-scaling (DUS), which encompasses architectural modifications and continued pretraining: we integrated Mistral 7B weights into the upscaled layers and then continued pretraining the entire model.
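
As a rough illustration, depth up-scaling with the layer counts from the [paper](https://arxiv.org/abs/2312.15166) works out as follows (a sketch of the layer arithmetic only, not the actual weight surgery):

```python
# DUS per arXiv:2312.15166: duplicate the n=32-layer base (Mistral 7B),
# drop m=8 layers at the seam of each copy, and stack the halves into
# a single s=48-layer model that is then continually pretrained.
n, m = 32, 8
base = list(range(n))    # layer indices 0..31 of the base model
upper = base[: n - m]    # layers 0..23 (final 8 removed)
lower = base[m:]         # layers 8..31 (first 8 removed)
scaled = upper + lower   # s = 48 layers in total -> ~10.7B parameters
assert len(scaled) == 2 * (n - m) == 48
```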
63
+
64
+
65
+ SOLAR-10.7B has remarkable performance. It outperforms models with up to 30B parameters, even surpassing the recent Mixtral 8X7B model. For detailed information, please refer to the experimental table.
66
+ Solar 10.7B is an ideal choice for fine-tuning. SOLAR-10.7B offers robustness and adaptability for your fine-tuning needs. Our simple instruction fine-tuning using the SOLAR-10.7B pre-trained model yields significant performance improvements.
67
+
68
+ For full details of this model please read our [paper](https://arxiv.org/abs/2312.15166).
69
+
70
+
71
+ # **Instruction Fine-Tuning Strategy**
72
+
73
+ We utilize state-of-the-art instruction fine-tuning methods including supervised fine-tuning (SFT) and direct preference optimization (DPO) [1].
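
For reference, the DPO objective from [1] increases the margin between chosen ($y_w$) and rejected ($y_l$) responses relative to a frozen reference policy $\pi_\text{ref}$:

$$
\mathcal{L}_\text{DPO}(\pi_\theta; \pi_\text{ref}) = -\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}} \left[ \log \sigma \left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_\text{ref}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_\text{ref}(y_l \mid x)} \right) \right]
$$

where $\beta$ controls how far the tuned policy $\pi_\theta$ may drift from the reference.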

We used a mixture of the following datasets:
- c-s-ale/alpaca-gpt4-data (SFT)
- Open-Orca/OpenOrca (SFT)
- in-house generated data utilizing MetaMath [2] (SFT, DPO)
- Intel/orca_dpo_pairs (DPO)
- allenai/ultrafeedback_binarized_cleaned (DPO)

We were careful about data contamination: we did not use GSM8K samples when generating data, and, where applicable, we filtered out tasks via the following list.
```python
# Benchmark-related task names filtered out of the training data.
filtering_task_list = [
    'task228_arc_answer_generation_easy',
    'ai2_arc/ARC-Challenge:1.0.0',
    'ai2_arc/ARC-Easy:1.0.0',
    'task229_arc_answer_generation_hard',
    'hellaswag:1.1.0',
    'task1389_hellaswag_completion',
    'cot_gsm8k',
    'cot_gsm8k_ii',
    'drop:2.0.0',
    'winogrande:1.1.0',
]
```

Using the datasets mentioned above, we applied SFT and iterative DPO training, a proprietary alignment strategy, to maximize the performance of our resulting model.

[1] Rafailov, R., Sharma, A., Mitchell, E., Ermon, S., Manning, C.D. and Finn, C., 2023. Direct preference optimization: Your language model is secretly a reward model. NeurIPS.

[2] Yu, L., Jiang, W., Shi, H., Yu, J., Liu, Z., Zhang, Y., Kwok, J.T., Li, Z., Weller, A. and Liu, W., 2023. MetaMath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284.

# **Data Contamination Test Results**

Recently, there have been contamination issues in some models on the LLM leaderboard.
We note that we made every effort to exclude any benchmark-related datasets from training.
We also ensured the integrity of our model by conducting a data contamination test [3] that is also used by the HuggingFace team [4, 5].

Our results, with `result < 0.1, %:` well below 0.9, indicate that our model is free from contamination.

*The data contamination test results for HellaSwag and Winogrande will be added once [3] supports them.*

| Model | ARC | MMLU | TruthfulQA | GSM8K |
|------------------------------|-------|-------|-------|-------|
| **SOLAR-10.7B-Instruct-v1.0** | result < 0.1, %: 0.06 | result < 0.1, %: 0.15 | result < 0.1, %: 0.28 | result < 0.1, %: 0.70 |

[3] https://github.com/swj0419/detect-pretrain-code-contamination

[4] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/474#657f2245365456e362412a06

[5] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/265#657b6debf81f6b44b8966230

# **Evaluation Results**

| Model | H6 | Model Size |
|----------------------------------------|-------|------------|
| **SOLAR-10.7B-Instruct-v1.0** | **74.20** | **~ 11B** |
| mistralai/Mixtral-8x7B-Instruct-v0.1 | 72.62 | ~ 46.7B |
| 01-ai/Yi-34B-200K | 70.81 | ~ 34B |
| 01-ai/Yi-34B | 69.42 | ~ 34B |
| mistralai/Mixtral-8x7B-v0.1 | 68.42 | ~ 46.7B |
| meta-llama/Llama-2-70b-hf | 67.87 | ~ 70B |
| tiiuae/falcon-180B | 67.85 | ~ 180B |
| **SOLAR-10.7B-v1.0** | **66.04** | **~ 11B** |
| Qwen/Qwen-14B | 65.86 | ~ 14B |
| mistralai/Mistral-7B-Instruct-v0.2 | 65.71 | ~ 7B |
| 01-ai/Yi-34B-Chat | 65.32 | ~ 34B |
| meta-llama/Llama-2-70b-chat-hf | 62.4 | ~ 70B |
| mistralai/Mistral-7B-v0.1 | 60.97 | ~ 7B |
| mistralai/Mistral-7B-Instruct-v0.1 | 54.96 | ~ 7B |

# **Usage Instructions**

This model has been fine-tuned primarily for single-turn conversation, making it less suitable for multi-turn conversations such as chat.

### **Version**

Make sure you have the correct version of the transformers library installed:

```sh
pip install transformers==4.35.2
```

### **Loading the Model**

Use the following Python code to load the model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Upstage/SOLAR-10.7B-Instruct-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "Upstage/SOLAR-10.7B-Instruct-v1.0",
    device_map="auto",           # spread layers across available devices automatically
    torch_dtype=torch.float16,   # load weights in half precision
)
```

### **Conducting Single-Turn Conversation**

```python
conversation = [{'role': 'user', 'content': 'Hello?'}]

# Render the conversation with the model's chat template ("### User:"/"### Assistant:")
# and append the generation prompt so the model answers as the assistant.
prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, use_cache=True, max_length=4096)
output_text = tokenizer.decode(outputs[0])
print(output_text)
```

Below is an example of the output.
```
<s> ### User:
Hello?

### Assistant:
Hello, how can I assist you today? Please feel free to ask any questions or request help with a specific task.</s>
```
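
If you prefer tokens printed as they are generated rather than all at once, a small optional variant of the generation call above uses transformers' `TextStreamer`:

```python
from transformers import TextStreamer

# Streams decoded tokens to stdout as they are produced, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
outputs = model.generate(**inputs, use_cache=True, max_length=4096, streamer=streamer)
```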

### **License**
- [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0): apache-2.0
- [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0): cc-by-nc-4.0
- Since some non-commercial datasets such as Alpaca are used for fine-tuning, we release this model as cc-by-nc-4.0.

### **How to Cite**

Please cite this model using this format.

```bibtex
@misc{kim2023solar,
      title={SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling},
      author={Dahyun Kim and Chanjun Park and Sanghoon Kim and Wonsung Lee and Wonho Song and Yunsu Kim and Hyeonwoo Kim and Yungi Kim and Hyeonju Lee and Jihoo Kim and Changbae Ahn and Seonghoon Yang and Sukyung Lee and Hyunbyung Park and Gyoungjin Gim and Mikyoung Cha and Hwalsuk Lee and Sunghun Kim},
      year={2023},
      eprint={2312.15166},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

### **The Upstage AI Team**
Upstage is creating the best LLMs and DocAI. Please find more information at https://upstage.ai

### **Contact Us**
For any questions and suggestions, please use the discussion tab. If you want to contact us directly, drop an email to [[email protected]](mailto:[email protected])