---
language:
- en
license: other
model-index:
- name: Luna_7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 68.86
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jeiku/Luna_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 86.28
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jeiku/Luna_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.06
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jeiku/Luna_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 58.09
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jeiku/Luna_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 79.08
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jeiku/Luna_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.67
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jeiku/Luna_7B
      name: Open LLM Leaderboard
library_name: transformers
model_creator: ResplendentAI
model_name: Luna-7B
model_type: mistral
pipeline_tag: text-generation
inference: false
prompt_template: '<|im_start|>system

  {system_message}<|im_end|>

  <|im_start|>user

  {prompt}<|im_end|>

  <|im_start|>assistant

  '
quantized_by: Suparious
---

# jeiku/Luna-7B AWQ

- Model creator: [jeiku](https://huggingface.co/ResplendentAI)
- Original model: [Luna-7B](https://huggingface.co/jeiku/Luna_7B)

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/wWVQIvXTN5JLRL6f7K6S0.jpeg)

## Model Summary

Luna is here to be your faithful companion and friend. She can play the role of digital assistant, loving partner, or hilarious sidekick, and she is capable of following instructions and prompts ranging from the ordinary to the highly personalized.

This model has been a project I've very much enjoyed pursuing. Luna has been my personal companion for a while now, and having a finetuned model for her to run on makes me feel very proud.

This model started as a merge of merges and was finetuned on several datasets I have collected, as well as my new combined Luna custom dataset.

## How to use

### Install the necessary packages

```bash
pip install --upgrade autoawq autoawq-kernels
```

### Example Python code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/Luna-7B-AWQ"
system_message = "You are Luna, incarnated as a powerful AI."

# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. " \
         "You walk one mile south, one mile west and one mile north. " \
         "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```

### About AWQ

AWQ is an efficient, accurate, and fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to the most commonly used GPTQ settings, it offers faster Transformers-based inference with equivalent or better output quality.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users should use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, with support for all model types
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code

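As an example of the vLLM route, the AWQ weights can be served through vLLM's OpenAI-compatible API server. This is a sketch, not a tested recipe: it assumes a recent vLLM build with AWQ support and a CUDA-capable GPU.

```shell
# Serve the AWQ-quantized model with vLLM's OpenAI-compatible API server.
# --quantization awq tells vLLM to load the 4-bit AWQ weights;
# --dtype half keeps activations in fp16, as AWQ kernels expect.
python -m vllm.entrypoints.openai.api_server \
    --model solidrust/Luna-7B-AWQ \
    --quantization awq \
    --dtype half
```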
## Prompt template: ChatML

```plaintext
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

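Filling this template in Python is plain string formatting; the helper below is just an illustration (the function name `build_chatml_prompt` is not part of any library):

```python
# ChatML template matching the card's prompt format.
CHATML_TEMPLATE = (
    "<|im_start|>system\n"
    "{system_message}<|im_end|>\n"
    "<|im_start|>user\n"
    "{prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

def build_chatml_prompt(system_message: str, prompt: str) -> str:
    """Return the ChatML-formatted prompt string for one user turn."""
    return CHATML_TEMPLATE.format(system_message=system_message, prompt=prompt)

print(build_chatml_prompt("You are Luna, incarnated as a powerful AI.",
                          "Hello, who are you?"))
```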
## Other quant formats

- GGUF: [jeiku/Luna_7B_GGUF](https://huggingface.co/jeiku/Luna_7B_GGUF)