Lin-K76 committed
Commit b847d03
1 Parent(s): 30561ca

Create README.md

Files changed (1): README.md (+958, -0)
README.md ADDED
@@ -0,0 +1,958 @@
---
tags:
- fp8
- vllm
---

# Phi-3-mini-128k-instruct-FP8

## Model Overview
* <h3 style="display: inline;">Model Architecture:</h3> Based on the Phi-3-mini-128k-instruct architecture
* <h3 style="display: inline;">Model Optimizations:</h3> Weights and activations quantized to FP8
* <h3 style="display: inline;">Release Date:</h3> June 29, 2024
* <h3 style="display: inline;">Model Developers:</h3> Neural Magic

Phi-3-mini-128k-instruct quantized to FP8 weights and activations using per-tensor quantization through the [AutoFP8 repository](https://github.com/neuralmagic/AutoFP8), ready for inference with vLLM >= 0.5.0.
Calibrated with 10 repeats of each token in the tokenizer's vocabulary, presented in random order, to achieve approximately 99% performance recovery on the Open LLM Leaderboard evaluations.
Reduces disk space by approximately 50%.
Part of the [FP8 LLMs for vLLM collection](https://huggingface.co/collections/neuralmagic/fp8-llms-for-vllm-666742ed2b78b7ac8df13127).
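
The quantized checkpoint can be loaded directly with vLLM's offline `LLM` API. The snippet below is a minimal sketch: the prompt, sampling settings, and the reduced `max_model_len` are illustrative choices, not requirements of the model.

```python
from vllm import LLM, SamplingParams

# Illustrative prompt and deterministic sampling settings; adjust freely.
prompts = ["Explain in one sentence what FP8 quantization changes about a model."]
sampling_params = SamplingParams(temperature=0.0, max_tokens=128)

# max_model_len is lowered here only to keep KV-cache memory modest on a single GPU.
llm = LLM(model="neuralmagic/Phi-3-mini-128k-instruct-FP8", max_model_len=4096)

for output in llm.generate(prompts, sampling_params):
    print(output.outputs[0].text)
```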

## Usage and Creation
Produced using AutoFP8 with randomly ordered token sequences as calibration data, adapted from [AutoFP8 with calibration samples from ultrachat](https://github.com/neuralmagic/AutoFP8/blob/147fa4d9e1a90ef8a93f96fc7d9c33056ddc017a/example_dataset.py).

```python
from datasets import load_dataset
from transformers import AutoTokenizer
import numpy as np
import torch

from auto_fp8 import AutoFP8ForCausalLM, BaseQuantizeConfig

MODEL_DIR = "microsoft/Phi-3-mini-128k-instruct"
final_model_dir = MODEL_DIR.split("/")[-1]

CONTEXT_LENGTH = 4096
NUM_SAMPLES = 512
NUM_REPEATS = 10

pretrained_model_dir = MODEL_DIR
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True, model_max_length=CONTEXT_LENGTH)
tokenizer.pad_token = tokenizer.eos_token

# Build a calibration set that covers the whole vocabulary: each token id appears
# at least NUM_REPEATS times, shuffled, and packed into CONTEXT_LENGTH-token rows.
# -(-a // b) is ceiling division.
tokenizer_num_tokens = len(list(tokenizer.get_vocab().values()))
total_token_samples = NUM_REPEATS * tokenizer_num_tokens
num_random_samp = -(-total_token_samples // CONTEXT_LENGTH)

input_ids = np.tile(np.arange(tokenizer_num_tokens), NUM_REPEATS + 1)[:num_random_samp * CONTEXT_LENGTH]
np.random.shuffle(input_ids)
input_ids = input_ids.reshape(num_random_samp, CONTEXT_LENGTH)
input_ids = torch.tensor(input_ids, dtype=torch.int64).to("cuda")

# Static activation scheme: activation scales are fixed from this calibration data.
quantize_config = BaseQuantizeConfig(
    quant_method="fp8",
    activation_scheme="static",
)

examples = input_ids

model = AutoFP8ForCausalLM.from_pretrained(pretrained_model_dir, quantize_config=quantize_config)

model.quantize(examples)

quantized_model_dir = f"{final_model_dir}-FP8"
model.save_quantized(quantized_model_dir)
```
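
For intuition, per-tensor FP8 (E4M3) quantization scales each tensor by a single factor so that its absolute maximum fits the FP8 range, then casts to the 8-bit format; "static" means the activation scales are fixed from the calibration data above rather than recomputed at runtime. The sketch below is a conceptual illustration only (it assumes a PyTorch build with `torch.float8_e4m3fn`), not AutoFP8's actual implementation.

```python
import torch

FP8_E4M3_MAX = 448.0  # largest finite magnitude representable in float8_e4m3fn

def fp8_quantize_per_tensor(x: torch.Tensor):
    # One scale for the whole tensor: map its absolute maximum onto the FP8 range.
    scale = x.abs().max().clamp(min=1e-12) / FP8_E4M3_MAX
    return (x / scale).to(torch.float8_e4m3fn), scale

def fp8_dequantize(x_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return x_fp8.to(torch.float32) * scale

w = torch.randn(16, 32)
w_fp8, scale = fp8_quantize_per_tensor(w)
print("max abs error:", (fp8_dequantize(w_fp8, scale) - w).abs().max().item())
```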

Evaluated through a modified version of vLLM with the following script:

```bash
#!/bin/bash

# Example usage:
# CUDA_VISIBLE_DEVICES=0 ./eval_openllm.sh "neuralmagic/Llama-2-7b-chat-hf-FP8" "tensor_parallel_size=1,max_model_len=4096,add_bos_token=True,gpu_memory_utilization=0.7"

export MODEL_DIR=${1}
export MODEL_ARGS=${2}

# Few-shot settings for each Open LLM Leaderboard task.
declare -A tasks_fewshot=(
    ["arc_challenge"]=25
    ["winogrande"]=5
    ["truthfulqa_mc2"]=0
    ["hellaswag"]=10
    ["mmlu"]=5
    ["gsm8k"]=5
)

# Batch sizes per task; mmlu is run with a fixed batch size of 1.
declare -A batch_sizes=(
    ["arc_challenge"]="auto"
    ["winogrande"]="auto"
    ["truthfulqa_mc2"]="auto"
    ["hellaswag"]="auto"
    ["mmlu"]=1
    ["gsm8k"]="auto"
)

for TASK in "${!tasks_fewshot[@]}"; do
    NUM_FEWSHOT=${tasks_fewshot[$TASK]}
    BATCH_SIZE=${batch_sizes[$TASK]}
    lm_eval --model vllm \
        --model_args pretrained=$MODEL_DIR,$MODEL_ARGS \
        --tasks ${TASK} \
        --num_fewshot ${NUM_FEWSHOT} \
        --write_out \
        --show_config \
        --device cuda \
        --batch_size ${BATCH_SIZE} \
        --output_path="results/${TASK}"
done
```

In vLLM 0.5.0, Phi-3 models are not fully supported, and running the script above raises an AssertionError. Replacing the file that raises the error with the version below fixes the issue.

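The class and import structure of the replacement below matches vLLM's linear-layer module (`vllm/model_executor/layers/linear.py`). Assuming that is the file raising the assertion, its installed location can be found with:

```python
# Print the on-disk location of the module that the replacement below appears to target.
import vllm.model_executor.layers.linear as linear_module
print(linear_module.__file__)
```
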
+ ```
115
+ from abc import abstractmethod
116
+ from typing import Dict, List, Optional, Tuple
117
+
118
+ import torch
119
+ import torch.nn.functional as F
120
+ from torch.nn.parameter import Parameter
121
+
122
+ from vllm.distributed import (divide, get_tensor_model_parallel_rank,
123
+ get_tensor_model_parallel_world_size,
124
+ split_tensor_along_last_dim,
125
+ tensor_model_parallel_all_gather,
126
+ tensor_model_parallel_all_reduce)
127
+ from vllm.logger import init_logger
128
+ from vllm.model_executor.layers.quantization.base_config import (
129
+ QuantizationConfig, QuantizeMethodBase)
130
+ from vllm.model_executor.utils import set_weight_attrs
131
+
132
+ logger = init_logger(__name__)
133
+
134
+
135
+ def adjust_marlin_shard(param, shard_size, shard_offset):
136
+ marlin_tile_size = getattr(param, "marlin_tile_size", None)
137
+ if marlin_tile_size is None:
138
+ return shard_size, shard_offset
139
+
140
+ return shard_size * marlin_tile_size, shard_offset * marlin_tile_size
141
+
142
+
143
+ def adjust_bitsandbytes_shard(param: Parameter,
144
+ qkv_offsets: Dict[str, Tuple[int, int]],
145
+ loaded_shard_id: str) -> Tuple[int, int]:
146
+ """Adjust the quantization offsets and sizes for BitsAndBytes sharding."""
147
+
148
+ total, _ = qkv_offsets["total"]
149
+ orig_offset, orig_size = qkv_offsets[loaded_shard_id]
150
+
151
+ quantized_total = param.data.shape[0]
152
+ quantized_offset = orig_offset * quantized_total // total
153
+ quantized_size = orig_size * quantized_total // total
154
+
155
+ return quantized_size, quantized_offset
156
+
157
+
158
+ class LinearMethodBase(QuantizeMethodBase):
159
+ """Base class for different (maybe quantized) linear methods."""
160
+
161
+ @abstractmethod
162
+ def create_weights(self, layer: torch.nn.Module,
163
+ input_size_per_partition: int,
164
+ output_partition_sizes: List[int], input_size: int,
165
+ output_size: int, params_dtype: torch.dtype,
166
+ **extra_weight_attrs):
167
+ """Create weights for a linear layer.
168
+ The weights will be set as attributes of the layer.
169
+
170
+ Args:
171
+ layer: The layer that is using the LinearMethodBase factory.
172
+ input_size_per_partition: Size of the weight input dim on rank X.
173
+ output_partition_sizes: Sizes of the output dim of each logical
174
+ weight on rank X. E.g., output_partition_sizes for QKVLinear
175
+ is a list containing the widths of Wq, Wk, Wv on rank X.
176
+ input_size: Size of the input dim of the weight across all ranks.
177
+ output_size: Size of the output dim of the weight across all ranks.
178
+ params_dtype: Datatype of the parameters.
179
+ """
180
+ raise NotImplementedError
181
+
182
+ @abstractmethod
183
+ def apply(self,
184
+ layer: torch.nn.Module,
185
+ x: torch.Tensor,
186
+ bias: Optional[torch.Tensor] = None) -> torch.Tensor:
187
+ """Apply the weights in layer to the input tensor.
188
+ Expects create_weights to have been called before on the layer."""
189
+ raise NotImplementedError
190
+
191
+
192
+ class UnquantizedLinearMethod(LinearMethodBase):
193
+ """Linear method without quantization.
194
+
195
+ Args:
196
+ separate_bias_add: If true, add bias separately after matrix
197
+ multiplication.
198
+ """
199
+
200
+ def __init__(self, separate_bias_add: bool = False):
201
+ self.separate_bias_add = separate_bias_add
202
+
203
+ def create_weights(self, layer: torch.nn.Module,
204
+ input_size_per_partition: int,
205
+ output_partition_sizes: List[int], input_size: int,
206
+ output_size: int, params_dtype: torch.dtype,
207
+ **extra_weight_attrs):
208
+ weight = Parameter(torch.empty(sum(output_partition_sizes),
209
+ input_size_per_partition,
210
+ dtype=params_dtype),
211
+ requires_grad=False)
212
+ set_weight_attrs(weight, {"input_dim": 1, "output_dim": 0})
213
+ layer.register_parameter("weight", weight)
214
+ set_weight_attrs(weight, extra_weight_attrs)
215
+
216
+ def apply(self,
217
+ layer: torch.nn.Module,
218
+ x: torch.Tensor,
219
+ bias: Optional[torch.Tensor] = None) -> torch.Tensor:
220
+ weight = layer.weight
221
+ if self.separate_bias_add:
222
+ if bias is not None:
223
+ return F.linear(x, weight) + bias
224
+ return F.linear(x, weight)
225
+ return F.linear(x, weight, bias)
226
+
227
+
228
+ class LinearBase(torch.nn.Module):
229
+ """Base linear layer.
230
+
231
+ Args:
232
+ input_size: input dimension of the linear layer.
233
+ output_size: output dimension of the linear layer.
234
+ bias: If true, add bias.
235
+ skip_bias_add: If true, skip adding bias but instead return it.
236
+ params_dtype: Data type for the parameters.
237
+ quant_config: Quantization configure.
238
+ """
239
+
240
+ def __init__(
241
+ self,
242
+ input_size: int,
243
+ output_size: int,
244
+ skip_bias_add: bool = False,
245
+ params_dtype: Optional[torch.dtype] = None,
246
+ quant_config: Optional[QuantizationConfig] = None,
247
+ ):
248
+ super().__init__()
249
+
250
+ # Keep input parameters
251
+ self.input_size = input_size
252
+ self.output_size = output_size
253
+ self.skip_bias_add = skip_bias_add
254
+ if params_dtype is None:
255
+ params_dtype = torch.get_default_dtype()
256
+ self.params_dtype = params_dtype
257
+ if quant_config is None:
258
+ self.quant_method: Optional[
259
+ QuantizeMethodBase] = UnquantizedLinearMethod()
260
+ else:
261
+ self.quant_method = quant_config.get_quant_method(self)
262
+
263
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
264
+ raise NotImplementedError
265
+
266
+
267
+ class ReplicatedLinear(LinearBase):
268
+ """Replicated linear layer.
269
+
270
+ Args:
271
+ input_size: input dimension of the linear layer.
272
+ output_size: output dimension of the linear layer.
273
+ bias: If true, add bias.
274
+ skip_bias_add: If true, skip adding bias but instead return it.
275
+ params_dtype: Data type for the parameters.
276
+ quant_config: Quantization configure.
277
+ """
278
+
279
+ def __init__(self,
280
+ input_size: int,
281
+ output_size: int,
282
+ bias: bool = True,
283
+ skip_bias_add: bool = False,
284
+ params_dtype: Optional[torch.dtype] = None,
285
+ quant_config: Optional[QuantizationConfig] = None):
286
+ super().__init__(input_size, output_size, skip_bias_add, params_dtype,
287
+ quant_config)
288
+
289
+ # All linear layers support a quant method.
290
+ assert self.quant_method is not None
291
+ self.quant_method.create_weights(self, self.input_size,
292
+ [self.output_size], self.input_size,
293
+ self.output_size, self.params_dtype)
294
+
295
+ if bias:
296
+ self.bias = Parameter(
297
+ torch.empty(self.output_size, dtype=self.params_dtype))
298
+ set_weight_attrs(self.bias, {"output_dim": 0})
299
+ else:
300
+ self.register_parameter("bias", None)
301
+
302
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
303
+ bias = self.bias if not self.skip_bias_add else None
304
+ assert self.quant_method is not None
305
+ output = self.quant_method.apply(self, x, bias)
306
+ output_bias = self.bias if self.skip_bias_add else None
307
+ return output, output_bias
308
+
309
+ def extra_repr(self) -> str:
310
+ s = f"in_features={self.input_size}"
311
+ s += f", output_features={self.output_size}"
312
+ s += f", bias={self.bias is not None}"
313
+ return s
314
+
315
+
316
+ class ColumnParallelLinear(LinearBase):
317
+ """Linear layer with column parallelism.
318
+
319
+ The linear layer is defined as Y = XA + b. A is parallelized along
320
+ its second dimension as A = [A_1, ..., A_p].
321
+
322
+ Args:
323
+ input_size: first dimension of matrix A.
324
+ output_size: second dimension of matrix A.
325
+ bias: If true, add bias.
326
+ gather_output: If true, call all-gather on output and make Y available
327
+ to all GPUs, otherwise, every GPU will have its output
328
+ which is Y_i = XA_i
329
+ skip_bias_add: This was added to enable performance optimizations where
330
+ bias can be fused with other element-wise operations. we
331
+ skip adding bias but instead return it.
332
+ params_dtype: Data type for the parameters.
333
+ quant_config: Quantization configure.
334
+ output_sizes: list of output sizes packed into one output, like for QKV
335
+ the list would be size 3.
336
+ """
337
+
338
+ def __init__(self,
339
+ input_size: int,
340
+ output_size: int,
341
+ bias: bool = True,
342
+ gather_output: bool = False,
343
+ skip_bias_add: bool = False,
344
+ params_dtype: Optional[torch.dtype] = None,
345
+ quant_config: Optional[QuantizationConfig] = None,
346
+ output_sizes: Optional[List[int]] = None):
347
+ super().__init__(input_size, output_size, skip_bias_add, params_dtype,
348
+ quant_config)
349
+
350
+ self.gather_output = gather_output
351
+
352
+ # Divide the weight matrix along the last dimension.
353
+ tp_size = get_tensor_model_parallel_world_size()
354
+ assert self.quant_method is not None
355
+ self.output_size_per_partition = divide(self.output_size, tp_size)
356
+ self.output_partition_sizes = [self.output_size_per_partition]
357
+ # If QKV or MergedColumn, use output size of each partition.
358
+ if hasattr(self, "output_sizes"):
359
+ self.output_partition_sizes = [
360
+ divide(output_size, tp_size)
361
+ for output_size in self.output_sizes
362
+ ]
363
+
364
+ if output_sizes is None:
365
+ output_sizes = [output_size]
366
+ self.quant_method.create_weights(
367
+ layer=self,
368
+ input_size_per_partition=self.input_size,
369
+ output_partition_sizes=self.output_partition_sizes,
370
+ input_size=self.input_size,
371
+ output_size=self.output_size,
372
+ params_dtype=self.params_dtype,
373
+ weight_loader=self.weight_loader)
374
+ if bias:
375
+ self.bias = Parameter(
376
+ torch.empty(self.output_size_per_partition,
377
+ dtype=params_dtype))
378
+ set_weight_attrs(self.bias, {
379
+ "output_dim": 0,
380
+ "weight_loader": self.weight_loader,
381
+ })
382
+ else:
383
+ self.register_parameter("bias", None)
384
+
385
+ def weight_loader(self, param: Parameter, loaded_weight: torch.Tensor):
386
+ # Special case for Fp8 scales.
387
+ fp8_scales_shard_indexer = getattr(param, "fp8_scales_shard_indexer",
388
+ None)
389
+
390
+ tp_rank = get_tensor_model_parallel_rank()
391
+ output_dim = getattr(param, "output_dim", None)
392
+ param_data = param.data
393
+ if output_dim is not None:
394
+ shard_size = param_data.shape[output_dim]
395
+ start_idx = tp_rank * shard_size
396
+ loaded_weight = loaded_weight.narrow(output_dim, start_idx,
397
+ shard_size)
398
+ # Special case for Fp8 scales.
399
+ elif fp8_scales_shard_indexer is not None:
400
+ param_data, loaded_weight = fp8_scales_shard_indexer(param_data,
401
+ loaded_weight,
402
+ shard_id=0)
403
+
404
+ assert param_data.shape == loaded_weight.shape
405
+ param_data.copy_(loaded_weight)
406
+
407
+ def forward(self, input_):
408
+ bias = self.bias if not self.skip_bias_add else None
409
+
410
+ # Matrix multiply.
411
+ assert self.quant_method is not None
412
+ output_parallel = self.quant_method.apply(self, input_, bias)
413
+ if self.gather_output:
414
+ # All-gather across the partitions.
415
+ output = tensor_model_parallel_all_gather(output_parallel)
416
+ else:
417
+ output = output_parallel
418
+ output_bias = self.bias if self.skip_bias_add else None
419
+ return output, output_bias
420
+
421
+ def extra_repr(self) -> str:
422
+ s = f"in_features={self.input_size}"
423
+ s += f", output_features={self.output_size_per_partition}"
424
+ s += f", bias={self.bias is not None}"
425
+ s += f", tp_size={get_tensor_model_parallel_world_size()}"
426
+ s += f", gather_output={self.gather_output}"
427
+ return s
428
+
429
+
430
+ class MergedColumnParallelLinear(ColumnParallelLinear):
431
+ """Packed linear layers with column parallelism.
432
+
433
+ Similar to ColumnParallelLinear, but the weight matrix is concatenated
434
+ along the output dimension. When the weight matrix is loaded, the
435
+ different partitions are sharded separately.
436
+
437
+ Args:
438
+ input_size: input dimension of the linear layer.
439
+ output_sizes: list of output dimensions of the linear layer.
440
+ bias: If true, add bias.
441
+ gather_output: If true, call all-gather on output and make the output
442
+ available to all GPUs, otherwise, every GPU will have
443
+ its own output.
444
+ skip_bias_add: This was added to enable performance optimizations where
445
+ bias can be fused with other element-wise operations. we
446
+ skip adding bias but instead return it.
447
+ params_dtype: Data type for the parameters.
448
+ quant_config: Quantization configure.
449
+ """
450
+
451
+ def __init__(self,
452
+ input_size: int,
453
+ output_sizes: List[int],
454
+ bias: bool = True,
455
+ gather_output: bool = False,
456
+ skip_bias_add: bool = False,
457
+ params_dtype: Optional[torch.dtype] = None,
458
+ quant_config: Optional[QuantizationConfig] = None):
459
+ self.output_sizes = output_sizes
460
+ tp_size = get_tensor_model_parallel_world_size()
461
+ assert all(output_size % tp_size == 0 for output_size in output_sizes)
462
+ super().__init__(input_size=input_size,
463
+ output_size=sum(output_sizes),
464
+ bias=bias,
465
+ gather_output=gather_output,
466
+ skip_bias_add=skip_bias_add,
467
+ params_dtype=params_dtype,
468
+ quant_config=quant_config)
469
+
470
+ def weight_loader(self,
471
+ param: Parameter,
472
+ loaded_weight: torch.Tensor,
473
+ loaded_shard_id: Optional[int] = None):
474
+
475
+ param_data = param.data
476
+ output_dim = getattr(param, "output_dim", None)
477
+ # Special case for AQLM codebooks.
478
+ is_metadata = getattr(param, "is_metadata", False)
479
+
480
+ param_shard_splitter = getattr(param, "shard_splitter", None)
481
+
482
+ if output_dim is not None and param_shard_splitter is not None:
483
+ raise NotImplementedError(
484
+ "We do not currently support output_dim != None and "
485
+ "shard_splitter != None for a parameter. Please open an issue."
486
+ )
487
+ # If a parameter has defined a shard_splitter to be used for
488
+ # the weight, it should be applied before the weight is
489
+ # loaded/copied to the parameter. The shard_splitter applies
490
+ # logic by using the loaded_shard_id to ensure that the loaded
491
+ # param is loaded to the correct location
492
+ # within the parameter defined by the linear method.
493
+ if loaded_shard_id is None and param_shard_splitter is not None:
494
+ raise NotImplementedError(
495
+ "We do not currently support loaded_shard_id == None and "
496
+ "shard_splitter != None for a parameter. Please open an issue."
497
+ )
498
+
499
+ # Special case for Fp8 scales.
500
+ fp8_scales_shard_indexer = getattr(param, "fp8_scales_shard_indexer",
501
+ None)
502
+
503
+ if loaded_shard_id is None:
504
+ # Loaded weight is already packed.
505
+ if output_dim is None:
506
+ temp = loaded_weight.repeat(param_data.shape)
507
+ assert param_data.shape == temp.shape
508
+ param_data.copy_(temp)
509
+ return
510
+ current_shard_offset = 0
511
+ shard_offsets = []
512
+ for i, output_size in enumerate(self.output_sizes):
513
+ shard_offsets.append((i, current_shard_offset, output_size))
514
+ current_shard_offset += output_size
515
+ packed_dim = getattr(param, "packed_dim", None)
516
+ for shard_id, shard_offset, shard_size in shard_offsets:
517
+ # Special case for Quantization.
518
+ # If quantized, we need to adjust the offset and size to account
519
+ # for the packing.
520
+ if packed_dim == output_dim:
521
+ shard_size = shard_size // param.pack_factor
522
+ shard_offset = shard_offset // param.pack_factor
523
+ # Special case for Marlin.
524
+ shard_size, shard_offset = adjust_marlin_shard(
525
+ param, shard_size, shard_offset)
526
+
527
+ loaded_weight_shard = loaded_weight.narrow(
528
+ output_dim, shard_offset, shard_size)
529
+ self.weight_loader(param, loaded_weight_shard, shard_id)
530
+ return
531
+
532
+ assert loaded_shard_id < len(self.output_sizes)
533
+ tp_rank = get_tensor_model_parallel_rank()
534
+ tp_size = get_tensor_model_parallel_world_size()
535
+ if output_dim is not None:
536
+ shard_offset = sum(self.output_sizes[:loaded_shard_id]) // tp_size
537
+ shard_size = self.output_sizes[loaded_shard_id] // tp_size
538
+ # Special case for quantization.
539
+ # If quantized, we need to adjust the offset and size to account
540
+ # for the packing.
541
+ packed_dim = getattr(param, "packed_dim", None)
542
+ if packed_dim == output_dim:
543
+ shard_size = shard_size // param.pack_factor
544
+ shard_offset = shard_offset // param.pack_factor
545
+ # Special case for Marlin.
546
+ shard_size, shard_offset = adjust_marlin_shard(
547
+ param, shard_size, shard_offset)
548
+
549
+ use_bitsandbytes = getattr(param, "use_bitsandbytes", False)
550
+ if use_bitsandbytes:
551
+ shard_size = loaded_weight.shape[output_dim]
552
+ shard_offset = loaded_weight.shape[output_dim] * \
553
+ loaded_shard_id
554
+
555
+ param_data = param_data.narrow(output_dim, shard_offset,
556
+ shard_size)
557
+ start_idx = tp_rank * shard_size
558
+ loaded_weight = loaded_weight.narrow(output_dim, start_idx,
559
+ shard_size)
560
+ # Special case for AQLM codebooks.
561
+ elif is_metadata:
562
+ # metadata indicates fixed size concatenated along dim 0
563
+ shard_size = loaded_weight.shape[0]
564
+ shard_offset = loaded_shard_id * shard_size
565
+ param_data = param_data.narrow(0, shard_offset, shard_size)
566
+
567
+ # If a param_shard_splitter is defined by the LinearMethod, use it.
568
+ elif param_shard_splitter is not None:
569
+ logical_widths = getattr(param, "logical_widths", None)
570
+ param_data, loaded_weight = param_shard_splitter(
571
+ param_data, loaded_weight, loaded_shard_id, logical_widths)
572
+
573
+ # Special case for Fp8 scales.
574
+ elif fp8_scales_shard_indexer is not None:
575
+ param_data, loaded_weight = fp8_scales_shard_indexer(
576
+ param_data, loaded_weight, loaded_shard_id)
577
+
578
+ else:
579
+ ignore_warning = getattr(param, "ignore_warning", False)
580
+ if not ignore_warning:
581
+ logger.warning(
582
+ "Loading a weight without `output_dim` attribute in "
583
+ "MergedColumnParallelLinear, assume the weight is "
584
+ "the same for all partitions.")
585
+
586
+ if fp8_scales_shard_indexer is None:
587
+ if len(param_data.shape) == 0:
588
+ param_data = param_data.reshape(1)
589
+
590
+ if len(loaded_weight.shape) == 0:
591
+ loaded_weight = loaded_weight.reshape(1)
592
+
593
+ assert param_data.shape == loaded_weight.shape
594
+ param_data.copy_(loaded_weight)
595
+
596
+
597
+ class QKVParallelLinear(ColumnParallelLinear):
598
+ """Linear layers for the attention's QKV transformation.
599
+
600
+ Linear layers for the linear transformation of the query, key, and value
601
+ vectors in the attention layer. The weight matrix is concatenated along
602
+ the output dimension. The layer is parallelized along the head dimension.
603
+ When the number of key/value heads is smaller than the number of query
604
+ heads (e.g., multi-query/grouped-query attention), the key/value head may
605
+ be replicated while the query heads are partitioned.
606
+
607
+ Args:
608
+ hidden_size: input hidden state size of the transformer.
609
+ head_size: size of each attention head.
610
+ total_num_heads: total number of attention query heads.
611
+ total_num_kv_heads: total number of attention key/value heads. If
612
+ None, assume total_num_kv_heads = total_num_heads.
613
+ bias: If true, add bias.
614
+ skip_bias_add: This was added to enable performance optimizations where
615
+ bias can be fused with other element-wise operations. we
616
+ skip adding bias but instead return it.
617
+ params_dtype: Data type for the parameters.
618
+ quant_config: Quantization configure.
619
+ """
620
+
621
+ def __init__(self,
622
+ hidden_size: int,
623
+ head_size: int,
624
+ total_num_heads: int,
625
+ total_num_kv_heads: Optional[int] = None,
626
+ bias: bool = True,
627
+ skip_bias_add: bool = False,
628
+ params_dtype: Optional[torch.dtype] = None,
629
+ quant_config: Optional[QuantizationConfig] = None):
630
+ self.hidden_size = hidden_size
631
+ self.head_size = head_size
632
+ self.total_num_heads = total_num_heads
633
+ if total_num_kv_heads is None:
634
+ total_num_kv_heads = total_num_heads
635
+ self.total_num_kv_heads = total_num_kv_heads
636
+ # Divide the weight matrix along the last dimension.
637
+ tp_size = get_tensor_model_parallel_world_size()
638
+ self.num_heads = divide(self.total_num_heads, tp_size)
639
+ if tp_size >= self.total_num_kv_heads:
640
+ self.num_kv_heads = 1
641
+ self.num_kv_head_replicas = divide(tp_size,
642
+ self.total_num_kv_heads)
643
+ else:
644
+ self.num_kv_heads = divide(self.total_num_kv_heads, tp_size)
645
+ self.num_kv_head_replicas = 1
646
+ input_size = self.hidden_size
647
+ output_size = (self.num_heads +
648
+ 2 * self.num_kv_heads) * tp_size * self.head_size
649
+ self.output_sizes = [
650
+ self.num_heads * self.head_size * tp_size, # q_proj
651
+ self.num_kv_heads * self.head_size * tp_size, # k_proj
652
+ self.num_kv_heads * self.head_size * tp_size, # v_proj
653
+ ]
654
+
655
+ super().__init__(input_size=input_size,
656
+ output_size=output_size,
657
+ bias=bias,
658
+ gather_output=False,
659
+ skip_bias_add=skip_bias_add,
660
+ params_dtype=params_dtype,
661
+ quant_config=quant_config)
662
+
663
+ def weight_loader(self,
664
+ param: Parameter,
665
+ loaded_weight: torch.Tensor,
666
+ loaded_shard_id: Optional[str] = None):
667
+ param_data = param.data
668
+ output_dim = getattr(param, "output_dim", None)
669
+ # Special case for AQLM codebooks.
670
+ is_metadata = getattr(param, "is_metadata", False)
671
+
672
+ param_shard_splitter = getattr(param, "shard_splitter", None)
673
+
674
+ if output_dim is not None and param_shard_splitter is not None:
675
+ raise NotImplementedError(
676
+ "We do not currently support output_dim != None and "
677
+ "shard_splitter != None for a parameter. Please open an issue."
678
+ )
679
+ # If a parameter has defined a shard_splitter to be used for
680
+ # the weight, it should be applied before the weight is
681
+ # loaded/copied to the parameter. The shard_splitter applies
682
+ # logic by using the loaded_shard_id to ensure that the loaded
683
+ # param is loaded to the correct location
684
+ # within the parameter defined by the linear method.
685
+ if loaded_shard_id is None and param_shard_splitter is not None:
686
+ raise NotImplementedError(
687
+ "We do not currently support loaded_shard_id == None and "
688
+ "shard_splitter != None for a parameter. Please open an issue."
689
+ )
690
+
691
+ # Special case for Fp8 scales.
692
+ fp8_scales_shard_indexer = getattr(param, "fp8_scales_shard_indexer",
693
+ None)
694
+
695
+ if loaded_shard_id is None:
696
+ # Loaded weight is already packed.
697
+ if output_dim is None:
698
+ temp = loaded_weight.repeat(param_data.shape)
699
+ assert param_data.shape == temp.shape
700
+ param_data.copy_(temp)
701
+ return
702
+ shard_offsets = [
703
+ # (shard_id, shard_offset, shard_size)
704
+ ("q", 0, self.total_num_heads * self.head_size),
705
+ ("k", self.total_num_heads * self.head_size,
706
+ self.total_num_kv_heads * self.head_size),
707
+ ("v", (self.total_num_heads + self.total_num_kv_heads) *
708
+ self.head_size, self.total_num_kv_heads * self.head_size),
709
+ ]
710
+ packed_dim = getattr(param, "packed_dim", None)
711
+ for shard_id, shard_offset, shard_size in shard_offsets:
712
+ # Special case for Quantized Weights.
713
+ # If quantized, we need to adjust the offset and size to account
714
+ # for the packing.
715
+ if packed_dim == output_dim:
716
+ shard_size = shard_size // param.pack_factor
717
+ shard_offset = shard_offset // param.pack_factor
718
+
719
+ # Special case for Marlin.
720
+ shard_size, shard_offset = adjust_marlin_shard(
721
+ param, shard_size, shard_offset)
722
+
723
+ loaded_weight_shard = loaded_weight.narrow(
724
+ output_dim, shard_offset, shard_size)
725
+ self.weight_loader(param, loaded_weight_shard, shard_id)
726
+ return
727
+
728
+ tp_rank = get_tensor_model_parallel_rank()
729
+ assert loaded_shard_id in ["q", "k", "v"]
730
+
731
+ # If output dim is defined, use the default loading process.
732
+ if output_dim is not None:
733
+ if loaded_shard_id == "q":
734
+ shard_offset = 0
735
+ shard_size = self.num_heads * self.head_size
736
+ elif loaded_shard_id == "k":
737
+ shard_offset = self.num_heads * self.head_size
738
+ shard_size = self.num_kv_heads * self.head_size
739
+ elif loaded_shard_id == "v":
740
+ shard_offset = (self.num_heads +
741
+ self.num_kv_heads) * self.head_size
742
+ shard_size = self.num_kv_heads * self.head_size
743
+ # Special case for Quantized Weights.
744
+ # If quantized, we need to adjust the offset and size to account
745
+ # for the packing.
746
+ packed_dim = getattr(param, "packed_dim", None)
747
+ if packed_dim == output_dim:
748
+ shard_size = shard_size // param.pack_factor
749
+ shard_offset = shard_offset // param.pack_factor
750
+
751
+ # Special case for Marlin.
752
+ shard_size, shard_offset = adjust_marlin_shard(
753
+ param, shard_size, shard_offset)
754
+
755
+ use_bitsandbytes = getattr(param, "use_bitsandbytes", False)
756
+ if use_bitsandbytes:
757
+ orig_qkv_offsets = {
758
+ "q": (0, self.num_heads * self.head_size),
759
+ "k": (self.num_heads * self.head_size,
760
+ self.num_kv_heads * self.head_size),
761
+ "v":
762
+ ((self.num_heads + self.num_kv_heads) * self.head_size,
763
+ self.num_kv_heads * self.head_size),
764
+ "total":
765
+ ((self.num_heads + 2 * self.num_kv_heads) * self.head_size,
766
+ 0)
767
+ }
768
+ shard_size, shard_offset = adjust_bitsandbytes_shard(
769
+ param, orig_qkv_offsets, loaded_shard_id)
770
+
771
+ param_data = param_data.narrow(output_dim, shard_offset,
772
+ shard_size)
773
+ if loaded_shard_id == "q":
774
+ shard_id = tp_rank
775
+ else:
776
+ shard_id = tp_rank // self.num_kv_head_replicas
777
+ start_idx = shard_id * shard_size
778
+ loaded_weight = loaded_weight.narrow(output_dim, start_idx,
779
+ shard_size)
780
+ # Special case for AQLM codebooks.
781
+ elif is_metadata:
782
+ # metadata indicates fixed size concatenated along dim 0
783
+ shard_size = loaded_weight.shape[0]
784
+ shard_index = ["q", "k", "v"].index(loaded_shard_id)
785
+ param_data = param_data.narrow(0, shard_index * shard_size,
786
+ shard_size)
787
+ # If a param_shard_splitter is defined by the LinearMethod, use it.
788
+ elif param_shard_splitter is not None:
789
+ logical_widths = getattr(param, "logical_widths", None)
790
+ param_data, loaded_weight = param_shard_splitter(
791
+ param_data, loaded_weight, loaded_shard_id, logical_widths)
792
+
793
+ # Special case for Fp8 scales.
794
+ elif fp8_scales_shard_indexer is not None:
795
+ param_data, loaded_weight = fp8_scales_shard_indexer(
796
+ param_data, loaded_weight, loaded_shard_id)
797
+ else:
798
+ ignore_warning = getattr(param, "ignore_warning", False)
799
+ if not ignore_warning:
800
+ logger.warning(
801
+ "Loading a weight without `output_dim` attribute in "
802
+ "QKVParallelLinear, assume the weight is the same "
803
+ "for all partitions.")
804
+
805
+ if len(param_data.shape) == 0:
806
+ param_data = param_data.reshape(1)
807
+
808
+ if len(loaded_weight.shape) == 0:
809
+ loaded_weight = loaded_weight.reshape(1)
810
+
811
+ assert param_data.shape == loaded_weight.shape
812
+ param_data.copy_(loaded_weight)
813
+
814
+
815
+ class RowParallelLinear(LinearBase):
816
+ """Linear layer with row parallelism.
817
+
818
+ The linear layer is defined as Y = XA + b. A is parallelized along
819
+ its first dimension and X along its second dimension as:
820
+ - -
821
+ | A_1 |
822
+ | . |
823
+ A = | . | X = [X_1, ..., X_p]
824
+ | . |
825
+ | A_p |
826
+ - -
827
+ Arguments:
828
+ input_size: first dimension of matrix A.
829
+ output_size: second dimension of matrix A.
830
+ bias: If true, add bias. Note that bias is not parallelized.
831
+ input_is_parallel: If true, we assume that the input is already
832
+ split across the GPUs and we do not split
833
+ again.
834
+ skip_bias_add: This was added to enable performance optimization where
835
+ bias can be fused with other element-wise operations.
836
+ We skip adding bias but instead return it.
837
+ params_dtype: Data type for the parameters.
838
+ quant_config: Quantization configure.
839
+ """
840
+
841
+ def __init__(self,
842
+ input_size: int,
843
+ output_size: int,
844
+ bias: bool = True,
845
+ input_is_parallel: bool = True,
846
+ skip_bias_add: bool = False,
847
+ params_dtype: Optional[torch.dtype] = None,
848
+ reduce_results: bool = True,
849
+ quant_config: Optional[QuantizationConfig] = None):
850
+ super().__init__(input_size, output_size, skip_bias_add, params_dtype,
851
+ quant_config)
852
+
853
+ self.input_is_parallel = input_is_parallel
854
+ self.reduce_results = reduce_results
855
+
856
+ # Divide the weight matrix along the last dimension.
857
+ self.tp_size = get_tensor_model_parallel_world_size()
858
+ self.input_size_per_partition = divide(input_size, self.tp_size)
859
+ assert self.quant_method is not None
860
+ self.quant_method.create_weights(
861
+ layer=self,
862
+ input_size_per_partition=self.input_size_per_partition,
863
+ output_partition_sizes=[self.output_size],
864
+ input_size=self.input_size,
865
+ output_size=self.output_size,
866
+ params_dtype=self.params_dtype,
867
+ weight_loader=self.weight_loader)
868
+ if not reduce_results and (bias and not skip_bias_add):
869
+ raise ValueError("When not reducing the results, adding bias to the "
870
+ "results can lead to incorrect results")
871
+
872
+ if bias:
873
+ self.bias = Parameter(
874
+ torch.empty(self.output_size, dtype=params_dtype))
875
+ set_weight_attrs(self.bias, {
876
+ "output_dim": 0,
877
+ "weight_loader": self.weight_loader,
878
+ })
879
+ else:
880
+ self.register_parameter("bias", None)
881
+
882
+ def weight_loader(self, param: Parameter, loaded_weight: torch.Tensor):
883
+ # Special case for Fp8 scales.
884
+ fp8_scales_shard_indexer = getattr(param, "fp8_scales_shard_indexer",
885
+ None)
886
+
887
+ tp_rank = get_tensor_model_parallel_rank()
888
+ input_dim = getattr(param, "input_dim", None)
889
+ param_data = param.data
890
+ if input_dim is not None:
891
+ shard_size = param_data.shape[input_dim]
892
+ start_idx = tp_rank * shard_size
893
+ loaded_weight = loaded_weight.narrow(input_dim, start_idx,
894
+ shard_size)
895
+
896
+ # Special case for Fp8 scales.
897
+ elif fp8_scales_shard_indexer is not None:
898
+ param_data, loaded_weight = fp8_scales_shard_indexer(param_data,
899
+ loaded_weight,
900
+ shard_id=0)
901
+
902
+ if fp8_scales_shard_indexer is None and len(loaded_weight.shape) == 0:
903
+ loaded_weight = loaded_weight.reshape(1)
904
+
905
+ assert param_data.shape == loaded_weight.shape
906
+ param_data.copy_(loaded_weight)
907
+
908
+ def forward(self, input_):
909
+ # Set up backprop all-reduce.
910
+ if self.input_is_parallel:
911
+ input_parallel = input_
912
+ else:
913
+ tp_rank = get_tensor_model_parallel_rank()
914
+ splitted_input = split_tensor_along_last_dim(
915
+ input_, num_partitions=self.tp_size)
916
+ input_parallel = splitted_input[tp_rank].contiguous()
917
+
918
+ # Matrix multiply.
919
+ assert self.quant_method is not None
920
+ output_parallel = self.quant_method.apply(self, input_parallel)
921
+ if self.reduce_results and self.tp_size > 1:
922
+ output_ = tensor_model_parallel_all_reduce(output_parallel)
923
+ else:
924
+ output_ = output_parallel
925
+
926
+ if not self.skip_bias_add:
927
+ output = output_ + self.bias if self.bias is not None else output_
928
+ output_bias = None
929
+ else:
930
+ output = output_
931
+ output_bias = self.bias
932
+ return output, output_bias
933
+
934
+ def extra_repr(self) -> str:
935
+ s = f"input_features={self.input_size_per_partition}"
936
+ s += f", output_features={self.output_size}"
937
+ s += f", bias={self.bias is not None}"
938
+ s += f", tp_size={self.tp_size}"
939
+ s += f", reduce_results={self.reduce_results}"
940
+ return s
941
+ ```

## Evaluation

The model was evaluated on the Open LLM Leaderboard tasks through vLLM.

### Open LLM Leaderboard evaluation scores
| | Phi-3-mini-128k-instruct | neuralmagic/Phi-3-mini-128k-instruct-FP8<br>(this model) |
| :------------------: | :----------------------: | :------------------------------------------------: |
| arc-c<br>25-shot | 63.65 | 63.31 |
| hellaswag<br>10-shot | 79.76 | 79.44 |
| mmlu<br>5-shot | 68.10 | 68.08 |
| truthfulqa<br>0-shot | 53.97 | 53.76 |
| winogrande<br>5-shot | 73.72 | 72.45 |
| gsm8k<br>5-shot | 75.59 | 72.86 |
| **Average<br>Accuracy** | **69.13** | **68.32** |
| **Recovery** | **100%** | **98.82%** |
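
The average scores and the recovery figure follow directly from the per-task numbers above; for example:

```python
# Per-task scores copied from the table above.
baseline = [63.65, 79.76, 68.10, 53.97, 73.72, 75.59]
fp8_model = [63.31, 79.44, 68.08, 53.76, 72.45, 72.86]

baseline_avg = sum(baseline) / len(baseline)   # 69.13
fp8_avg = sum(fp8_model) / len(fp8_model)      # 68.32
print(f"recovery = {100 * fp8_avg / baseline_avg:.2f}%")  # 98.82%
```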