kvaishnavi committed on
Commit aaafbba
1 Parent(s): c153c49

Upload Phi-3-medium-128k-instruct ONNX models

LICENSE ADDED
@@ -0,0 +1,223 @@
+ MIT License
+
+ Copyright (c) Microsoft Corporation.
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ ============================================================================
+
+ Copyright 2016-2019 Intel Corporation
+ Copyright 2018 YANDEX LLC
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+ This distribution includes third party software ("third party programs").
+ This third party software, even if included with the distribution of
+ the Intel software, may be governed by separate license terms, including
+ without limitation, third party license terms, other Intel software license
+ terms, and open source software license terms. These separate license terms
+ govern your use of the third party programs as set forth in the
+ "THIRD-PARTY-PROGRAMS" file.
README.md ADDED
@@ -0,0 +1,98 @@
+ ---
+ license: mit
+ pipeline_tag: text-generation
+ tags:
+ - ONNX
+ - DML
+ - ONNXRuntime
+ - phi3
+ - nlp
+ - conversational
+ - custom_code
+ inference: false
+ ---
+
+ # Phi-3 Medium-128K-Instruct ONNX DirectML models
+
+ <!-- Provide a quick summary of what the model is/does. -->
+ This repository hosts optimized versions of [Phi-3-medium-128k-instruct](https://aka.ms/phi3-medium-128K-instruct) to accelerate inference with DirectML and ONNX Runtime on machines with GPUs.
+
+ Phi-3 Medium is a 14B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family; the Medium version comes in two variants, [4K](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct), which are the context lengths (in tokens) that they can support.
+
+ The model underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety. When assessed against benchmarks testing common sense, language understanding, math, code, long context, and logical reasoning, Phi-3-Medium-128K-Instruct showcased robust, state-of-the-art performance among models of the same size and the next size up.
+
+ Optimized variants of the Phi-3 Medium models are published here in [ONNX](https://onnx.ai) format and run with [DirectML](https://learn.microsoft.com/en-us/windows/ai/directml/dml-intro). This lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
+
+ ## ONNX Models
+
+ Here are some of the optimized configurations we have added:
+
+ 1. ONNX model for INT4 DML: ONNX model optimized to run with DirectML and quantized to int4 precision using AWQ (see the Appendix).
+
+ How do you know which is the best ONNX model for you?
+ - Are you on a Windows machine with a GPU?
+   - I don't know → Review this [guide](https://www.microsoft.com/en-us/windows/learning-center/how-to-check-gpu) to see whether you have a GPU in your Windows machine.
+   - Yes → Access the Hugging Face DirectML ONNX models and instructions at [Phi-3-medium-128k-instruct-onnx-directml](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-directml).
+   - No → Do you have an NVIDIA GPU?
+     - I don't know → Review this [guide](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html#verify-you-have-a-cuda-capable-gpu) to see whether you have a CUDA-capable GPU.
+     - Yes → Access the Hugging Face CUDA ONNX models and instructions at [Phi-3-medium-128k-instruct-onnx-cuda](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda) for NVIDIA GPUs.
+     - No → Access the Hugging Face ONNX models for CPU devices and instructions at [Phi-3-medium-128k-instruct-onnx-cpu](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cpu).
+
+ ## How to Get Started with the Model
+ To support the Phi-3 models across a range of devices, platforms, and execution provider (EP) backends, we introduce a new API that wraps several aspects of generative AI inferencing. This API makes it easy to drag and drop LLMs straight into your app. To run the early version of these models with ONNX Runtime, follow the steps [here](http://aka.ms/generate-tutorial). You can also test this with a [chat app](https://github.com/microsoft/onnxruntime-genai/tree/main/examples/chat_app).
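+
+ Below is a minimal generation sketch using the ONNX Runtime Generate() API. It is illustrative only: it assumes the `onnxruntime-genai-directml` package is installed, uses the model subfolder name from this repository, and applies the Phi-3 chat format from `tokenizer_config.json`; the exact API surface may differ across onnxruntime-genai releases.
+
+ ```python
+ import onnxruntime_genai as og
+
+ # Load the DirectML INT4 AWQ model from this repository's subfolder.
+ model = og.Model("directml-int4-awq-block-128")
+ tokenizer = og.Tokenizer(model)
+ stream = tokenizer.create_stream()
+
+ # Phi-3 chat format: <|user|> ... <|end|> followed by <|assistant|>.
+ prompt = "<|user|>\nWhat is DirectML?<|end|>\n<|assistant|>\n"
+
+ params = og.GeneratorParams(model)
+ params.set_search_options(max_length=2048)
+ params.input_ids = tokenizer.encode(prompt)
+
+ generator = og.Generator(model, params)
+ while not generator.is_done():
+     generator.compute_logits()
+     generator.generate_next_token()
+     # Decode and print tokens as they are produced.
+     print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
+ ```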
+
+ ## Hardware Supported
+
+ The model has been tested on:
+ - GPU SKU: RTX 4090 (DirectML)
+
+ Minimum configuration required:
+ - Windows: DirectX 12-capable GPU and a minimum of 10 GB of combined RAM
+
+ ### Model Description
+
+ - **Developed by:** Microsoft
+ - **Model type:** ONNX
+ - **Language(s) (NLP):** Python, C, C++
+ - **License:** MIT
+ - **Model Description:** This is a conversion of the Phi-3 Medium-128K-Instruct model for ONNX Runtime inference.
+
+ ## Additional Details
+ - [**Phi-3 Small, Medium, and Vision Blog**](https://aka.ms/phi3_ONNXBuild24) and [**Phi-3 Mini Blog**](https://aka.ms/phi3-optimizations)
+ - [**Phi-3 Model Blog Link**](https://aka.ms/phi3blog-april)
+ - [**Phi-3 Model Card**](https://aka.ms/phi3-medium-128K-instruct)
+ - [**Phi-3 Technical Report**](https://aka.ms/phi3-tech-report)
+ - [**Phi-3 on Azure AI Studio**](https://aka.ms/phi3-azure-ai)
+
+ ## Performance Metrics
+
+ ### DirectML
+ We measured the performance of DirectML and the ONNX Runtime Generate() API with Phi-3 Medium quantized with Activation-Aware Quantization ([AWQ](https://arxiv.org/abs/2306.00978)) and a block size of 128 on Windows. Our test machine had an NVIDIA GeForce RTX 4090 GPU and an Intel Core i9-13900K CPU. DirectML lets developers not only achieve great performance but also deploy models across the entire Windows ecosystem, with support from AMD, Intel, and NVIDIA. Best of all, AWQ means that developers get this scale while also maintaining high model accuracy. A minimal timing sketch follows the table below.
+
+ Stay tuned for additional performance improvements in the coming weeks thanks to optimized drivers from our hardware partners, along with additional updates to the ONNX Runtime Generate() API.
+
+ | Batch Size, Prompt Length | Block Size = 32 | Block Size = 128 |
+ |---------------------------|-----------------|------------------|
+ | 1, 16 | 66.60 | 72.26 |
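+
+ As a rough guide to reproducing numbers like these, the sketch below times decode throughput with the Generate() API, assuming (as in similar Phi-3 ONNX model cards) that the table reports tokens per second. It is a simplified illustration, not the harness used for the table: the prompt, generation length, and lack of warm-up runs are assumptions.
+
+ ```python
+ import time
+ import onnxruntime_genai as og
+
+ model = og.Model("directml-int4-awq-block-128")
+ tokenizer = og.Tokenizer(model)
+
+ params = og.GeneratorParams(model)
+ params.set_search_options(max_length=272)  # ~16 prompt tokens + 256 generated
+ params.input_ids = tokenizer.encode("<|user|>\nTell me a story.<|end|>\n<|assistant|>\n")
+
+ generator = og.Generator(model, params)
+ start, generated = time.perf_counter(), 0
+ while not generator.is_done():
+     generator.compute_logits()
+     generator.generate_next_token()
+     generated += 1
+ print(f"Throughput: {generated / (time.perf_counter() - start):.2f} tokens/s")
+ ```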
+
+ #### Package Versions
+
+ | Pip package name | Version |
+ |------------------|---------|
+ | torch | 2.2.0 |
+ | triton | 2.2.0 |
+ | onnxruntime-gpu | 1.18.0 |
+ | transformers | 4.39.0 |
+ | bitsandbytes | 0.42.0 |
+
+ ## Appendix
+
+ ### Activation-Aware Quantization
+ AWQ works by identifying the top 1% of weights that are most salient for maintaining accuracy, and quantizing the remaining 99% of weights. This leads to less accuracy loss from quantization compared to many other quantization techniques. For more on AWQ, see the [paper](https://arxiv.org/abs/2306.00978).
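+
+ For intuition, here is a toy NumPy sketch of that idea (illustrative only, not the actual AWQ implementation used for this model): it keeps the ~1% of input channels with the largest activation magnitude in full precision and fake-quantizes the rest to the int4 grid in blocks of 128, matching this repository's block size.
+
+ ```python
+ import numpy as np
+
+ def awq_style_quantize(w, act_mag, keep_frac=0.01, block=128):
+     """w: (out, in) weight matrix; act_mag: per-input-channel activation scale."""
+     n_keep = max(1, int(keep_frac * w.shape[1]))
+     salient = np.argsort(act_mag)[-n_keep:]   # most activation-salient channels
+     keep = np.zeros(w.shape[1], dtype=bool)
+     keep[salient] = True
+
+     q = w.copy()
+     cols = np.where(~keep)[0]                 # quantize the remaining ~99%
+     for i in range(0, len(cols), block):
+         idx = cols[i:i + block]
+         blk = w[:, idx]
+         # One scale per output row per block; int4 values lie in [-8, 7].
+         scale = np.maximum(np.abs(blk).max(axis=1, keepdims=True) / 7.0, 1e-12)
+         q[:, idx] = np.round(blk / scale).clip(-8, 7) * scale
+     return q
+
+ w = np.random.randn(8, 256).astype(np.float32)
+ act = np.abs(np.random.randn(256))
+ print("max abs error:", np.abs(w - awq_style_quantize(w, act)).max())
+ ```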
+
+ ## Model Card Contact
+ parinitarahi, kvaishnavi, natke
+
+ ## Contributors
+ Kunal Vaishnavi, Sunghoon Choi, Yufeng Li, Sheetal Arun Kadam, Natalie Kershaw, Parinita Rahi, Patrice Vignola, Xiang Zhang, Chai Chaoweeraprasit, Logan Iyer, Vicente Rivera, Jacques van Rhyn
config.json ADDED
@@ -0,0 +1,169 @@
+ {
+     "_name_or_path": "Phi-3-medium-128k-instruct",
+     "architectures": [
+         "Phi3ForCausalLM"
+     ],
+     "attention_dropout": 0.0,
+     "auto_map": {
+         "AutoConfig": "configuration_phi3.Phi3Config",
+         "AutoModelForCausalLM": "modeling_phi3.Phi3ForCausalLM"
+     },
+     "bos_token_id": 1,
+     "embd_pdrop": 0.0,
+     "eos_token_id": 32000,
+     "hidden_act": "silu",
+     "hidden_size": 5120,
+     "initializer_range": 0.02,
+     "intermediate_size": 17920,
+     "max_position_embeddings": 131072,
+     "model_type": "phi3",
+     "num_attention_heads": 40,
+     "num_hidden_layers": 40,
+     "num_key_value_heads": 10,
+     "original_max_position_embeddings": 4096,
+     "pad_token_id": null,
+     "resid_pdrop": 0.0,
+     "rms_norm_eps": 1e-05,
+     "rope_scaling": {
+         "long_factor": [
+             1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
+             1.25, 1.25, 1.5, 2.0, 2.75, 5.75, 5.75, 6.5, 9.25, 11.0, 13.25,
+             19.25, 19.75, 19.75, 21.25, 21.5, 26.5, 30.0, 33.75, 35.25, 38.5,
+             42.0, 42.25, 46.0, 47.0, 50.0, 50.5, 51.0, 52.0, 52.75, 53.75,
+             54.75, 57.0, 57.25, 58.5, 59.25, 59.5, 62.0, 62.5, 62.75, 63.25,
+             63.25, 63.25, 63.75, 64.0, 64.0, 64.25, 64.5, 64.5, 65.0, 65.0
+         ],
+         "short_factor": [
+             1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.01, 1.02, 1.02, 1.04, 1.04, 1.07, 1.07, 1.1,
+             1.3000000000000003, 1.3000000000000003, 1.5000000000000004, 1.5700000000000005,
+             1.9000000000000008, 2.3100000000000014, 2.759999999999992, 3.3899999999999784,
+             3.9399999999999666, 4.009999999999965, 4.289999999999959, 4.349999999999958,
+             5.349999999999937, 6.659999999999909, 7.029999999999901, 7.51999999999989,
+             8.00999999999988, 8.249999999999876, 8.279999999999875, 9.629999999999846,
+             9.89999999999984, 10.589999999999826, 11.049999999999816, 11.7899999999998,
+             12.189999999999792, 12.889999999999777, 13.129999999999772, 13.16999999999977,
+             13.20999999999977, 13.479999999999764, 13.539999999999763, 13.779999999999758,
+             13.929999999999755, 14.429999999999744, 14.759999999999737, 15.149999999999729,
+             15.419999999999723, 15.53999999999972, 15.659999999999718, 15.749999999999716,
+             15.759999999999716, 15.799999999999715, 16.05999999999971, 16.079999999999714,
+             16.11999999999972, 16.11999999999972, 16.18999999999973, 16.31999999999975,
+             16.539999999999786, 16.799999999999827
+         ],
+         "type": "su"
+     },
+     "rope_theta": 10000.0,
+     "sliding_window": 131072,
+     "tie_word_embeddings": false,
+     "torch_dtype": "bfloat16",
+     "transformers_version": "4.39.3",
+     "use_cache": true,
+     "vocab_size": 32064
+ }
configuration_phi3.py ADDED
@@ -0,0 +1,213 @@
+ # coding=utf-8
+ # Copyright 2024 Microsoft and the HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """ Phi-3 model configuration"""
+
+
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.utils import logging
+
+
+ logger = logging.get_logger(__name__)
+
+ PHI3_PRETRAINED_CONFIG_ARCHIVE_MAP = {
+     "microsoft/Phi-3-mini-4k-instruct": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/config.json",
+     "microsoft/Phi-3-mini-128k-instruct": "https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/config.json",
+ }
+
+
+ class Phi3Config(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`Phi3Model`]. It is used to instantiate a Phi-3
+     model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
+     defaults will yield a similar configuration to that of the
+     [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+     Args:
+         vocab_size (`int`, *optional*, defaults to 32064):
+             Vocabulary size of the Phi-3 model. Defines the number of different tokens that can be represented by the
+             `input_ids` passed when calling [`Phi3Model`].
+         hidden_size (`int`, *optional*, defaults to 3072):
+             Dimension of the hidden representations.
+         intermediate_size (`int`, *optional*, defaults to 8192):
+             Dimension of the MLP representations.
+         num_hidden_layers (`int`, *optional*, defaults to 32):
+             Number of hidden layers in the Transformer decoder.
+         num_attention_heads (`int`, *optional*, defaults to 32):
+             Number of attention heads for each attention layer in the Transformer decoder.
+         num_key_value_heads (`int`, *optional*):
+             This is the number of key_value heads that should be used to implement Grouped Query Attention. If
+             `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
+             `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
+             converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
+             by meanpooling all the original heads within that group. For more details, check out [this
+             paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
+             `num_attention_heads`.
+         resid_pdrop (`float`, *optional*, defaults to 0.0):
+             Dropout probability for MLP outputs.
+         embd_pdrop (`float`, *optional*, defaults to 0.0):
+             The dropout ratio for the embeddings.
+         attention_dropout (`float`, *optional*, defaults to 0.0):
+             The dropout ratio after computing the attention scores.
+         hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
+             The non-linear activation function (function or string) in the decoder.
+         max_position_embeddings (`int`, *optional*, defaults to 4096):
+             The maximum sequence length that this model might ever be used with.
+         original_max_position_embeddings (`int`, *optional*, defaults to 4096):
+             The maximum sequence length that this model was trained with. This is used to determine the size of the
+             original RoPE embeddings when using long scaling.
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         rms_norm_eps (`float`, *optional*, defaults to 1e-05):
+             The epsilon value used for the RMSNorm.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the model should return the last key/values attentions (not used by all models). Only
+             relevant if `config.is_decoder=True`.
+         tie_word_embeddings (`bool`, *optional*, defaults to `False`):
+             Whether to tie weight embeddings.
+         rope_theta (`float`, *optional*, defaults to 10000.0):
+             The base period of the RoPE embeddings.
+         rope_scaling (`dict`, *optional*):
+             The scaling strategy for the RoPE embeddings. If `None`, no scaling is applied. If a dictionary, it must
+             contain the following keys: `type`, `short_factor` and `long_factor`. The `type` must be either `su` or `yarn`,
+             and `short_factor` and `long_factor` must be lists of numbers with the same length as the hidden size
+             divided by the number of attention heads divided by 2.
+         bos_token_id (`int`, *optional*, defaults to 1):
+             The id of the "beginning-of-sequence" token.
+         eos_token_id (`int`, *optional*, defaults to 32000):
+             The id of the "end-of-sequence" token.
+         pad_token_id (`int`, *optional*, defaults to 32000):
+             The id of the padding token.
+         sliding_window (`int`, *optional*):
+             Sliding window attention window size. If `None`, no sliding window is applied.
+
+     Example:
+
+     ```python
+     >>> from transformers import Phi3Model, Phi3Config
+
+     >>> # Initializing a Phi-3 style configuration
+     >>> configuration = Phi3Config.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
+
+     >>> # Initializing a model from the configuration
+     >>> model = Phi3Model(configuration)
+
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+     ```"""
+
+     model_type = "phi3"
+     keys_to_ignore_at_inference = ["past_key_values"]
+
+     def __init__(
+         self,
+         vocab_size=32064,
+         hidden_size=3072,
+         intermediate_size=8192,
+         num_hidden_layers=32,
+         num_attention_heads=32,
+         num_key_value_heads=None,
+         resid_pdrop=0.0,
+         embd_pdrop=0.0,
+         attention_dropout=0.0,
+         hidden_act="silu",
+         max_position_embeddings=4096,
+         original_max_position_embeddings=4096,
+         initializer_range=0.02,
+         rms_norm_eps=1e-5,
+         use_cache=True,
+         tie_word_embeddings=False,
+         rope_theta=10000.0,
+         rope_scaling=None,
+         bos_token_id=1,
+         eos_token_id=32000,
+         pad_token_id=32000,
+         sliding_window=None,
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_attention_heads = num_attention_heads
+
+         if num_key_value_heads is None:
+             num_key_value_heads = num_attention_heads
+
+         self.num_key_value_heads = num_key_value_heads
+         self.resid_pdrop = resid_pdrop
+         self.embd_pdrop = embd_pdrop
+         self.attention_dropout = attention_dropout
+         self.hidden_act = hidden_act
+         self.max_position_embeddings = max_position_embeddings
+         self.original_max_position_embeddings = original_max_position_embeddings
+         self.initializer_range = initializer_range
+         self.rms_norm_eps = rms_norm_eps
+         self.use_cache = use_cache
+         self.rope_theta = rope_theta
+         self.rope_scaling = rope_scaling
+         self._rope_scaling_validation()
+         self.sliding_window = sliding_window
+
+         super().__init__(
+             bos_token_id=bos_token_id,
+             eos_token_id=eos_token_id,
+             pad_token_id=pad_token_id,
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs,
+         )
+
+     def _rope_scaling_validation(self):
+         """
+         Validate the `rope_scaling` configuration.
+         """
+         if self.rope_scaling is None:
+             return
+
+         if not isinstance(self.rope_scaling, dict) or len(self.rope_scaling) != 3:
+             raise ValueError(
+                 "`rope_scaling` must be a dictionary with three fields, `type`, `short_factor` and `long_factor`, "
+                 f"got {self.rope_scaling}"
+             )
+         rope_scaling_type = self.rope_scaling.get("type", None)
+         rope_scaling_short_factor = self.rope_scaling.get("short_factor", None)
+         rope_scaling_long_factor = self.rope_scaling.get("long_factor", None)
+         if rope_scaling_type is None or rope_scaling_type not in ["su", "yarn"]:
+             raise ValueError(f"`rope_scaling`'s type field must be one of ['su', 'yarn'], got {rope_scaling_type}")
+         if not (
+             isinstance(rope_scaling_short_factor, list)
+             and all(isinstance(x, (int, float)) for x in rope_scaling_short_factor)
+         ):
+             raise ValueError(
+                 f"`rope_scaling`'s short_factor field must be a list of numbers, got {rope_scaling_short_factor}"
+             )
+         if not len(rope_scaling_short_factor) == self.hidden_size // self.num_attention_heads // 2:
+             raise ValueError(
+                 f"`rope_scaling`'s short_factor field must have length {self.hidden_size // self.num_attention_heads // 2}, got {len(rope_scaling_short_factor)}"
+             )
+         if not (
+             isinstance(rope_scaling_long_factor, list)
+             and all(isinstance(x, (int, float)) for x in rope_scaling_long_factor)
+         ):
+             raise ValueError(
+                 f"`rope_scaling`'s long_factor field must be a list of numbers, got {rope_scaling_long_factor}"
+             )
+         if not len(rope_scaling_long_factor) == self.hidden_size // self.num_attention_heads // 2:
+             raise ValueError(
+                 f"`rope_scaling`'s long_factor field must have length {self.hidden_size // self.num_attention_heads // 2}, got {len(rope_scaling_long_factor)}"
+             )
directml-int4-awq-block-128/added_tokens.json ADDED
@@ -0,0 +1,13 @@
+ {
+     "<|assistant|>": 32001,
+     "<|endoftext|>": 32000,
+     "<|end|>": 32007,
+     "<|placeholder1|>": 32002,
+     "<|placeholder2|>": 32003,
+     "<|placeholder3|>": 32004,
+     "<|placeholder4|>": 32005,
+     "<|placeholder5|>": 32008,
+     "<|placeholder6|>": 32009,
+     "<|system|>": 32006,
+     "<|user|>": 32010
+ }
directml-int4-awq-block-128/config.json ADDED
@@ -0,0 +1,169 @@
+ {
+     "_name_or_path": "Phi-3-medium-128k-instruct",
+     "architectures": [
+         "Phi3ForCausalLM"
+     ],
+     "attention_dropout": 0.0,
+     "auto_map": {
+         "AutoConfig": "configuration_phi3.Phi3Config",
+         "AutoModelForCausalLM": "modeling_phi3.Phi3ForCausalLM"
+     },
+     "bos_token_id": 1,
+     "embd_pdrop": 0.0,
+     "eos_token_id": 32000,
+     "hidden_act": "silu",
+     "hidden_size": 5120,
+     "initializer_range": 0.02,
+     "intermediate_size": 17920,
+     "max_position_embeddings": 131072,
+     "model_type": "phi3",
+     "num_attention_heads": 40,
+     "num_hidden_layers": 40,
+     "num_key_value_heads": 10,
+     "original_max_position_embeddings": 4096,
+     "pad_token_id": null,
+     "resid_pdrop": 0.0,
+     "rms_norm_eps": 1e-05,
+     "rope_scaling": {
+         "long_factor": [
+             1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
+             1.25, 1.25, 1.5, 2.0, 2.75, 5.75, 5.75, 6.5, 9.25, 11.0, 13.25,
+             19.25, 19.75, 19.75, 21.25, 21.5, 26.5, 30.0, 33.75, 35.25, 38.5,
+             42.0, 42.25, 46.0, 47.0, 50.0, 50.5, 51.0, 52.0, 52.75, 53.75,
+             54.75, 57.0, 57.25, 58.5, 59.25, 59.5, 62.0, 62.5, 62.75, 63.25,
+             63.25, 63.25, 63.75, 64.0, 64.0, 64.25, 64.5, 64.5, 65.0, 65.0
+         ],
+         "short_factor": [
+             1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.01, 1.02, 1.02, 1.04, 1.04, 1.07, 1.07, 1.1,
+             1.3000000000000003, 1.3000000000000003, 1.5000000000000004, 1.5700000000000005,
+             1.9000000000000008, 2.3100000000000014, 2.759999999999992, 3.3899999999999784,
+             3.9399999999999666, 4.009999999999965, 4.289999999999959, 4.349999999999958,
+             5.349999999999937, 6.659999999999909, 7.029999999999901, 7.51999999999989,
+             8.00999999999988, 8.249999999999876, 8.279999999999875, 9.629999999999846,
+             9.89999999999984, 10.589999999999826, 11.049999999999816, 11.7899999999998,
+             12.189999999999792, 12.889999999999777, 13.129999999999772, 13.16999999999977,
+             13.20999999999977, 13.479999999999764, 13.539999999999763, 13.779999999999758,
+             13.929999999999755, 14.429999999999744, 14.759999999999737, 15.149999999999729,
+             15.419999999999723, 15.53999999999972, 15.659999999999718, 15.749999999999716,
+             15.759999999999716, 15.799999999999715, 16.05999999999971, 16.079999999999714,
+             16.11999999999972, 16.11999999999972, 16.18999999999973, 16.31999999999975,
+             16.539999999999786, 16.799999999999827
+         ],
+         "type": "su"
+     },
+     "rope_theta": 10000.0,
+     "sliding_window": 131072,
+     "tie_word_embeddings": false,
+     "torch_dtype": "bfloat16",
+     "transformers_version": "4.39.3",
+     "use_cache": true,
+     "vocab_size": 32064
+ }
directml-int4-awq-block-128/configuration_phi3.py ADDED
@@ -0,0 +1,213 @@
+ # coding=utf-8
+ # Copyright 2024 Microsoft and the HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """ Phi-3 model configuration"""
+
+
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.utils import logging
+
+
+ logger = logging.get_logger(__name__)
+
+ PHI3_PRETRAINED_CONFIG_ARCHIVE_MAP = {
+     "microsoft/Phi-3-mini-4k-instruct": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/config.json",
+     "microsoft/Phi-3-mini-128k-instruct": "https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/config.json",
+ }
+
+
+ class Phi3Config(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`Phi3Model`]. It is used to instantiate a Phi-3
+     model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
+     defaults will yield a similar configuration to that of the
+     [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+     Args:
+         vocab_size (`int`, *optional*, defaults to 32064):
+             Vocabulary size of the Phi-3 model. Defines the number of different tokens that can be represented by the
+             `input_ids` passed when calling [`Phi3Model`].
+         hidden_size (`int`, *optional*, defaults to 3072):
+             Dimension of the hidden representations.
+         intermediate_size (`int`, *optional*, defaults to 8192):
+             Dimension of the MLP representations.
+         num_hidden_layers (`int`, *optional*, defaults to 32):
+             Number of hidden layers in the Transformer decoder.
+         num_attention_heads (`int`, *optional*, defaults to 32):
+             Number of attention heads for each attention layer in the Transformer decoder.
+         num_key_value_heads (`int`, *optional*):
+             This is the number of key_value heads that should be used to implement Grouped Query Attention. If
+             `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
+             `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
+             converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
+             by meanpooling all the original heads within that group. For more details, check out [this
+             paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
+             `num_attention_heads`.
+         resid_pdrop (`float`, *optional*, defaults to 0.0):
+             Dropout probability for MLP outputs.
+         embd_pdrop (`float`, *optional*, defaults to 0.0):
+             The dropout ratio for the embeddings.
+         attention_dropout (`float`, *optional*, defaults to 0.0):
+             The dropout ratio after computing the attention scores.
+         hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
+             The non-linear activation function (function or string) in the decoder.
+         max_position_embeddings (`int`, *optional*, defaults to 4096):
+             The maximum sequence length that this model might ever be used with.
+         original_max_position_embeddings (`int`, *optional*, defaults to 4096):
+             The maximum sequence length that this model was trained with. This is used to determine the size of the
+             original RoPE embeddings when using long scaling.
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         rms_norm_eps (`float`, *optional*, defaults to 1e-05):
+             The epsilon value used for the RMSNorm.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the model should return the last key/values attentions (not used by all models). Only
+             relevant if `config.is_decoder=True`.
+         tie_word_embeddings (`bool`, *optional*, defaults to `False`):
+             Whether to tie weight embeddings.
+         rope_theta (`float`, *optional*, defaults to 10000.0):
+             The base period of the RoPE embeddings.
+         rope_scaling (`dict`, *optional*):
+             The scaling strategy for the RoPE embeddings. If `None`, no scaling is applied. If a dictionary, it must
+             contain the following keys: `type`, `short_factor` and `long_factor`. The `type` must be either `su` or `yarn`,
+             and `short_factor` and `long_factor` must be lists of numbers with the same length as the hidden size
+             divided by the number of attention heads divided by 2.
+         bos_token_id (`int`, *optional*, defaults to 1):
+             The id of the "beginning-of-sequence" token.
+         eos_token_id (`int`, *optional*, defaults to 32000):
+             The id of the "end-of-sequence" token.
+         pad_token_id (`int`, *optional*, defaults to 32000):
+             The id of the padding token.
+         sliding_window (`int`, *optional*):
+             Sliding window attention window size. If `None`, no sliding window is applied.
+
+     Example:
+
+     ```python
+     >>> from transformers import Phi3Model, Phi3Config
+
+     >>> # Initializing a Phi-3 style configuration
+     >>> configuration = Phi3Config.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
+
+     >>> # Initializing a model from the configuration
+     >>> model = Phi3Model(configuration)
+
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+     ```"""
+
+     model_type = "phi3"
+     keys_to_ignore_at_inference = ["past_key_values"]
+
+     def __init__(
+         self,
+         vocab_size=32064,
+         hidden_size=3072,
+         intermediate_size=8192,
+         num_hidden_layers=32,
+         num_attention_heads=32,
+         num_key_value_heads=None,
+         resid_pdrop=0.0,
+         embd_pdrop=0.0,
+         attention_dropout=0.0,
+         hidden_act="silu",
+         max_position_embeddings=4096,
+         original_max_position_embeddings=4096,
+         initializer_range=0.02,
+         rms_norm_eps=1e-5,
+         use_cache=True,
+         tie_word_embeddings=False,
+         rope_theta=10000.0,
+         rope_scaling=None,
+         bos_token_id=1,
+         eos_token_id=32000,
+         pad_token_id=32000,
+         sliding_window=None,
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_attention_heads = num_attention_heads
+
+         if num_key_value_heads is None:
+             num_key_value_heads = num_attention_heads
+
+         self.num_key_value_heads = num_key_value_heads
+         self.resid_pdrop = resid_pdrop
+         self.embd_pdrop = embd_pdrop
+         self.attention_dropout = attention_dropout
+         self.hidden_act = hidden_act
+         self.max_position_embeddings = max_position_embeddings
+         self.original_max_position_embeddings = original_max_position_embeddings
+         self.initializer_range = initializer_range
+         self.rms_norm_eps = rms_norm_eps
+         self.use_cache = use_cache
+         self.rope_theta = rope_theta
+         self.rope_scaling = rope_scaling
+         self._rope_scaling_validation()
+         self.sliding_window = sliding_window
+
+         super().__init__(
+             bos_token_id=bos_token_id,
+             eos_token_id=eos_token_id,
+             pad_token_id=pad_token_id,
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs,
+         )
+
+     def _rope_scaling_validation(self):
+         """
+         Validate the `rope_scaling` configuration.
+         """
+         if self.rope_scaling is None:
+             return
+
+         if not isinstance(self.rope_scaling, dict) or len(self.rope_scaling) != 3:
+             raise ValueError(
+                 "`rope_scaling` must be a dictionary with three fields, `type`, `short_factor` and `long_factor`, "
+                 f"got {self.rope_scaling}"
+             )
+         rope_scaling_type = self.rope_scaling.get("type", None)
+         rope_scaling_short_factor = self.rope_scaling.get("short_factor", None)
+         rope_scaling_long_factor = self.rope_scaling.get("long_factor", None)
+         if rope_scaling_type is None or rope_scaling_type not in ["su", "yarn"]:
+             raise ValueError(f"`rope_scaling`'s type field must be one of ['su', 'yarn'], got {rope_scaling_type}")
+         if not (
+             isinstance(rope_scaling_short_factor, list)
+             and all(isinstance(x, (int, float)) for x in rope_scaling_short_factor)
+         ):
+             raise ValueError(
+                 f"`rope_scaling`'s short_factor field must be a list of numbers, got {rope_scaling_short_factor}"
+             )
+         if not len(rope_scaling_short_factor) == self.hidden_size // self.num_attention_heads // 2:
+             raise ValueError(
+                 f"`rope_scaling`'s short_factor field must have length {self.hidden_size // self.num_attention_heads // 2}, got {len(rope_scaling_short_factor)}"
+             )
+         if not (
+             isinstance(rope_scaling_long_factor, list)
+             and all(isinstance(x, (int, float)) for x in rope_scaling_long_factor)
+         ):
+             raise ValueError(
+                 f"`rope_scaling`'s long_factor field must be a list of numbers, got {rope_scaling_long_factor}"
+             )
+         if not len(rope_scaling_long_factor) == self.hidden_size // self.num_attention_heads // 2:
+             raise ValueError(
+                 f"`rope_scaling`'s long_factor field must have length {self.hidden_size // self.num_attention_heads // 2}, got {len(rope_scaling_long_factor)}"
+             )
directml-int4-awq-block-128/genai_config.json ADDED
@@ -0,0 +1,58 @@
+ {
+     "model": {
+         "bos_token_id": 1,
+         "context_length": 131072,
+         "decoder": {
+             "session_options": {
+                 "log_id": "onnxruntime-genai",
+                 "provider_options": [
+                     {
+                         "dml": {}
+                     }
+                 ]
+             },
+             "filename": "model.onnx",
+             "head_size": 128,
+             "hidden_size": 5120,
+             "inputs": {
+                 "input_ids": "input_ids",
+                 "attention_mask": "attention_mask",
+                 "position_ids": "position_ids",
+                 "past_key_names": "past_key_values.%d.key",
+                 "past_value_names": "past_key_values.%d.value"
+             },
+             "outputs": {
+                 "logits": "logits",
+                 "present_key_names": "present.%d.key",
+                 "present_value_names": "present.%d.value"
+             },
+             "num_attention_heads": 40,
+             "num_hidden_layers": 40,
+             "num_key_value_heads": 10
+         },
+         "eos_token_id": [
+             32000,
+             32001,
+             32007
+         ],
+         "pad_token_id": 32000,
+         "type": "phi3",
+         "vocab_size": 32064
+     },
+     "search": {
+         "diversity_penalty": 0.0,
+         "do_sample": false,
+         "early_stopping": true,
+         "length_penalty": 1.0,
+         "max_length": 131072,
+         "min_length": 0,
+         "no_repeat_ngram_size": 0,
+         "num_beams": 1,
+         "num_return_sequences": 1,
+         "past_present_share_buffer": true,
+         "repetition_penalty": 1.0,
+         "temperature": 1.0,
+         "top_k": 1,
+         "top_p": 1.0
+     }
+ }
directml-int4-awq-block-128/model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:daa6891ea212f8ccd9adbc0e00c8cab59a7be841cd6b13e1fa21ba97d86a6d4c
+ size 44520402
directml-int4-awq-block-128/model.onnx.data ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8b1f7da3ad6d4f56f51ddc54ab71b8e97f8e42bc8c78e0210f0b18303fdd9280
+ size 7496439040
directml-int4-awq-block-128/special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
+ {
+     "bos_token": {
+         "content": "<s>",
+         "lstrip": false,
+         "normalized": false,
+         "rstrip": false,
+         "single_word": false
+     },
+     "eos_token": {
+         "content": "<|endoftext|>",
+         "lstrip": false,
+         "normalized": false,
+         "rstrip": false,
+         "single_word": false
+     },
+     "pad_token": {
+         "content": "<|endoftext|>",
+         "lstrip": false,
+         "normalized": false,
+         "rstrip": false,
+         "single_word": false
+     },
+     "unk_token": {
+         "content": "<unk>",
+         "lstrip": false,
+         "normalized": false,
+         "rstrip": false,
+         "single_word": false
+     }
+ }
directml-int4-awq-block-128/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
directml-int4-awq-block-128/tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
+ size 499723
directml-int4-awq-block-128/tokenizer_config.json ADDED
@@ -0,0 +1,130 @@
+ {
+     "add_bos_token": false,
+     "add_eos_token": false,
+     "added_tokens_decoder": {
+         "0": {
+             "content": "<unk>",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": false,
+             "single_word": false,
+             "special": true
+         },
+         "1": {
+             "content": "<s>",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": false,
+             "single_word": false,
+             "special": true
+         },
+         "2": {
+             "content": "</s>",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": true,
+             "single_word": false,
+             "special": false
+         },
+         "32000": {
+             "content": "<|endoftext|>",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": false,
+             "single_word": false,
+             "special": true
+         },
+         "32001": {
+             "content": "<|assistant|>",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": true,
+             "single_word": false,
+             "special": true
+         },
+         "32002": {
+             "content": "<|placeholder1|>",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": true,
+             "single_word": false,
+             "special": true
+         },
+         "32003": {
+             "content": "<|placeholder2|>",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": true,
+             "single_word": false,
+             "special": true
+         },
+         "32004": {
+             "content": "<|placeholder3|>",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": true,
+             "single_word": false,
+             "special": true
+         },
+         "32005": {
+             "content": "<|placeholder4|>",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": true,
+             "single_word": false,
+             "special": true
+         },
+         "32006": {
+             "content": "<|system|>",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": true,
+             "single_word": false,
+             "special": true
+         },
+         "32007": {
+             "content": "<|end|>",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": true,
+             "single_word": false,
+             "special": true
+         },
+         "32008": {
+             "content": "<|placeholder5|>",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": true,
+             "single_word": false,
+             "special": true
+         },
+         "32009": {
+             "content": "<|placeholder6|>",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": true,
+             "single_word": false,
+             "special": true
+         },
+         "32010": {
+             "content": "<|user|>",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": true,
+             "single_word": false,
+             "special": true
+         }
+     },
+     "bos_token": "<s>",
+     "chat_template": "{% for message in messages %}{% if (message['role'] == 'user') %}{{'<|user|>' + '\n' + message['content'] + '<|end|>' + '\n' + '<|assistant|>' + '\n'}}{% elif (message['role'] == 'assistant') %}{{message['content'] + '<|end|>' + '\n'}}{% endif %}{% endfor %}",
+     "clean_up_tokenization_spaces": false,
+     "eos_token": "<|endoftext|>",
+     "legacy": false,
+     "model_max_length": 131072,
+     "pad_token": "<|endoftext|>",
+     "padding_side": "left",
+     "sp_model_kwargs": {},
+     "tokenizer_class": "LlamaTokenizer",
+     "unk_token": "<unk>",
+     "use_default_system_prompt": false
+ }