ancient-ancient committed
Commit • ccd0472
1 Parent(s): 549d4b9
initial commit
Browse files
- README.md +97 -0
- added_tokens.json +3 -0
- config.json +26 -0
- generation_config.json +9 -0
- pytorch_model.bin +3 -0
- special_tokens_map.json +6 -0
- tokenizer.model +3 -0
- tokenizer_config.json +35 -0
- zero_to_fp32.py +578 -0
README.md
ADDED
@@ -0,0 +1,97 @@
---
license: llama2
---
This is the **full-weight** release of the WizardLM-13B V1.2 model, which is trained from **Llama-2 13b**.

## WizardLM: Empowering Large Pre-Trained Language Models to Follow Complex Instructions

<p align="center">
🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> • 🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>

## News

- 🔥🔥🔥 [2023/08/26] We released **WizardCoder-Python-34B-V1.0**, which achieves **73.2 pass@1** and surpasses **GPT4 (2023/03/15)**, **ChatGPT-3.5**, and **Claude2** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). For more details, please refer to [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder).
- [2023/06/16] We released **WizardCoder-15B-V1.0**, which surpasses **Claude-Plus (+6.8)**, **Bard (+15.3)** and **InstructCodeT5+ (+22.3)** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). For more details, please refer to [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder).

| Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License |
| ----- | ------ | ---- | ------ | ------- | ----- | ----- |
| WizardCoder-Python-34B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 61.2 | [Demo](http://47.103.63.15:50085/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-15B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 | 50.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-Python-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | 55.6 | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-Python-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 55.5 | 51.6 | [Demo](http://47.103.63.15:50088/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-3B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 | 37.4 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-1B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 | 28.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |

- 🔥 [08/11/2023] We released the **WizardMath** models.
- 🔥 Our **WizardMath-70B-V1.0** model slightly outperforms some closed-source LLMs on GSM8K, including **ChatGPT 3.5**, **Claude Instant 1** and **PaLM 2 540B**.
- 🔥 Our **WizardMath-70B-V1.0** model achieves **81.6 pass@1** on the [GSM8k Benchmarks](https://github.com/openai/grade-school-math), which is **24.8** points higher than the SOTA open-source LLM.
- 🔥 Our **WizardMath-70B-V1.0** model achieves **22.7 pass@1** on the [MATH Benchmarks](https://github.com/hendrycks/math), which is **9.2** points higher than the SOTA open-source LLM.

| Model | Checkpoint | Paper | GSM8k | MATH | Online Demo | License |
| ----- | ------ | ---- | ------ | ------- | ----- | ----- |
| WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> | **81.6** | **22.7** | [Demo](http://47.103.63.15:50083/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2</a> |
| WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> | **63.9** | **14.0** | [Demo](http://47.103.63.15:50082/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2</a> |
| WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> | **54.9** | **10.7** | [Demo](http://47.103.63.15:50080/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2</a> |

<font size=4>

| <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> | <sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>WizardEval</sup> | <sup>HumanEval</sup> | <sup>License</sup> |
| ----- | ------ | ---- | ------ | ------- | ----- | ----- | ----- |
| <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a></sup> | | <sup>7.06</sup> | <sup>89.17%</sup> | <sup>101.4%</sup> | <sup>36.6 pass@1</sup> | <sup><a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a></sup> |
| <sup>WizardLM-13B-V1.1</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a></sup> | | <sup>6.76</sup> | <sup>86.32%</sup> | <sup>99.3%</sup> | <sup>25.0 pass@1</sup> | <sup>Non-commercial</sup> |
| <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | <sup>97.8%</sup> | <sup>37.8 pass@1</sup> | <sup>Non-commercial</sup> |
| <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a></sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | <sup>89.1%</sup> | <sup>24.0 pass@1</sup> | <sup>Non-commercial</sup> |
| <sup>WizardLM-7B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a></sup> | <sup>📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a></sup> | | | <sup>78.0%</sup> | <sup>19.1 pass@1</sup> | <sup>Non-commercial</sup> |

</font>

**Repository**: https://github.com/nlpxucan/WizardLM

**Twitter**:

- 🔥🔥🔥 [7/25/2023] We released the **WizardLM V1.2** models. **WizardLM-13B-V1.2** is here ([Demo_13B-V1.2](https://b7a19878988c8c73.gradio.app), [Demo_13B-V1.2_bak-1](https://d0a37a76e0ac4b52.gradio.app/), [Full Model Weight](https://huggingface.co/WizardLM/WizardLM-13B-V1.2)). Please check out the [paper](https://arxiv.org/abs/2304.12244).
- 🔥🔥🔥 [7/25/2023] **WizardLM-13B-V1.2** achieves **7.06** on the [MT-Bench Leaderboard](https://chat.lmsys.org/?leaderboard), **89.17%** on the [AlpacaEval Leaderboard](https://tatsu-lab.github.io/alpaca_eval/), and **101.4%** on the [WizardLM Eval](https://github.com/nlpxucan/WizardLM/blob/main/WizardLM/data/WizardLM_testset.jsonl). (Note: the MT-Bench and AlpacaEval scores are self-reported; we will push updates and request official review. All tests were completed under the official settings.)

❗<b>Note on model system prompt usage:</b>

<b>WizardLM</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be formatted as follows:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
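
As a concrete illustration of this format, the sketch below assembles the multi-turn prompt and samples a reply with the Hugging Face `transformers` library. It is a minimal example under stated assumptions (the repo id `WizardLM/WizardLM-13B-V1.2`, enough memory, `accelerate` installed for `device_map`), not the project's official demo script, which is linked in the next section.

```python
# Minimal sketch (not the official demo): build the Vicuna-style multi-turn
# prompt shown above and generate one assistant reply.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WizardLM/WizardLM-13B-V1.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

SYSTEM = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions. ")

def build_prompt(turns):
    # Illustrative helper. turns: list of (user, assistant) pairs; the last
    # assistant entry may be None to ask the model for the next reply.
    prompt = SYSTEM
    for user, assistant in turns:
        prompt += f"USER: {user} ASSISTANT:"
        if assistant is not None:
            prompt += f" {assistant}</s>"
    return prompt

prompt = build_prompt([("Hi", "Hello."), ("Who are you?", None)])
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```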

## Inference WizardLM Demo Script

We provide the inference WizardLM demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo).

Please cite the paper if you use the data or code from WizardLM.

```
@article{xu2023wizardlm,
  title={Wizardlm: Empowering large language models to follow complex instructions},
  author={Xu, Can and Sun, Qingfeng and Zheng, Kai and Geng, Xiubo and Zhao, Pu and Feng, Jiazhan and Tao, Chongyang and Jiang, Daxin},
  journal={arXiv preprint arXiv:2304.12244},
  year={2023}
}
```

❗<b>Regarding common concerns about the dataset:</b>

Recently, there have been clear changes in the open-source policies and regulations governing our organization's code, data, and models.

Despite this, we have worked hard to obtain approval to open the model weights first; the data involves stricter auditing and is still under review by our legal team.

Our researchers have no authority to release it publicly without authorization.

Thank you for your understanding.
added_tokens.json
ADDED
@@ -0,0 +1,3 @@
{
  "<pad>": 32000
}
config.json
ADDED
@@ -0,0 +1,26 @@
{
  "_name_or_path": "//workspaceblobstore/caxu/llama_new/Llama-2-13b-chat-hf",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 5120,
  "initializer_range": 0.02,
  "intermediate_size": 13824,
  "max_position_embeddings": 4096,
  "model_type": "llama",
  "num_attention_heads": 40,
  "num_hidden_layers": 40,
  "num_key_value_heads": 40,
  "pad_token_id": 0,
  "pretraining_tp": 2,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.29.2",
  "use_cache": false,
  "vocab_size": 32000
}
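
As a sanity check on this config, a back-of-the-envelope parameter count for a standard Llama-architecture model with these dimensions (an illustrative calculation, not part of the repository files) lands at roughly 13B parameters and matches the fp16 checkpoint size recorded in pytorch_model.bin below.

```python
# Rough Llama parameter count from config.json (illustrative arithmetic only).
vocab, h, inter, layers = 32000, 5120, 13824, 40

embed = vocab * h                       # input embeddings
lm_head = vocab * h                     # output projection (tie_word_embeddings is false)
attn = 4 * h * h                        # q/k/v/o projections per layer
mlp = 3 * h * inter                     # gate/up/down projections per layer
norms = 2 * h                           # two RMSNorm weights per layer

total = embed + lm_head + layers * (attn + mlp + norms) + h  # + final norm
print(f"{total / 1e9:.2f}B params")       # ~13.02B
print(f"{total * 2 / 1e9:.2f} GB fp16")   # ~26.0 GB, matching pytorch_model.bin below
```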
generation_config.json
ADDED
@@ -0,0 +1,9 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "pad_token_id": 0,
  "temperature": 0.9,
  "top_p": 0.6,
  "transformers_version": "4.29.2"
}
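
The sampling defaults above (temperature 0.9, top_p 0.6) only take effect when sampling is enabled. A small sketch, assuming the repo id used earlier and the `transformers` `GenerationConfig` API:

```python
# Sketch: read the repo's sampling defaults and override them per call.
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("WizardLM/WizardLM-13B-V1.2")
print(gen_cfg.temperature, gen_cfg.top_p)  # 0.9 0.6, per generation_config.json

# These defaults apply only when do_sample=True; they can be overridden per call:
# model.generate(**inputs, do_sample=True, temperature=0.7, top_p=0.95)
```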
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:62ecfc211d12b4e18ec63430f799ecd284f1aababb1603a403332dc8aa90ee47
size 26031865519
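
Note that the diff stores a Git LFS pointer, not the ~26 GB weight file itself. Once the real file is downloaded, it can be verified against the recorded oid; a minimal sketch (the local path is a placeholder):

```python
# Verify a downloaded pytorch_model.bin against the LFS pointer above.
import hashlib

EXPECTED = "62ecfc211d12b4e18ec63430f799ecd284f1aababb1603a403332dc8aa90ee47"

h = hashlib.sha256()
with open("pytorch_model.bin", "rb") as f:            # placeholder local path
    for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
        h.update(chunk)
assert h.hexdigest() == EXPECTED, "checksum mismatch"
print("ok")
```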
special_tokens_map.json
ADDED
@@ -0,0 +1,6 @@
{
  "bos_token": "</s>",
  "eos_token": "</s>",
  "pad_token": "<unk>",
  "unk_token": "</s>"
}
tokenizer.model
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
size 499723
tokenizer_config.json
ADDED
@@ -0,0 +1,35 @@
{
  "add_bos_token": true,
  "add_eos_token": false,
  "bos_token": {
    "__type": "AddedToken",
    "content": "<s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "clean_up_tokenization_spaces": false,
  "eos_token": {
    "__type": "AddedToken",
    "content": "</s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "legacy": false,
  "model_max_length": 2048,
  "pad_token": null,
  "padding_side": "right",
  "sp_model_kwargs": {},
  "tokenizer_class": "LlamaTokenizer",
  "unk_token": {
    "__type": "AddedToken",
    "content": "<unk>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}
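
As a quick cross-check of the tokenizer files, the sketch below (illustrative; the repo id is assumed) loads the tokenizer and inspects the special tokens. Note that tokenizer_config.json and special_tokens_map.json above disagree on `bos_token` and `unk_token`, so the effective values depend on the `transformers` version's precedence rules.

```python
# Sketch: load the tokenizer and inspect how the files above fit together.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("WizardLM/WizardLM-13B-V1.2")
print(tok.convert_tokens_to_ids("<pad>"))  # 32000, per added_tokens.json
# tokenizer_config.json (bos "<s>", unk "<unk>") and special_tokens_map.json
# (bos "</s>", unk "</s>") disagree -- inspect rather than assume:
print(tok.bos_token, tok.eos_token, tok.unk_token, tok.pad_token)
```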
zero_to_fp32.py
ADDED
@@ -0,0 +1,578 @@
#!/usr/bin/env python

# Copyright (c) Microsoft Corporation.
# SPDX-License-Identifier: Apache-2.0

# DeepSpeed Team

# This script extracts fp32 consolidated weights from zero 2 and 3 DeepSpeed checkpoints. It gets
# copied into the top level checkpoint dir, so the user can easily do the conversion at any point in
# the future. Once extracted, the weights don't require DeepSpeed and can be used in any
# application.
#
# example: python zero_to_fp32.py . pytorch_model.bin

import argparse
import torch
import glob
import math
import os
import re
from collections import OrderedDict
from dataclasses import dataclass

# while this script doesn't use deepspeed to recover data, since the checkpoints are pickled with
# DeepSpeed data structures it has to be available in the current python environment.
from deepspeed.utils import logger
from deepspeed.checkpoint.constants import (DS_VERSION, OPTIMIZER_STATE_DICT, SINGLE_PARTITION_OF_FP32_GROUPS,
                                            FP32_FLAT_GROUPS, ZERO_STAGE, PARTITION_COUNT, PARAM_SHAPES, BUFFER_NAMES,
                                            FROZEN_PARAM_SHAPES, FROZEN_PARAM_FRAGMENTS)


@dataclass
class zero_model_state:
    buffers: dict()
    param_shapes: dict()
    shared_params: list
    ds_version: int
    frozen_param_shapes: dict()
    frozen_param_fragments: dict()


debug = 0

# load to cpu
device = torch.device('cpu')


def atoi(text):
    return int(text) if text.isdigit() else text


def natural_keys(text):
    '''
    alist.sort(key=natural_keys) sorts in human order
    http://nedbatchelder.com/blog/200712/human_sorting.html
    (See Toothy's implementation in the comments)
    '''
    return [atoi(c) for c in re.split(r'(\d+)', text)]
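
# Worked example of the natural sort (illustrative): sorted(["rank_10", "rank_2"],
# key=natural_keys) returns ["rank_2", "rank_10"], whereas a plain lexicographic
# sort would put "rank_10" first because "1" < "2".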


def get_model_state_file(checkpoint_dir, zero_stage):
    if not os.path.isdir(checkpoint_dir):
        raise FileNotFoundError(f"Directory '{checkpoint_dir}' doesn't exist")

    # there should be only one file
    if zero_stage == 2:
        file = os.path.join(checkpoint_dir, "mp_rank_00_model_states.pt")
    elif zero_stage == 3:
        file = os.path.join(checkpoint_dir, "zero_pp_rank_0_mp_rank_00_model_states.pt")

    if not os.path.exists(file):
        raise FileNotFoundError(f"can't find model states file at '{file}'")

    return file


def get_checkpoint_files(checkpoint_dir, glob_pattern):
    # XXX: need to test that this simple glob rule works for multi-node setup too
    ckpt_files = sorted(glob.glob(os.path.join(checkpoint_dir, glob_pattern)), key=natural_keys)

    if len(ckpt_files) == 0:
        raise FileNotFoundError(f"can't find {glob_pattern} files in directory '{checkpoint_dir}'")

    return ckpt_files


def get_optim_files(checkpoint_dir):
    return get_checkpoint_files(checkpoint_dir, "*_optim_states.pt")


def get_model_state_files(checkpoint_dir):
    return get_checkpoint_files(checkpoint_dir, "*_model_states.pt")


def parse_model_states(files):
    zero_model_states = []
    for file in files:
        state_dict = torch.load(file, map_location=device)

        if BUFFER_NAMES not in state_dict:
            raise ValueError(f"{file} is not a model state checkpoint")
        buffer_names = state_dict[BUFFER_NAMES]
        if debug:
            print("Found buffers:", buffer_names)

        # recover just the buffers while restoring them to fp32 if they were saved in fp16
        buffers = {k: v.float() for k, v in state_dict["module"].items() if k in buffer_names}
        param_shapes = state_dict[PARAM_SHAPES]

        # collect parameters that are included in param_shapes
        param_names = []
        for s in param_shapes:
            for name in s.keys():
                param_names.append(name)

        # update with frozen parameters
        frozen_param_shapes = state_dict.get(FROZEN_PARAM_SHAPES, None)
        if frozen_param_shapes is not None:
            if debug:
                print(f"Found frozen_param_shapes: {frozen_param_shapes}")
            param_names += list(frozen_param_shapes.keys())

        # handle shared params
        shared_params = [[k, v] for k, v in state_dict["shared_params"].items()]

        ds_version = state_dict.get(DS_VERSION, None)

        frozen_param_fragments = state_dict.get(FROZEN_PARAM_FRAGMENTS, None)

        z_model_state = zero_model_state(buffers=buffers,
                                         param_shapes=param_shapes,
                                         shared_params=shared_params,
                                         ds_version=ds_version,
                                         frozen_param_shapes=frozen_param_shapes,
                                         frozen_param_fragments=frozen_param_fragments)
        zero_model_states.append(z_model_state)

    return zero_model_states


def parse_optim_states(files, ds_checkpoint_dir):

    total_files = len(files)
    state_dicts = []
    for f in files:
        state_dicts.append(torch.load(f, map_location=device))

    if not ZERO_STAGE in state_dicts[0][OPTIMIZER_STATE_DICT]:
        raise ValueError(f"{files[0]} is not a zero checkpoint")
    zero_stage = state_dicts[0][OPTIMIZER_STATE_DICT][ZERO_STAGE]
    world_size = state_dicts[0][OPTIMIZER_STATE_DICT][PARTITION_COUNT]

    # For ZeRO-2 each param group can have different partition_count as data parallelism for expert
    # parameters can be different from data parallelism for non-expert parameters. So we can just
    # use the max of the partition_count to get the dp world_size.

    if type(world_size) is list:
        world_size = max(world_size)

    if world_size != total_files:
        raise ValueError(
            f"Expected {world_size} of '*_optim_states.pt' under '{ds_checkpoint_dir}' but found {total_files} files. "
            "Possibly due to an overwrite of an old checkpoint, or a checkpoint didn't get saved by one or more processes."
        )

    # the groups are named differently in each stage
    if zero_stage == 2:
        fp32_groups_key = SINGLE_PARTITION_OF_FP32_GROUPS
    elif zero_stage == 3:
        fp32_groups_key = FP32_FLAT_GROUPS
    else:
        raise ValueError(f"unknown zero stage {zero_stage}")

    if zero_stage == 2:
        fp32_flat_groups = [state_dicts[i][OPTIMIZER_STATE_DICT][fp32_groups_key] for i in range(len(state_dicts))]
    elif zero_stage == 3:
        # if there is more than one param group, there will be multiple flattened tensors - one
        # flattened tensor per group - for simplicity merge them into a single tensor
        #
        # XXX: could make the script more memory efficient for when there are multiple groups - it
        # will require matching the sub-lists of param_shapes for each param group flattened tensor

        fp32_flat_groups = [
            torch.cat(state_dicts[i][OPTIMIZER_STATE_DICT][fp32_groups_key], 0) for i in range(len(state_dicts))
        ]

    return zero_stage, world_size, fp32_flat_groups


def _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir):
    """
    Returns fp32 state_dict reconstructed from ds checkpoint

    Args:
        - ``ds_checkpoint_dir``: path to the deepspeed checkpoint folder (where the optimizer files are)

    """
    print(f"Processing zero checkpoint '{ds_checkpoint_dir}'")

    optim_files = get_optim_files(ds_checkpoint_dir)
    zero_stage, world_size, fp32_flat_groups = parse_optim_states(optim_files, ds_checkpoint_dir)
    print(f"Detected checkpoint of type zero stage {zero_stage}, world_size: {world_size}")

    model_files = get_model_state_files(ds_checkpoint_dir)

    zero_model_states = parse_model_states(model_files)
    print(f'Parsing checkpoint created by deepspeed=={zero_model_states[0].ds_version}')

    if zero_stage == 2:
        return _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states)
    elif zero_stage == 3:
        return _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states)


def _zero2_merge_frozen_params(state_dict, zero_model_states):
    if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
        return

    frozen_param_shapes = zero_model_states[0].frozen_param_shapes
    frozen_param_fragments = zero_model_states[0].frozen_param_fragments

    if debug:
        num_elem = sum(s.numel() for s in frozen_param_shapes.values())
        print(f'rank 0: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')

    wanted_params = len(frozen_param_shapes)
    wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
    avail_numel = sum([p.numel() for p in frozen_param_fragments.values()])
    print(f'Frozen params: Have {avail_numel} numels to process.')
    print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')

    total_params = 0
    total_numel = 0
    for name, shape in frozen_param_shapes.items():
        total_params += 1
        unpartitioned_numel = shape.numel()
        total_numel += unpartitioned_numel

        state_dict[name] = frozen_param_fragments[name]

        if debug:
            print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")

    print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")


def _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
    param_shapes = zero_model_states[0].param_shapes

    # Reconstruction protocol:
    #
    # XXX: document this
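    # (Sketch of the intended documentation, added for readability: under ZeRO-2
    # every rank stores, per optimizer param group, one fp32 partition of that
    # group's flattened parameters. Concatenating the per-rank partitions restores
    # each group's full flat tensor, and the individual params are then carved out
    # sequentially using the shapes recorded in param_shapes.)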

    if debug:
        for i in range(world_size):
            for j in range(len(fp32_flat_groups[0])):
                print(f"{FP32_FLAT_GROUPS}[{i}][{j}].shape={fp32_flat_groups[i][j].shape}")

    # XXX: memory usage doubles here (zero2)
    num_param_groups = len(fp32_flat_groups[0])
    merged_single_partition_of_fp32_groups = []
    for i in range(num_param_groups):
        merged_partitions = [sd[i] for sd in fp32_flat_groups]
        full_single_fp32_vector = torch.cat(merged_partitions, 0)
        merged_single_partition_of_fp32_groups.append(full_single_fp32_vector)
    avail_numel = sum(
        [full_single_fp32_vector.numel() for full_single_fp32_vector in merged_single_partition_of_fp32_groups])

    if debug:
        wanted_params = sum([len(shapes) for shapes in param_shapes])
        wanted_numel = sum([sum(shape.numel() for shape in shapes.values()) for shapes in param_shapes])
        # not asserting if there is a mismatch due to possible padding
        print(f"Have {avail_numel} numels to process.")
        print(f"Need {wanted_numel} numels in {wanted_params} params.")

    # params
    # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
    # out-of-core computing solution
    total_numel = 0
    total_params = 0
    for shapes, full_single_fp32_vector in zip(param_shapes, merged_single_partition_of_fp32_groups):
        offset = 0
        avail_numel = full_single_fp32_vector.numel()
        for name, shape in shapes.items():

            unpartitioned_numel = shape.numel()
            total_numel += unpartitioned_numel
            total_params += 1

            if debug:
                print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")
            state_dict[name] = full_single_fp32_vector.narrow(0, offset, unpartitioned_numel).view(shape)
            offset += unpartitioned_numel

        # Z2 started to align to 2*world_size to improve nccl performance. Therefore both offset and
        # avail_numel can differ by anywhere between 0..2*world_size. Due to two unrelated complex
        # paddings performed in the code it's almost impossible to predict the exact numbers w/o the
        # live optimizer object, so we are checking that the numbers are within the right range
        align_to = 2 * world_size

        def zero2_align(x):
            return align_to * math.ceil(x / align_to)

        if debug:
            print(f"original offset={offset}, avail_numel={avail_numel}")

        offset = zero2_align(offset)
        avail_numel = zero2_align(avail_numel)

        if debug:
            print(f"aligned offset={offset}, avail_numel={avail_numel}")

        # Sanity check
        if offset != avail_numel:
            raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")

    print(f"Reconstructed fp32 state dict with {total_params} params {total_numel} elements")


def _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states):
    state_dict = OrderedDict()

    # buffers
    buffers = zero_model_states[0].buffers
    state_dict.update(buffers)
    if debug:
        print(f"added {len(buffers)} buffers")

    _zero2_merge_frozen_params(state_dict, zero_model_states)

    _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)

    # recover shared parameters
    for pair in zero_model_states[0].shared_params:
        if pair[1] in state_dict:
            state_dict[pair[0]] = state_dict[pair[1]]

    return state_dict


def zero3_partitioned_param_info(unpartitioned_numel, world_size):
    remainder = unpartitioned_numel % world_size
    padding_numel = (world_size - remainder) if remainder else 0
    partitioned_numel = math.ceil(unpartitioned_numel / world_size)
    return partitioned_numel, padding_numel
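
# Worked example (illustrative): with unpartitioned_numel=10 and world_size=4,
# remainder=2, so padding_numel=2 and partitioned_numel=ceil(10/4)=3 -- each of
# the 4 ranks holds 3 elements, 12 in total, the last 2 of which are padding.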


def _zero3_merge_frozen_params(state_dict, world_size, zero_model_states):
    if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
        return

    if debug:
        for i in range(world_size):
            num_elem = sum(s.numel() for s in zero_model_states[i].frozen_param_fragments.values())
            print(f'rank {i}: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')

    frozen_param_shapes = zero_model_states[0].frozen_param_shapes
    wanted_params = len(frozen_param_shapes)
    wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
    avail_numel = sum([p.numel() for p in zero_model_states[0].frozen_param_fragments.values()]) * world_size
    print(f'Frozen params: Have {avail_numel} numels to process.')
    print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')

    total_params = 0
    total_numel = 0
    for name, shape in zero_model_states[0].frozen_param_shapes.items():
        total_params += 1
        unpartitioned_numel = shape.numel()
        total_numel += unpartitioned_numel

        param_frags = tuple(model_state.frozen_param_fragments[name] for model_state in zero_model_states)
        state_dict[name] = torch.cat(param_frags, 0).narrow(0, 0, unpartitioned_numel).view(shape)

        partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)

        if debug:
            print(
                f"Frozen params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
            )

    print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")


def _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
    param_shapes = zero_model_states[0].param_shapes
    avail_numel = fp32_flat_groups[0].numel() * world_size
    # Reconstruction protocol: For zero3 we need to zip the partitions together at boundary of each
    # param, re-consolidating each param, while dealing with padding if any
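    # (Illustrative: with world_size=2 and a 5-element param, each rank holds
    # ceil(5/2)=3 fp32 elements; torch.cat of the two 3-element slices yields 6
    # elements, and narrow(0, 0, 5) drops the final padding element.)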

    # merge list of dicts, preserving order
    param_shapes = {k: v for d in param_shapes for k, v in d.items()}

    if debug:
        for i in range(world_size):
            print(f"{FP32_FLAT_GROUPS}[{i}].shape={fp32_flat_groups[i].shape}")

    wanted_params = len(param_shapes)
    wanted_numel = sum(shape.numel() for shape in param_shapes.values())
    # not asserting if there is a mismatch due to possible padding
    avail_numel = fp32_flat_groups[0].numel() * world_size
    print(f"Trainable params: Have {avail_numel} numels to process.")
    print(f"Trainable params: Need {wanted_numel} numels in {wanted_params} params.")

    # params
    # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
    # out-of-core computing solution
    offset = 0
    total_numel = 0
    total_params = 0
    for name, shape in param_shapes.items():

        unpartitioned_numel = shape.numel()
        total_numel += unpartitioned_numel
        total_params += 1

        partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)

        if debug:
            print(
                f"Trainable params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
            )

        # XXX: memory usage doubles here
        state_dict[name] = torch.cat(
            tuple(fp32_flat_groups[i].narrow(0, offset, partitioned_numel) for i in range(world_size)),
            0).narrow(0, 0, unpartitioned_numel).view(shape)
        offset += partitioned_numel

    offset *= world_size

    # Sanity check
    if offset != avail_numel:
        raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")

    print(f"Reconstructed Trainable fp32 state dict with {total_params} params {total_numel} elements")


def _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states):
    state_dict = OrderedDict()

    # buffers
    buffers = zero_model_states[0].buffers
    state_dict.update(buffers)
    if debug:
        print(f"added {len(buffers)} buffers")

    _zero3_merge_frozen_params(state_dict, world_size, zero_model_states)

    _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)

    # recover shared parameters
    for pair in zero_model_states[0].shared_params:
        if pair[1] in state_dict:
            state_dict[pair[0]] = state_dict[pair[1]]

    return state_dict


def get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag=None):
    """
    Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated state_dict that can be loaded with
    ``load_state_dict()`` and used for training without DeepSpeed or shared with others, for example
    via a model hub.

    Args:
        - ``checkpoint_dir``: path to the desired checkpoint folder
        - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in 'latest' file. e.g., ``global_step14``

    Returns:
        - pytorch ``state_dict``

    Note: this approach may not work if your application doesn't have sufficient free CPU memory and
    you may need to use the offline approach using the ``zero_to_fp32.py`` script that is saved with
    the checkpoint.

    A typical usage might be ::

        from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
        # do the training and checkpoint saving
        state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir) # already on cpu
        model = model.cpu() # move to cpu
        model.load_state_dict(state_dict)
        # submit to model hub or save the model to share with others

    In this example the ``model`` will no longer be usable in the deepspeed context of the same
    application. i.e. you will need to re-initialize the deepspeed engine, since
    ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.

    If you want it all done for you, use ``load_state_dict_from_zero_checkpoint`` instead.

    """
    if tag is None:
        latest_path = os.path.join(checkpoint_dir, 'latest')
        if os.path.isfile(latest_path):
            with open(latest_path, 'r') as fd:
                tag = fd.read().strip()
        else:
            raise ValueError(f"Unable to find 'latest' file at {latest_path}")

    ds_checkpoint_dir = os.path.join(checkpoint_dir, tag)

    if not os.path.isdir(ds_checkpoint_dir):
        raise FileNotFoundError(f"Directory '{ds_checkpoint_dir}' doesn't exist")

    return _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir)


def convert_zero_checkpoint_to_fp32_state_dict(checkpoint_dir, output_file, tag=None):
    """
    Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict`` file that can be
    loaded with ``torch.load(file)`` + ``load_state_dict()`` and used for training without DeepSpeed.

    Args:
        - ``checkpoint_dir``: path to the desired checkpoint folder. (one that contains the tag-folder, like ``global_step14``)
        - ``output_file``: path to the pytorch fp32 state_dict output file (e.g. path/pytorch_model.bin)
        - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in the file named ``latest`` in the checkpoint folder, e.g., ``global_step14``
    """

    state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag)
    print(f"Saving fp32 state dict to {output_file}")
    torch.save(state_dict, output_file)


def load_state_dict_from_zero_checkpoint(model, checkpoint_dir, tag=None):
    """
    1. Put the provided model to cpu
    2. Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict``
    3. Load it into the provided model

    Args:
        - ``model``: the model object to update
        - ``checkpoint_dir``: path to the desired checkpoint folder. (one that contains the tag-folder, like ``global_step14``)
        - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in the file named ``latest`` in the checkpoint folder, e.g., ``global_step14``

    Returns:
        - ``model``: modified model

    Make sure you have plenty of CPU memory available before you call this function. If you don't
    have enough use the ``zero_to_fp32.py`` utility to do the conversion. You will find it
    conveniently placed for you in the checkpoint folder.

    A typical usage might be ::

        from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
        model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
        # submit to model hub or save the model to share with others

    Note, that once this was run, the ``model`` will no longer be usable in the deepspeed context
    of the same application. i.e. you will need to re-initialize the deepspeed engine, since
    ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.

    """
    logger.info(f"Extracting fp32 weights")
    state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag)

    logger.info(f"Overwriting model with fp32 weights")
    model = model.cpu()
    model.load_state_dict(state_dict, strict=False)

    return model


if __name__ == "__main__":

    parser = argparse.ArgumentParser()
    parser.add_argument("checkpoint_dir",
                        type=str,
                        help="path to the desired checkpoint folder, e.g., path/checkpoint-12")
    parser.add_argument(
        "output_file",
        type=str,
        help="path to the pytorch fp32 state_dict output file (e.g. path/checkpoint-12/pytorch_model.bin)")
    parser.add_argument("-d", "--debug", action='store_true', help="enable debug")
    args = parser.parse_args()

    debug = args.debug

    convert_zero_checkpoint_to_fp32_state_dict(args.checkpoint_dir, args.output_file)