Zorro123444 committed on
Commit
878525b
1 Parent(s): 65b27de

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,201 @@
1
+ ---
2
+ pipeline_tag: visual-question-answering
3
+ language:
4
+ - en
5
+ - zh
6
+ datasets:
7
+ - openbmb/RLAIF-V-Dataset
8
+ ---
9
+
10
+
11
+ <h1>A GPT-4V Level Multimodal LLM on Your Phone</h1>
12
+
13
+ [GitHub](https://github.com/OpenBMB/MiniCPM-V) | [Demo](https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5) | <a href="https://github.com/OpenBMB/MiniCPM-V/blob/main/docs/wechat.md" target="_blank"> WeChat</a>
14
+
15
+
16
+ ## News <!-- omit in toc -->
17
+
18
+ #### 📌 Pinned
19
+
20
+
21
+ * [2024.08.10] 🚀🚀🚀 MiniCPM-Llama3-V 2.5 is now fully supported by [official](https://github.com/ggerganov/llama.cpp) llama.cpp! GGUF models of various sizes are available [here](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf).
22
+ * [2024.08.06] 🔥🔥🔥 We open-source [MiniCPM-V 2.6](https://huggingface.co/openbmb/MiniCPM-V-2_6), which outperforms GPT-4V on single-image, multi-image, and video understanding. It advances the popular features of MiniCPM-Llama3-V 2.5 and supports real-time video understanding on iPad. Try it now!
23
+ * [2024.08.03] MiniCPM-Llama3-V 2.5 technical report is released! See [here](https://github.com/OpenBMB/MiniCPM-V/tree/main/docs/MiniCPM_Llama3_V_25_technical_report.pdf).
24
+ * [2024.07.19] MiniCPM-Llama3-V 2.5 supports vLLM now! See [here](https://github.com/OpenBMB/MiniCPM-V/tree/main?tab=readme-ov-file#vllm).
25
+ * [2024.05.28] 🚀🚀🚀 MiniCPM-Llama3-V 2.5 is now fully supported in llama.cpp and ollama! Please pull the latest code **from our provided forks** ([llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-v2.5/examples/minicpmv/README.md), [ollama](https://github.com/OpenBMB/ollama/tree/minicpm-v2.5/examples/minicpm-v2.5)). GGUF models in various sizes are available [here](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf/tree/main). The MiniCPM-Llama3-V 2.5 series is **not supported by the official repositories yet**, and we are working hard to merge PRs. Please stay tuned! Visit our [GitHub](https://github.com/OpenBMB/MiniCPM-V) repository for more information!
26
+ * [2024.05.28] 💫 We now support LoRA fine-tuning for MiniCPM-Llama3-V 2.5, using only 2 V100 GPUs! See more statistics [here](https://github.com/OpenBMB/MiniCPM-V/tree/main/finetune#model-fine-tuning-memory-usage-statistics).
27
+ * [2024.05.23] 🔥🔥🔥 MiniCPM-V tops GitHub Trending and HuggingFace Trending! Our demo, recommended by Hugging Face Gradio’s official account, is available [here](https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5). Come and try it out!
28
+
29
+ <br>
30
+
31
+ * [2024.06.03] You can now run MiniCPM-Llama3-V 2.5 on multiple low-VRAM GPUs (12 GB or 16 GB) by distributing the model's layers across them. For more details, check this [link](https://github.com/OpenBMB/MiniCPM-V/blob/main/docs/inference_on_multiple_gpus.md); a rough sketch is shown after this list.
32
+ * [2024.05.25] MiniCPM-Llama3-V 2.5 now supports streaming outputs and customized system prompts. Try it [here](#usage).
33
+ * [2024.05.24] We release the [MiniCPM-Llama3-V 2.5 gguf](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf), which supports [llama.cpp](https://github.com/OpenBMB/MiniCPM-V/tree/main?tab=readme-ov-file#inference-with-llamacpp) inference and provides smooth decoding at 6–8 tokens/s on mobile phones. Try it now!
34
+ * [2024.05.23] 🔍 We've released a comprehensive comparison between Phi-3-vision-128k-instruct and MiniCPM-Llama3-V 2.5, including benchmark evaluations, multilingual capabilities, and inference efficiency 🌟📊🌍🚀. Click [here](https://github.com/OpenBMB/MiniCPM-V/blob/main/docs/compare_with_phi-3_vision.md) to view more details.
35
+ * [2024.05.20] We open-source MiniCPM-Llama3-V 2.5! It has improved OCR capability, supports 30+ languages, and is the first end-side MLLM to achieve GPT-4V-level performance! We provide [efficient inference](#deployment-on-mobile-phone) and [simple fine-tuning](https://github.com/OpenBMB/MiniCPM-V/blob/main/finetune/readme.md). Try it now!
36
+
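For the multi-GPU setup mentioned in the news item above, here is a rough, hypothetical sketch using Transformers' generic `device_map='auto'` sharding (which requires the `accelerate` package); the linked guide describes the project's own layer-distribution recipe, which may place modules differently:

```python
# Hypothetical sketch only: shard the model over the visible low-VRAM GPUs with the
# generic `device_map='auto'` mechanism (requires `accelerate`). The linked guide uses
# the project's own layer-distribution script, which may place modules differently.
import torch
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained(
    'openbmb/MiniCPM-Llama3-V-2_5',
    trust_remote_code=True,
    torch_dtype=torch.float16,
    device_map='auto',  # spread layers across, e.g., two 12 GB / 16 GB GPUs
)
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5', trust_remote_code=True)
model.eval()
```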
37
+
38
+ ## Model Summary
39
+
40
+ **MiniCPM-Llama3-V 2.5** is the latest model in the MiniCPM-V series. The model is built on SigLip-400M and Llama3-8B-Instruct with a total of 8B parameters. It exhibits a significant performance improvement over MiniCPM-V 2.0. Notable features of MiniCPM-Llama3-V 2.5 include:
41
+
42
+ - 🔥 **Leading Performance.**
43
+ MiniCPM-Llama3-V 2.5 has achieved an average score of 65.1 on OpenCompass, a comprehensive evaluation over 11 popular benchmarks. **With only 8B parameters, it surpasses widely used proprietary models like GPT-4V-1106, Gemini Pro, Claude 3 and Qwen-VL-Max** and greatly outperforms other Llama 3-based MLLMs.
44
+
45
+ - 💪 **Strong OCR Capabilities.**
46
+ MiniCPM-Llama3-V 2.5 can process images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344), achieving a **700+ score on OCRBench, surpassing proprietary models such as GPT-4o, GPT-4V-0409, Qwen-VL-Max and Gemini Pro**. Based on recent user feedback, MiniCPM-Llama3-V 2.5 has enhanced full-text OCR extraction, table-to-markdown conversion, and other high-utility capabilities, and has further strengthened its instruction-following and complex reasoning abilities, improving the multimodal interaction experience.
47
+
48
+ - 🏆 **Trustworthy Behavior.**
49
+ Leveraging the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) method (the newest technique in the [RLHF-V](https://github.com/RLHF-V) [CVPR'24] series), MiniCPM-Llama3-V 2.5 exhibits more trustworthy behavior. It achieves a **10.3%** hallucination rate on Object HalBench, lower than GPT-4V-1106 (13.6%), the best-level performance within the open-source community. [Data released](https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset).
50
+
51
+ - 🌏 **Multilingual Support.**
52
+ Thanks to the strong multilingual capabilities of Llama 3 and the cross-lingual generalization technique from [VisCPM](https://github.com/OpenBMB/VisCPM), MiniCPM-Llama3-V 2.5 extends its bilingual (Chinese-English) multimodal capabilities to **over 30 languages, including German, French, Spanish, Italian, Korean, Japanese, etc.** [All Supported Languages](./assets/minicpm-llama-v-2-5_languages.md).
53
+
54
+ - 🚀 **Efficient Deployment.**
55
+ MiniCPM-Llama3-V 2.5 systematically employs **model quantization, CPU optimizations, NPU optimizations and compilation optimizations**, achieving high-efficiency deployment on edge devices. For mobile phones with Qualcomm chips, we have integrated the NPU acceleration framework QNN into llama.cpp for the first time. After systematic optimization, MiniCPM-Llama3-V 2.5 has realized a **150-fold acceleration in end-side multimodal image encoding** and a **3-fold increase in language decoding speed**.
56
+
57
+ - 💫 **Easy Usage.**
58
+ MiniCPM-Llama3-V 2.5 can be easily used in various ways: (1) [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-v2.5/examples/minicpmv/README.md) and [ollama](https://github.com/OpenBMB/ollama/tree/minicpm-v2.5/examples/minicpm-v2.5) support for efficient CPU inference on local devices, (2) [GGUF](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf) format quantized models in 16 sizes, (3) efficient [LoRA](https://github.com/OpenBMB/MiniCPM-V/tree/main/finetune#lora-finetuning) fine-tuning with only 2 V100 GPUs, (4) [streaming output](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5#usage), (5) quick local WebUI demo setup with [Gradio](https://github.com/OpenBMB/MiniCPM-V/blob/main/web_demo_2.5.py) and [Streamlit](https://github.com/OpenBMB/MiniCPM-V/blob/main/web_demo_streamlit-2_5.py), and (6) interactive demos on [HuggingFace Spaces](https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5).
59
+
60
+ ### Evaluation <!-- omit in toc -->
61
+
62
+ Results on TextVQA, DocVQA, OCRBench, OpenCompass MultiModal Avg, MME, MMBench, MMMU, MathVista, LLaVA Bench, RealWorld QA, and Object HalBench.
63
+
64
+ <div align="center">
65
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/64abc4aa6cadc7aca585dddf/v2KE3wqQgM05ZW3dH2wbx.png" width="110%" />
66
+ </div>
67
+
68
+
69
+ Evaluation results of multilingual LLaVA Bench
70
+ <div align="center">
71
+ <img src="assets/minicpmv-llama3-v2.5/llavabench_compare.png" width="110%" />
72
+ </div>
73
+
74
+
75
+ ### Examples <!-- omit in toc -->
76
+
77
+ <table align="center">
78
+ <p align="center">
79
+ <img src="assets/minicpmv-llama3-v2.5/cases_all.png" width=95%/>
80
+ </p>
81
+ </table>
82
+
83
+ We deploy MiniCPM-Llama3-V 2.5 on end devices. The demo video is a raw screen recording on a Xiaomi 14 Pro without editing.
84
+
85
+ <table align="center">
86
+ <p align="center">
87
+ <img src="assets/gif_cases/ticket.gif" width=40% style="display:inline-block;"/>
88
+ <img src="assets/gif_cases/meal_plan.gif" width=40% style="display:inline-block;"/>
89
+ </p>
90
+ </table>
91
+
92
+ <table align="center">
93
+ <p align="center">
94
+ <img src="assets/gif_cases/1-4.gif" width=80%/>
95
+ </p>
96
+ </table>
97
+
98
+
99
+
100
+ ## Demo
101
+ Click here to try out the Demo of [MiniCPM-Llama3-V 2.5](https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5).
102
+
103
+ ## Deployment on Mobile Phone
104
+ Coming soon.
105
+
106
+ ## Usage
107
+ Inference using Hugging Face Transformers on NVIDIA GPUs. Requirements tested on Python 3.10:
108
+ ```
109
+ Pillow==10.1.0
110
+ torch==2.1.2
111
+ torchvision==0.16.2
112
+ transformers==4.40.0
113
+ sentencepiece==0.1.99
114
+ ```
115
+
116
+ ```python
117
+ # test.py
118
+ import torch
119
+ from PIL import Image
120
+ from transformers import AutoModel, AutoTokenizer
121
+
122
+ model = AutoModel.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5', trust_remote_code=True, torch_dtype=torch.float16)
123
+ model = model.to(device='cuda')
124
+
125
+ tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5', trust_remote_code=True)
126
+ model.eval()
127
+
128
+ image = Image.open('xx.jpg').convert('RGB')
129
+ question = 'What is in the image?'
130
+ msgs = [{'role': 'user', 'content': question}]
131
+
132
+ res = model.chat(
133
+ image=image,
134
+ msgs=msgs,
135
+ tokenizer=tokenizer,
136
+ sampling=True, # if sampling=False, beam_search will be used by default
137
+ temperature=0.7,
138
+ # system_prompt='' # pass system_prompt if needed
139
+ )
140
+ print(res)
141
+
142
+ ## if you want to use streaming, please make sure sampling=True and stream=True
143
+ ## the model.chat will return a generator
144
+ res = model.chat(
145
+ image=image,
146
+ msgs=msgs,
147
+ tokenizer=tokenizer,
148
+ sampling=True,
149
+ temperature=0.7,
150
+ stream=True
151
+ )
152
+
153
+ generated_text = ""
154
+ for new_text in res:
155
+ generated_text += new_text
156
+ print(new_text, flush=True, end='')
157
+ ```
158
+
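A customized system prompt is passed through the `system_prompt` argument shown (commented out) above. Below is a minimal sketch reusing `model`, `tokenizer`, `image`, and `msgs` from the example; the prompt text itself is only an illustration:

```python
# Minimal sketch: pass a customized system prompt via the `system_prompt` argument of model.chat
res = model.chat(
    image=image,
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=True,
    temperature=0.7,
    system_prompt='You are a helpful assistant that describes images precisely.'  # illustrative prompt
)
print(res)
```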
159
+ Please see [GitHub](https://github.com/OpenBMB/MiniCPM-V) for more details about usage.
160
+
161
+
162
+ ## Inference with llama.cpp<a id="llamacpp"></a>
163
+ MiniCPM-Llama3-V 2.5 can now run with llama.cpp! See our fork of [llama.cpp](https://github.com/OpenBMB/llama.cpp/tree/minicpm-v2.5/examples/minicpmv) for more details.
164
+
165
+
166
+ ## Int4 quantized version
167
+ Download the int4 quantized version for lower GPU memory (8GB) usage: [MiniCPM-Llama3-V-2_5-int4](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4).
168
+
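A minimal loading sketch, assuming the int4 checkpoint exposes the same `AutoModel`/`chat` remote-code interface as the fp16 model in [Usage](#usage) (the quantized weights are placed on the GPU at load time, so no explicit `.to('cuda')` call is made):

```python
# Sketch only: load the int4-quantized checkpoint; assumes the same remote-code chat API as above.
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5-int4', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5-int4', trust_remote_code=True)
model.eval()

image = Image.open('xx.jpg').convert('RGB')
msgs = [{'role': 'user', 'content': 'What is in the image?'}]
print(model.chat(image=image, msgs=msgs, tokenizer=tokenizer, sampling=True, temperature=0.7))
```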
169
+ ## MiniCPM-V 2.0 <!-- omit in toc -->
170
+ Please see the info about MiniCPM-V 2.0 [here](https://huggingface.co/openbmb/MiniCPM-V-2).
171
+
172
+ ## License
173
+ #### Model License
174
+ * The code in this repo is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License.
175
+ * The usage of MiniCPM-V series model weights must strictly follow [MiniCPM Model License.md](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md).
176
+ * The models and weights of MiniCPM are completely free for academic research. After filling out a ["questionnaire"](https://modelbest.feishu.cn/share/base/form/shrcnpV5ZT9EJ6xYjh3Kx0J6v8g) for registration, they are also available for free commercial use.
177
+
178
+
179
+
180
+ #### Statement
181
+ * As an LLM, MiniCPM-Llama3-V 2.5 generates content by learning from a large amount of text, but it cannot comprehend, express personal opinions, or make value judgments. Anything generated by MiniCPM-Llama3-V 2.5 does not represent the views and positions of the model developers.
182
+ * We will not be liable for any problems arising from the use of the MiniCPM-V open-source model, including but not limited to data security issues, risks of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination, or abuse of the model.
183
+
184
+ ## Key Techniques and Other Multimodal Projects
185
+
186
+ 👏 Welcome to explore key techniques of MiniCPM-V 2.6 and other multimodal projects of our team:
187
+
188
+ [VisCPM](https://github.com/OpenBMB/VisCPM/tree/main) | [RLHF-V](https://github.com/RLHF-V/RLHF-V) | [LLaVA-UHD](https://github.com/thunlp/LLaVA-UHD) | [RLAIF-V](https://github.com/RLHF-V/RLAIF-V)
189
+
190
+ ## Citation
191
+
192
+ If you find our work helpful, please consider citing our papers 📝 and liking this project ❤️!
193
+
194
+ ```bib
195
+ @article{yao2024minicpmv,
196
+ title={MiniCPM-V: A GPT-4V Level MLLM on Your Phone},
197
+ author={Yao, Yuan and Yu, Tianyu and Zhang, Ao and Wang, Chongyi and Cui, Junbo and Zhu, Hongji and Cai, Tianchi and Li, Haoyu and Zhao, Weilin and He, Zhihui and Chen, Qianyu and Zhou, Huarong and Zou, Zhensheng and Zhang, Haoye and Hu, Shengding and Zheng, Zhi and Zhou, Jie and Cai, Jie and Han, Xu and Zeng, Guoyang and Li, Dahai and Liu, Zhiyuan and Sun, Maosong},
198
+ journal={arXiv preprint arXiv:2408.01800},
199
+ year={2024},
200
+ }
201
+ ```
config.json CHANGED
@@ -1,14 +1,15 @@
1
  {
2
  "_name_or_path": "openbmb/MiniCPM-Llama3-V-2_5",
 
3
  "architectures": [
4
  "MiniCPMV"
5
  ],
6
  "attention_bias": false,
7
  "attention_dropout": 0.0,
8
  "auto_map": {
9
- "AutoConfig": "openbmb/MiniCPM-Llama3-V-2_5--configuration_minicpm.MiniCPMVConfig",
10
- "AutoModel": "openbmb/MiniCPM-Llama3-V-2_5--modeling_minicpmv.MiniCPMV",
11
- "AutoModelForCausalLM": "openbmb/MiniCPM-Llama3-V-2_5--modeling_minicpmv.MiniCPMV"
12
  },
13
  "batch_vision_input": true,
14
  "bos_token_id": 128000,
@@ -20,7 +21,6 @@
20
  "initializer_range": 0.02,
21
  "intermediate_size": 14336,
22
  "max_position_embeddings": 8192,
23
- "mlp_bias": false,
24
  "mm_use_im_start_end": true,
25
  "model_type": "minicpmv",
26
  "num_attention_heads": 32,
@@ -34,14 +34,14 @@
34
  "rope_theta": 500000.0,
35
  "slice_config": {
36
  "max_slice_nums": 9,
 
37
  "model_type": "minicpmv"
38
  },
39
  "slice_mode": true,
40
  "tie_word_embeddings": false,
41
- "torch_dtype": "float32",
42
- "transformers_version": "4.44.2",
43
- "use_cache": true,
44
- "version": "2.5",
45
  "vision_config": {
46
  "hidden_size": 1152,
47
  "image_size": 980,
 
1
  {
2
  "_name_or_path": "openbmb/MiniCPM-Llama3-V-2_5",
3
+ "version": "2.5",
4
  "architectures": [
5
  "MiniCPMV"
6
  ],
7
  "attention_bias": false,
8
  "attention_dropout": 0.0,
9
  "auto_map": {
10
+ "AutoConfig": "configuration_minicpm.MiniCPMVConfig",
11
+ "AutoModel": "modeling_minicpmv.MiniCPMV",
12
+ "AutoModelForCausalLM": "modeling_minicpmv.MiniCPMV"
13
  },
14
  "batch_vision_input": true,
15
  "bos_token_id": 128000,
 
21
  "initializer_range": 0.02,
22
  "intermediate_size": 14336,
23
  "max_position_embeddings": 8192,
 
24
  "mm_use_im_start_end": true,
25
  "model_type": "minicpmv",
26
  "num_attention_heads": 32,
 
34
  "rope_theta": 500000.0,
35
  "slice_config": {
36
  "max_slice_nums": 9,
37
+ "patch_size": 14,
38
  "model_type": "minicpmv"
39
  },
40
  "slice_mode": true,
41
  "tie_word_embeddings": false,
42
+ "torch_dtype": "float16",
43
+ "transformers_version": "4.40.0",
44
+ "use_cache": false,
 
45
  "vision_config": {
46
  "hidden_size": 1152,
47
  "image_size": 980,
configuration.json ADDED
@@ -0,0 +1 @@
1
+ {"framework":"Pytorch","task":"multimodal-dialogue"}
configuration_minicpm.py ADDED
@@ -0,0 +1,113 @@
1
+ # coding=utf-8
2
+ # Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
3
+ #
4
+ # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
5
+ # and OPT implementations in this library. It has been modified from its
6
+ # original forms to accommodate minor architectural differences compared
7
+ # to GPT-NeoX and OPT used by the Meta AI team that trained the model.
8
+ #
9
+ # Licensed under the Apache License, Version 2.0 (the "License");
10
+ # you may not use this file except in compliance with the License.
11
+ # You may obtain a copy of the License at
12
+ #
13
+ # http://www.apache.org/licenses/LICENSE-2.0
14
+ #
15
+ # Unless required by applicable law or agreed to in writing, software
16
+ # distributed under the License is distributed on an "AS IS" BASIS,
17
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
18
+ # See the License for the specific language governing permissions and
19
+ # limitations under the License.
20
+ """ MiniCPM model configuration"""
21
+ import os
22
+ from typing import Union
23
+
24
+ from transformers.utils import logging
25
+ from transformers import LlamaConfig, PretrainedConfig
26
+ from transformers.models.idefics2.modeling_idefics2 import Idefics2VisionConfig
27
+
28
+ logger = logging.get_logger(__name__)
29
+
30
+
31
+ class MiniCPMVSliceConfig(PretrainedConfig):
32
+ model_type = "minicpmv"
33
+
34
+ def __init__(
35
+ self,
36
+ patch_size=14,
37
+ max_slice_nums=9,
38
+ scale_resolution=448,
39
+ **kwargs,
40
+ ):
41
+ super().__init__(**kwargs)
42
+ self.patch_size = patch_size
43
+ self.max_slice_nums = max_slice_nums
44
+ self.scale_resolution = scale_resolution
45
+
46
+ @classmethod
47
+ def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) -> "PretrainedConfig":
48
+ cls._set_token_in_kwargs(kwargs)
49
+
50
+ config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
51
+
52
+ if config_dict.get("model_type") == "minicpmv":
53
+ config_dict = config_dict["slice_config"]
54
+
55
+ if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type:
56
+ logger.warning(
57
+ f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
58
+ f"{cls.model_type}. This is not supported for all configurations of models and can yield errors."
59
+ )
60
+
61
+ return cls.from_dict(config_dict, **kwargs)
62
+
63
+
64
+
65
+ class MiniCPMVConfig(LlamaConfig):
66
+ model_type = "minicpmv"
67
+ keys_to_ignore_at_inference = ["past_key_values"]
68
+
69
+ default_vision_config = {
70
+ "hidden_size": 1152,
71
+ "image_size": 980,
72
+ "intermediate_size": 4304,
73
+ "model_type": "idefics2",
74
+ "num_attention_heads": 16,
75
+ "num_hidden_layers": 27,
76
+ "patch_size": 14,
77
+ }
78
+
79
+ def __init__(
80
+ self,
81
+ use_cache=True,
82
+ query_num=64,
83
+ image_size=448,
84
+ drop_vision_last_layer=True,
85
+ batch_vision_input=True,
86
+ slice_config=None,
87
+ vision_config=None,
88
+ **kwargs,
89
+ ):
90
+ self.use_cache = use_cache
91
+ self.query_num = query_num
92
+ self.image_size = image_size
93
+ self.drop_vision_last_layer = drop_vision_last_layer
94
+ self.batch_vision_input = batch_vision_input
95
+
96
+ if slice_config is None:
97
+ self.slice_config = MiniCPMVSliceConfig(max_slice_nums=1)
98
+ else:
99
+ self.slice_config = MiniCPMVSliceConfig(**slice_config)
100
+ self.slice_mode = True
101
+
102
+ # same as HuggingFaceM4/siglip-so400m-14-980-flash-attn2-navit
103
+ if vision_config is None:
104
+ self.vision_config = Idefics2VisionConfig(**self.default_vision_config)
105
+ logger.info("vision_config is None, using default vision config")
106
+ elif isinstance(vision_config, dict):
107
+ self.vision_config = Idefics2VisionConfig(**vision_config)
108
+ elif isinstance(vision_config, Idefics2VisionConfig):
109
+ self.vision_config = vision_config
110
+
111
+ self.patch_size = self.vision_config.patch_size
112
+
113
+ super().__init__(**kwargs)
generation_config.json CHANGED
@@ -2,5 +2,5 @@
2
  "_from_model_config": true,
3
  "bos_token_id": 128000,
4
  "eos_token_id": 128001,
5
- "transformers_version": "4.44.2"
6
  }
 
2
  "_from_model_config": true,
3
  "bos_token_id": 128000,
4
  "eos_token_id": 128001,
5
+ "transformers_version": "4.40.0"
6
  }
gitattributes ADDED
@@ -0,0 +1,37 @@
1
+ *.7z filter=lfs diff=lfs merge=lfs -text
2
+ *.arrow filter=lfs diff=lfs merge=lfs -text
3
+ *.bin filter=lfs diff=lfs merge=lfs -text
4
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
5
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
6
+ *.ftz filter=lfs diff=lfs merge=lfs -text
7
+ *.gz filter=lfs diff=lfs merge=lfs -text
8
+ *.h5 filter=lfs diff=lfs merge=lfs -text
9
+ *.joblib filter=lfs diff=lfs merge=lfs -text
10
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
11
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
12
+ *.model filter=lfs diff=lfs merge=lfs -text
13
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
14
+ *.npy filter=lfs diff=lfs merge=lfs -text
15
+ *.npz filter=lfs diff=lfs merge=lfs -text
16
+ *.onnx filter=lfs diff=lfs merge=lfs -text
17
+ *.ot filter=lfs diff=lfs merge=lfs -text
18
+ *.parquet filter=lfs diff=lfs merge=lfs -text
19
+ *.pb filter=lfs diff=lfs merge=lfs -text
20
+ *.pickle filter=lfs diff=lfs merge=lfs -text
21
+ *.pkl filter=lfs diff=lfs merge=lfs -text
22
+ *.pt filter=lfs diff=lfs merge=lfs -text
23
+ *.pth filter=lfs diff=lfs merge=lfs -text
24
+ *.rar filter=lfs diff=lfs merge=lfs -text
25
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
26
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
27
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
28
+ *.tar filter=lfs diff=lfs merge=lfs -text
29
+ *.tflite filter=lfs diff=lfs merge=lfs -text
30
+ *.tgz filter=lfs diff=lfs merge=lfs -text
31
+ *.wasm filter=lfs diff=lfs merge=lfs -text
32
+ *.xz filter=lfs diff=lfs merge=lfs -text
33
+ *.zip filter=lfs diff=lfs merge=lfs -text
34
+ *.zst filter=lfs diff=lfs merge=lfs -text
35
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ *.png filter=lfs diff=lfs merge=lfs -text
37
+ *.gif filter=lfs diff=lfs merge=lfs -text
image_processing_minicpmv.py ADDED
@@ -0,0 +1,402 @@
1
+ from typing import Optional, Union, Dict, Any
2
+
3
+ import torch
4
+ import math
5
+ import PIL.Image
6
+ import PIL.ImageSequence
7
+ import numpy as np
8
+ import PIL
9
+ from PIL import Image
10
+
11
+ from transformers.utils import TensorType, requires_backends, is_torch_dtype, is_torch_device
12
+ from transformers.image_processing_utils import BaseImageProcessor, BatchFeature
13
+ from transformers import AutoImageProcessor
14
+ from transformers.image_transforms import to_channel_dimension_format
15
+ from transformers.image_utils import (
16
+ ImageInput,
17
+ make_list_of_images,
18
+ valid_images,
19
+ is_torch_tensor,
20
+ to_numpy_array,
21
+ infer_channel_dimension_format,
22
+ ChannelDimension
23
+ )
24
+
25
+
26
+ def recursive_converter(converter, value):
27
+ if isinstance(value, list):
28
+ new_value = []
29
+ for v in value:
30
+ new_value += [recursive_converter(converter, v)]
31
+ return new_value
32
+ else:
33
+ return converter(value)
34
+
35
+
36
+ class MiniCPMVBatchFeature(BatchFeature):
37
+ r"""
38
+ Extend from BatchFeature for supporting various image size
39
+ """
40
+ def __init__(self, data: Optional[Dict[str, Any]] = None, tensor_type: Union[None, str, TensorType] = None):
41
+ super().__init__(data)
42
+ self.convert_to_tensors(tensor_type=tensor_type)
43
+
44
+ def convert_to_tensors(self, tensor_type: Optional[Union[str, TensorType]] = None):
45
+ if tensor_type is None:
46
+ return self
47
+
48
+ is_tensor, as_tensor = self._get_is_as_tensor_fns(tensor_type)
49
+
50
+ def converter(value):
51
+ try:
52
+ if not is_tensor(value):
53
+ tensor = as_tensor(value)
54
+ return tensor
55
+ except: # noqa E722
56
+ if key == "overflowing_values":
57
+ raise ValueError("Unable to create tensor returning overflowing values of different lengths. ")
58
+ raise ValueError(
59
+ "Unable to create tensor, you should probably activate padding "
60
+ "with 'padding=True' to have batched tensors with the same length."
61
+ )
62
+
63
+
64
+ for key, value in self.items():
65
+ self[key] = recursive_converter(converter, value)
66
+ return self
67
+
68
+ def to(self, *args, **kwargs) -> "MiniCPMVBatchFeature":
69
+ requires_backends(self, ["torch"])
70
+ import torch
71
+
72
+ def cast_tensor(v):
73
+ # check if v is a floating point
74
+ if torch.is_floating_point(v):
75
+ # cast and send to device
76
+ return v.to(*args, **kwargs)
77
+ elif device is not None:
78
+ return v.to(device=device)
79
+ else:
80
+ return v
81
+
82
+ new_data = {}
83
+ device = kwargs.get("device")
84
+ # Check if the args are a device or a dtype
85
+ if device is None and len(args) > 0:
86
+ # device should be always the first argument
87
+ arg = args[0]
88
+ if is_torch_dtype(arg):
89
+ # The first argument is a dtype
90
+ pass
91
+ elif isinstance(arg, str) or is_torch_device(arg) or isinstance(arg, int):
92
+ device = arg
93
+ else:
94
+ # it's something else
95
+ raise ValueError(f"Attempting to cast a BatchFeature to type {str(arg)}. This is not supported.")
96
+ # We cast only floating point tensors to avoid issues with tokenizers casting `LongTensor` to `FloatTensor`
97
+ for k, v in self.items():
98
+ new_data[k] = recursive_converter(cast_tensor, v)
99
+ self.data = new_data
100
+ return self
101
+
102
+
103
+ class MiniCPMVImageProcessor(BaseImageProcessor):
104
+ model_input_names = ["pixel_values"]
105
+
106
+ def __init__(
107
+ self,
108
+ max_slice_nums=9,
109
+ scale_resolution=448,
110
+ patch_size=14,
111
+ **kwargs):
112
+ super().__init__(**kwargs)
113
+ self.max_slice_nums = max_slice_nums
114
+ self.scale_resolution = scale_resolution
115
+ self.patch_size = patch_size
116
+ self.image_feature_size = kwargs.pop("image_feature_size", 64)
117
+ self.im_start_token = kwargs.pop("im_start", "<image>")
118
+ self.im_end_token = kwargs.pop("im_end", "</image>")
119
+ self.slice_start_token = kwargs.pop("slice_start", "<slice>")
120
+ self.slice_end_token = kwargs.pop("slice_end", "</slice>")
121
+ self.unk_token = kwargs.pop("unk", "<unk>")
122
+ self.mean = np.array(kwargs.pop("norm_mean", [0.5, 0.5, 0.5]))
123
+ self.std = np.array(kwargs.pop("norm_std", [0.5, 0.5, 0.5]))
124
+ self.version = kwargs.pop("version", 2.0)
125
+
126
+ def ensure_divide(self, length, patch_size):
127
+ return max(round(length / patch_size) * patch_size, patch_size)
128
+
129
+ def find_best_resize(self,
130
+ original_size,
131
+ scale_resolution,
132
+ patch_size,
133
+ allow_upscale=False):
134
+ width, height = original_size
135
+ if (width * height >
136
+ scale_resolution * scale_resolution) or allow_upscale:
137
+ r = width / height
138
+ height = int(scale_resolution / math.sqrt(r))
139
+ width = int(height * r)
140
+ best_width = self.ensure_divide(width, patch_size)
141
+ best_height = self.ensure_divide(height, patch_size)
142
+ return (best_width, best_height)
143
+
144
+ def get_refine_size(self,
145
+ original_size,
146
+ grid,
147
+ scale_resolution,
148
+ patch_size,
149
+ allow_upscale=False):
150
+ width, height = original_size
151
+ grid_x, grid_y = grid
152
+
153
+ refine_width = self.ensure_divide(width, grid_x)
154
+ refine_height = self.ensure_divide(height, grid_y)
155
+
156
+ grid_width = refine_width / grid_x
157
+ grid_height = refine_height / grid_y
158
+
159
+ best_grid_size = self.find_best_resize((grid_width, grid_height),
160
+ scale_resolution,
161
+ patch_size,
162
+ allow_upscale=allow_upscale)
163
+ refine_size = (best_grid_size[0] * grid_x, best_grid_size[1] * grid_y)
164
+ return refine_size
165
+
166
+ def split_to_patches(self, image, grid):
167
+ patches = []
168
+ width, height = image.size
169
+ grid_x = int(width / grid[0])
170
+ grid_y = int(height / grid[1])
171
+ for i in range(0, height, grid_y):
172
+ images = []
173
+ for j in range(0, width, grid_x):
174
+ box = (j, i, j + grid_x, i + grid_y)
175
+ patch = image.crop(box)
176
+ images.append(patch)
177
+ patches.append(images)
178
+ return patches
179
+
180
+ def slice_image(
181
+ self, image, max_slice_nums=9, scale_resolution=448, patch_size=14, never_split=False
182
+ ):
183
+ original_size = image.size
184
+ original_width, original_height = original_size
185
+ log_ratio = math.log(original_width / original_height)
186
+ ratio = original_width * original_height / (scale_resolution * scale_resolution)
187
+ multiple = min(math.ceil(ratio), max_slice_nums)
188
+
189
+ source_image = None
190
+ best_grid = None
191
+ patches = []
192
+
193
+ if multiple <= 1 or never_split:
194
+ # dont need to slice, upsample
195
+ best_size = self.find_best_resize(
196
+ original_size, scale_resolution, patch_size, allow_upscale=True
197
+ )
198
+ source_image = image.resize(best_size, resample=Image.Resampling.BICUBIC)
199
+ else:
200
+ candidate_split_grids_nums = []
201
+ for i in [multiple - 1, multiple, multiple + 1]:
202
+ if i == 1 or i > max_slice_nums:
203
+ continue
204
+ candidate_split_grids_nums.append(i)
205
+
206
+ # source image, down-sampling and ensure divided by patch_size
207
+ best_resize = self.find_best_resize(original_size, scale_resolution, patch_size)
208
+ source_image = image.copy().resize(best_resize, resample=Image.Resampling.BICUBIC)
209
+ candidate_grids = []
210
+
211
+ # find best grid
212
+ for split_grids_nums in candidate_split_grids_nums:
213
+ m = 1
214
+ while m <= split_grids_nums:
215
+ if split_grids_nums % m == 0:
216
+ candidate_grids.append([m, split_grids_nums // m])
217
+ m += 1
218
+
219
+ best_grid = [1, 1]
220
+ min_error = float("inf")
221
+ for grid in candidate_grids:
222
+ error = abs(log_ratio - math.log(grid[0] / grid[1]))
223
+ if error < min_error:
224
+ best_grid = grid
225
+ min_error = error
226
+
227
+ refine_size = self.get_refine_size(
228
+ original_size, best_grid, scale_resolution, patch_size, allow_upscale=True
229
+ )
230
+
231
+ refine_image = image.resize(refine_size, resample=Image.Resampling.BICUBIC)
232
+ patches = self.split_to_patches(refine_image, best_grid)
233
+
234
+ return source_image, patches, best_grid
235
+
236
+ def get_grid_placeholder(self, grid):
237
+ if grid is None:
238
+ return ""
239
+ image_placeholder = (
240
+ self.im_start_token
241
+ + self.unk_token * self.image_feature_size
242
+ + self.im_end_token
243
+ )
244
+
245
+ cols = grid[0]
246
+ rows = grid[1]
247
+ slices = []
248
+ for i in range(rows):
249
+ lines = []
250
+ for j in range(cols):
251
+ lines.append(image_placeholder)
252
+ slices.append("".join(lines))
253
+
254
+ slice_placeholder = self.slice_start_token + "\n".join(slices) + self.slice_end_token
255
+ return slice_placeholder
256
+
257
+ def get_sliced_images(self, image):
258
+ slice_images = []
259
+
260
+ source_image, patches, sliced_grid = self.slice_image(
261
+ image,
262
+ self.max_slice_nums, # default: 9
263
+ self.scale_resolution, # default: 448
264
+ self.patch_size # default: 14
265
+ )
266
+ slice_images.append(source_image)
267
+
268
+ if len(patches) > 0:
269
+ for i in range(len(patches)):
270
+ for j in range(len(patches[0])):
271
+ slice_images.append(patches[i][j])
272
+ return slice_images
273
+
274
+ def get_sliced_grid(self, image_size):
275
+ original_width, original_height = image_size
276
+ log_ratio = math.log(original_width / original_height)
277
+ ratio = original_width * original_height / (self.scale_resolution * self.scale_resolution)
278
+ multiple = min(math.ceil(ratio), self.max_slice_nums)
279
+ if multiple <= 1:
280
+ return None
281
+ candidate_split_grids_nums = []
282
+ for i in [multiple - 1, multiple, multiple + 1]:
283
+ if i == 1 or i > self.max_slice_nums:
284
+ continue
285
+ candidate_split_grids_nums.append(i)
286
+
287
+ candidate_grids = []
288
+ for split_grids_nums in candidate_split_grids_nums:
289
+ m = 1
290
+ while m <= split_grids_nums:
291
+ if split_grids_nums % m == 0:
292
+ candidate_grids.append([m, split_grids_nums // m])
293
+ m += 1
294
+
295
+ best_grid = [1, 1]
296
+ min_error = float("inf")
297
+ for grid in candidate_grids:
298
+ error = abs(log_ratio - math.log(grid[0] / grid[1]))
299
+ if error < min_error:
300
+ best_grid = grid
301
+ min_error = error
302
+
303
+ return best_grid
304
+
305
+ def get_slice_image_placeholder(self, image_size):
306
+ grid = self.get_sliced_grid(image_size=image_size)
307
+ return (
308
+ self.im_start_token
309
+ + self.unk_token * self.image_feature_size
310
+ + self.im_end_token
311
+ ) + self.get_grid_placeholder(grid=grid)
312
+
313
+ def to_pil_image(self, image, rescale=None) -> PIL.Image.Image:
314
+ """
315
+ Converts `image` to a PIL Image. Optionally rescales it and puts the channel dimension back as the last axis if
316
+ needed.
317
+
318
+ Args:
319
+ image (`PIL.Image.Image` or `numpy.ndarray` or `torch.Tensor`):
320
+ The image to convert to the PIL Image format.
321
+ rescale (`bool`, *optional*):
322
+ Whether or not to apply the scaling factor (to make pixel values integers between 0 and 255). Will
323
+ default to `True` if the image type is a floating type, `False` otherwise.
324
+ """
325
+ if isinstance(image, PIL.Image.Image):
326
+ return image
327
+ if is_torch_tensor(image):
328
+ image = image.numpy()
329
+
330
+ if isinstance(image, np.ndarray):
331
+ if rescale is None:
332
+ # rescale default to the array being of floating type.
333
+ rescale = isinstance(image.flat[0], np.floating)
334
+ # If the channel as been moved to first dim, we put it back at the end.
335
+ if image.ndim == 3 and image.shape[0] in [1, 3]:
336
+ image = image.transpose(1, 2, 0)
337
+ if rescale:
338
+ image = image * 255
339
+ image = image.astype(np.uint8)
340
+ return PIL.Image.fromarray(image)
341
+ return image
342
+
343
+ def reshape_by_patch(self, image):
344
+ """
345
+ :param image: shape [3, H, W]
346
+ :param patch_size:
347
+ :return: [3, patch_size, HW/patch_size]
348
+ """
349
+ image = torch.from_numpy(image)
350
+ patch_size = self.patch_size
351
+ patches = torch.nn.functional.unfold(
352
+ image,
353
+ (patch_size, patch_size),
354
+ stride=(patch_size, patch_size)
355
+ )
356
+
357
+ patches = patches.reshape(image.size(0), patch_size, patch_size, -1)
358
+ patches = patches.permute(0, 1, 3, 2).reshape(image.size(0), patch_size, -1)
359
+ return patches.numpy()
360
+
361
+ def preprocess(
362
+ self,
363
+ images: ImageInput,
364
+ do_pad: Optional[bool] = True, # TODO: add pad for MiniCPM-Llama3-V-2_5
365
+ return_tensors: Optional[Union[str, TensorType]] = None
366
+ ) -> MiniCPMVBatchFeature:
367
+ images = make_list_of_images(images)
368
+
369
+ if not valid_images(images):
370
+ raise ValueError(
371
+ "Invalid image type. Must be of type PIL.Image.Image, numpy.ndarray, "
372
+ "torch.Tensor, tf.Tensor or jax.ndarray."
373
+ )
374
+
375
+ images = [self.to_pil_image(image).convert("RGB") for image in images]
376
+ input_data_format = infer_channel_dimension_format(np.array(images[0]))
377
+
378
+ new_images = []
379
+ image_sizes = [image.size for image in images]
380
+ tgt_sizes = []
381
+ for image in images:
382
+ image_patches = self.get_sliced_images(image)
383
+ image_patches = [to_numpy_array(image).astype(np.float32) / 255 for image in image_patches]
384
+ image_patches = [
385
+ self.normalize(image=image, mean=self.mean, std=self.std, input_data_format=input_data_format)
386
+ for image in image_patches
387
+ ]
388
+ image_patches = [
389
+ to_channel_dimension_format(image, ChannelDimension.FIRST, input_channel_dim=input_data_format)
390
+ for image in image_patches
391
+ ]
392
+ for slice_image in image_patches:
393
+ new_images.append(self.reshape_by_patch(slice_image))
394
+ tgt_sizes.append(np.array((slice_image.shape[1] // self.patch_size, slice_image.shape[2] // self.patch_size)))
395
+
396
+ if tgt_sizes:
397
+ tgt_sizes = np.vstack(tgt_sizes)
398
+ return MiniCPMVBatchFeature(
399
+ data={"pixel_values": [new_images], "image_sizes": [image_sizes], "tgt_sizes": [tgt_sizes]}, tensor_type=return_tensors
400
+ )
401
+
402
+ AutoImageProcessor.register("MiniCPMVImageProcessor", MiniCPMVImageProcessor)
modeling_minicpmv.py ADDED
@@ -0,0 +1,364 @@
1
+ import math
2
+ import json
3
+ import torch
4
+ from threading import Thread
5
+ from copy import deepcopy
6
+ from PIL import Image
7
+ from torchvision import transforms
8
+ from transformers import LlamaPreTrainedModel, LlamaForCausalLM, TextIteratorStreamer
9
+ from transformers.models.idefics2.modeling_idefics2 import Idefics2VisionTransformer
10
+ from transformers import AutoProcessor
11
+
12
+ from .configuration_minicpm import MiniCPMVConfig
13
+ from .resampler import Resampler
14
+
15
+ IMAGENET_INCEPTION_MEAN = (0.5, 0.5, 0.5) # timm.data.IMAGENET_INCEPTION_MEAN
16
+ IMAGENET_INCEPTION_STD = (0.5, 0.5, 0.5) # timm.data.IMAGENET_INCEPTION_STD
17
+
18
+ class MiniCPMVPreTrainedModel(LlamaPreTrainedModel):
19
+ config_class = MiniCPMVConfig
20
+
21
+
22
+ class MiniCPMV(MiniCPMVPreTrainedModel):
23
+ def __init__(self, config):
24
+ super().__init__(config)
25
+
26
+ self.llm = LlamaForCausalLM(config)
27
+ self.vpm = self.init_vision_module()
28
+ self.vision_dim = self.vpm.embed_dim
29
+ self.embed_dim = self.llm.config.hidden_size
30
+ self.resampler = self.init_resampler(self.embed_dim, self.vision_dim)
31
+ self.transform = self.init_transform()
32
+
33
+ def init_vision_module(self):
34
+ # same as HuggingFaceM4/siglip-so400m-14-980-flash-attn2-navit
35
+ model = Idefics2VisionTransformer(self.config.vision_config)
36
+ if self.config.drop_vision_last_layer:
37
+ model.encoder.layers = model.encoder.layers[:-1]
38
+
39
+ setattr(model, 'embed_dim', model.embeddings.embed_dim)
40
+ setattr(model, 'patch_size', model.embeddings.patch_size)
41
+
42
+ return model
43
+
44
+ def init_resampler(self, embed_dim, vision_dim):
45
+ return Resampler(
46
+ num_queries=self.config.query_num,
47
+ embed_dim=embed_dim,
48
+ num_heads=embed_dim // 128,
49
+ kv_dim=vision_dim,
50
+ adaptive=True
51
+ )
52
+
53
+ def init_transform(self):
54
+ return transforms.Compose(
55
+ [
56
+ transforms.ToTensor(),
57
+ transforms.Normalize(
58
+ mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD
59
+ ),
60
+ ]
61
+ )
62
+
63
+ def get_input_embeddings(self):
64
+ return self.llm.get_input_embeddings()
65
+
66
+ def set_input_embeddings(self, value):
67
+ self.llm.embed_tokens = value
68
+
69
+ def get_output_embeddings(self):
70
+ return self.llm.lm_head
71
+
72
+ def set_output_embeddings(self, new_embeddings):
73
+ self.llm.lm_head = new_embeddings
74
+
75
+ def set_decoder(self, decoder):
76
+ self.llm = decoder
77
+
78
+ def get_decoder(self):
79
+ return self.llm
80
+
81
+ def get_vllm_embedding(self, data):
82
+ if 'vision_hidden_states' not in data:
83
+ dtype = self.llm.model.embed_tokens.weight.dtype
84
+ device = self.llm.model.embed_tokens.weight.device
85
+ tgt_sizes = data['tgt_sizes']
86
+ pixel_values_list = data['pixel_values']
87
+ vision_hidden_states = []
88
+ all_pixel_values = []
89
+ img_cnt = []
90
+ for pixel_values in pixel_values_list:
91
+ img_cnt.append(len(pixel_values))
92
+ all_pixel_values.extend([i.flatten(end_dim=1).permute(1, 0) for i in pixel_values])
93
+
94
+ # exist image
95
+ if all_pixel_values:
96
+ tgt_sizes = torch.vstack(tgt_sizes).type(torch.int32)
97
+
98
+ if self.config.batch_vision_input:
99
+ max_patches = torch.max(tgt_sizes[:, 0] * tgt_sizes[:, 1])
100
+
101
+ all_pixel_values = torch.nn.utils.rnn.pad_sequence(all_pixel_values, batch_first=True,
102
+ padding_value=0.0)
103
+ B, L, _ = all_pixel_values.shape
104
+ all_pixel_values = all_pixel_values.permute(0, 2, 1).reshape(B, 3, -1, L)
105
+
106
+ patch_attn_mask = torch.zeros((B, 1, max_patches), dtype=torch.bool, device=device)
107
+ for i in range(B):
108
+ patch_attn_mask[i, :tgt_sizes[i][0] * tgt_sizes[i][1]] = True
109
+
110
+ vision_embedding = self.vpm(all_pixel_values.type(dtype), patch_attention_mask=patch_attn_mask).last_hidden_state
111
+ vision_embedding = self.resampler(vision_embedding, tgt_sizes)
112
+ else:
113
+ # get vision_embedding foreach
114
+ vision_embedding = []
115
+ for single_tgt_size, single_pixel_values in zip(tgt_sizes, all_pixel_values):
116
+ single_pixel_values = single_pixel_values.unsqueeze(0)
117
+ B, L, _ = single_pixel_values.shape
118
+ single_pixel_values = single_pixel_values.permute(0, 2, 1).reshape(B, 3, -1, L)
119
+ single_vision_embedding = self.vpm(single_pixel_values.type(dtype)).last_hidden_state
120
+ single_vision_embedding = self.resampler(single_vision_embedding, single_tgt_size.unsqueeze(0))
121
+ vision_embedding.append(single_vision_embedding)
122
+ vision_embedding = torch.vstack(vision_embedding)
123
+
124
+ start = 0
125
+ for pixel_values in pixel_values_list:
126
+ img_cnt = len(pixel_values)
127
+ if img_cnt > 0:
128
+ vision_hidden_states.append(vision_embedding[start: start + img_cnt])
129
+ start += img_cnt
130
+ else:
131
+ vision_hidden_states.append([])
132
+ else: # no image
133
+ if self.training:
134
+ dummy_image = torch.zeros(
135
+ (1, 3, 224, 224),
136
+ device=device, dtype=dtype
137
+ )
138
+ tgt_sizes = torch.Tensor([[(224 // self.config.patch_size), math.ceil(224 / self.config.patch_size)]]).type(torch.int32)
139
+ dummy_feature = self.resampler(self.vpm(dummy_image).last_hidden_state, tgt_sizes)
140
+ else:
141
+ dummy_feature = []
142
+ for _ in range(len(pixel_values_list)):
143
+ vision_hidden_states.append(dummy_feature)
144
+
145
+ else:
146
+ vision_hidden_states = data['vision_hidden_states']
147
+
148
+ if hasattr(self.llm.config, 'scale_emb'):
149
+ vllm_embedding = self.llm.model.embed_tokens(data['input_ids']) * self.llm.config.scale_emb
150
+ else:
151
+ vllm_embedding = self.llm.model.embed_tokens(data['input_ids'])
152
+
153
+ vision_hidden_states = [i.type(vllm_embedding.dtype) if isinstance(
154
+ i, torch.Tensor) else i for i in vision_hidden_states]
155
+
156
+ bs = len(data['input_ids'])
157
+ for i in range(bs):
158
+ cur_vs_hs = vision_hidden_states[i]
159
+ if len(cur_vs_hs) > 0:
160
+ cur_vllm_emb = vllm_embedding[i]
161
+ cur_image_bound = data['image_bound'][i]
162
+ if len(cur_image_bound) > 0:
163
+ image_indices = torch.stack(
164
+ [torch.arange(r[0], r[1], dtype=torch.long) for r in cur_image_bound]
165
+ ).to(vllm_embedding.device)
166
+
167
+ cur_vllm_emb.scatter_(0, image_indices.view(-1, 1).repeat(1, cur_vllm_emb.shape[-1]),
168
+ cur_vs_hs.view(-1, cur_vs_hs.shape[-1]))
169
+ elif self.training:
170
+ cur_vllm_emb += cur_vs_hs[0].mean() * 0
171
+
172
+ return vllm_embedding, vision_hidden_states
173
+
174
+ def forward(self, data, **kwargs):
175
+ vllm_embedding, vision_hidden_states = self.get_vllm_embedding(data)
176
+ position_ids = data["position_ids"]
177
+ if position_ids.dtype != torch.int64:
178
+ position_ids = position_ids.long()
179
+
180
+ return self.llm(
181
+ input_ids=None,
182
+ position_ids=position_ids,
183
+ inputs_embeds=vllm_embedding,
184
+ **kwargs
185
+ )
186
+
187
+ def _decode_text(self, result_ids, tokenizer):
188
+ result_text = []
189
+ for result in result_ids:
190
+ result = result[result != 0]
191
+ if result[0] == tokenizer.bos_id:
192
+ result = result[1:]
193
+ if result[-1] == tokenizer.eos_id or result[-1] == tokenizer.eot_id:
194
+ result = result[:-1]
195
+ result_text.append(tokenizer.decode(result).strip())
196
+ return result_text
197
+
198
+ def _decode(self, inputs_embeds, tokenizer, decode_text=False, **kwargs):
199
+ terminators = [
200
+ tokenizer.eos_token_id,
201
+ tokenizer.convert_tokens_to_ids("<|eot_id|>")
202
+ ]
203
+ output = self.llm.generate(
204
+ inputs_embeds=inputs_embeds,
205
+ pad_token_id=0,
206
+ eos_token_id=terminators,
207
+ **kwargs
208
+ )
209
+ if decode_text:
210
+ return self._decode_text(output, tokenizer)
211
+ return output
212
+
213
+ def _decode_stream(self, inputs_embeds, tokenizer, **kwargs):
214
+ terminators = [
215
+ tokenizer.eos_token_id,
216
+ tokenizer.convert_tokens_to_ids("<|eot_id|>")
217
+ ]
218
+ streamer = TextIteratorStreamer(tokenizer=tokenizer)
219
+ generation_kwargs = {
220
+ 'inputs_embeds': inputs_embeds,
221
+ 'pad_token_id': 0,
222
+ 'eos_token_id': terminators,
223
+ 'streamer': streamer
224
+ }
225
+ generation_kwargs.update(kwargs)
226
+
227
+ thread = Thread(target=self.llm.generate, kwargs=generation_kwargs)
228
+ thread.start()
229
+
230
+ return streamer
231
+
232
+ def generate(
233
+ self,
234
+ model_inputs,
235
+ tokenizer=None,
236
+ vision_hidden_states=None,
237
+ stream=False,
238
+ **kwargs
239
+ ):
240
+ bs = len(model_inputs["input_ids"])
241
+ img_list = model_inputs["pixel_values"]
242
+ tgt_sizes = model_inputs["tgt_sizes"]
243
+ if img_list is None:
244
+ img_list = [[] for i in range(bs)]
245
+ assert bs == len(img_list)
246
+ if vision_hidden_states is None:
247
+ pixel_values = []
248
+ for i in range(bs):
249
+ img_inps = []
250
+ for img in img_list[i]:
251
+ img_inps.append(img.to(self.device))
252
+ if img_inps:
253
+ pixel_values.append(img_inps)
254
+ else:
255
+ pixel_values.append([])
256
+ model_inputs["pixel_values"] = pixel_values
257
+ model_inputs['tgt_sizes'] = tgt_sizes
258
+ else:
259
+ model_inputs["vision_hidden_states"] = vision_hidden_states
260
+
261
+ (
262
+ input_embeds,
263
+ vision_hidden_states,
264
+ ) = self.get_vllm_embedding(model_inputs)
265
+
266
+ # output_ids = self._decode(input_embeds, tokenizer, **kwargs)
267
+ if stream:
268
+ kwargs.pop("decode_text")
269
+ result = self._decode_stream(input_embeds, tokenizer, **kwargs)
270
+ else:
271
+ result = self._decode(input_embeds, tokenizer, **kwargs)
272
+
273
+ return result
274
+
275
+ def chat(
276
+ self,
277
+ image,
278
+ msgs,
279
+ tokenizer,
280
+ processor=None,
281
+ vision_hidden_states=None,
282
+ max_new_tokens=1024,
283
+ sampling=True,
284
+ max_inp_length=2048,
285
+ system_prompt='',
286
+ stream=False,
287
+ **kwargs
288
+ ):
289
+ if processor is None:
290
+ processor = AutoProcessor.from_pretrained(self.config._name_or_path, trust_remote_code=True)
291
+ if isinstance(msgs, str):
292
+ msgs = json.loads(msgs)
293
+ copy_msgs = deepcopy(msgs)
294
+
295
+ assert len(msgs) > 0, "msgs is empty"
296
+ assert sampling or not stream, "if use stream mode, make sure sampling=True"
297
+
298
+ if image is not None and isinstance(copy_msgs[0]["content"], str):
299
+ # copy_msgs[0]['content'] = '(<image>./</image>)\n' + copy_msgs[0]['content']
300
+ copy_msgs[0]["content"] = [image, copy_msgs[0]["content"]]
301
+
302
+ images = []
303
+ for i, msg in enumerate(copy_msgs):
304
+ role = msg["role"]
305
+ content = msg["content"]
306
+ assert role in ["user", "assistant"]
307
+ if i == 0:
308
+ assert role == "user", "The role of first msg should be user"
309
+ if isinstance(content, str):
310
+ content = [content]
311
+ cur_msgs = []
312
+ for c in content:
313
+ if isinstance(c, Image.Image):
314
+ images.append(c)
315
+ cur_msgs.append("(<image>./</image>)")
316
+ elif isinstance(c, str):
317
+ cur_msgs.append(c)
318
+ msg["content"] = "\n".join(cur_msgs)
319
+
320
+ if system_prompt:
321
+ sys_msg = {'role': 'system', 'content': system_prompt}
322
+ copy_msgs = [sys_msg] + copy_msgs
323
+
324
+ prompt = processor.tokenizer.apply_chat_template(copy_msgs, tokenize=False, add_generation_prompt=True)
325
+ inputs = processor(prompt, images, return_tensors="pt", max_length=max_inp_length).to(self.device)
326
+
327
+ if sampling:
328
+ generation_config = {
329
+ "top_p": 0.8,
330
+ "top_k": 100,
331
+ "temperature": 0.7,
332
+ "do_sample": True,
333
+ "repetition_penalty": 1.05
334
+ }
335
+ else:
336
+ generation_config = {
337
+ "num_beams": 3,
338
+ "repetition_penalty": 1.2,
339
+ }
340
+
341
+ generation_config.update(
342
+ (k, kwargs[k]) for k in generation_config.keys() & kwargs.keys()
343
+ )
344
+ with torch.inference_mode():
345
+ res = self.generate(
346
+ inputs,
347
+ tokenizer=tokenizer,
348
+ max_new_tokens=max_new_tokens,
349
+ vision_hidden_states=vision_hidden_states,
350
+ stream=stream,
351
+ decode_text=True,
352
+ **generation_config
353
+ )
354
+
355
+ if stream:
356
+ def stream_gen():
357
+ for text in res:
358
+ text = text.replace(tokenizer.eot_token, '').replace(tokenizer.eos_token, '')
359
+ yield text
360
+ return stream_gen()
361
+
362
+ else:
363
+ answer = res[0]
364
+ return answer
preprocessor_config.json ADDED
@@ -0,0 +1,20 @@
1
+ {
2
+ "image_processor_type": "MiniCPMVImageProcessor",
3
+ "auto_map": {
4
+ "AutoProcessor": "processing_minicpmv.MiniCPMVProcessor",
5
+ "AutoImageProcessor": "image_processing_minicpmv.MiniCPMVImageProcessor"
6
+ },
7
+ "processor_class": "MiniCPMVProcessor",
8
+ "max_slice_nums": 9,
9
+ "scale_resolution": 448,
10
+ "patch_size": 14,
11
+ "image_feature_size": 96,
12
+ "im_start": "<image>",
13
+ "im_end": "</image>",
14
+ "slice_start": "<slice>",
15
+ "slice_end": "</slice>",
16
+ "unk": "<unk>",
17
+ "norm_mean": [0.5, 0.5, 0.5],
18
+ "norm_std": [0.5, 0.5, 0.5],
19
+ "version": 2.5
20
+ }
processing_minicpmv.py ADDED
@@ -0,0 +1,244 @@
1
+ # coding=utf-8
2
+ # Copyright 2024 The HuggingFace Inc. team.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """
16
+ Processor class for MiniCPMV.
17
+ """
18
+
19
+ from typing import List, Optional, Union, Dict, Any
20
+ import torch
21
+ import re
22
+
23
+ from transformers.image_processing_utils import BatchFeature
24
+ from transformers.image_utils import ImageInput
25
+ from transformers.processing_utils import ProcessorMixin
26
+ from transformers.tokenization_utils_base import PaddingStrategy, PreTokenizedInput, TextInput, TruncationStrategy
27
+ from transformers.utils import TensorType, requires_backends, is_torch_dtype, is_torch_device
28
+
29
+ from .image_processing_minicpmv import MiniCPMVBatchFeature
30
+
31
+
32
+ class MiniCPMVProcessor(ProcessorMixin):
33
+ r"""
34
+ Constructs a MiniCPMV processor which wraps a MiniCPMV image processor and a MiniCPMV tokenizer into a single processor.
35
+
36
+ [`MiniCPMVProcessor`] offers all the functionalities of [`MiniCPMVImageProcessor`] and [`LlamaTokenizerWrapper`]. See the
37
+ [`~MiniCPMVProcessor.__call__`] and [`~MiniCPMVProcessor.decode`] for more information.
38
+
39
+ Args:
40
+ image_processor ([`MiniCPMVImageProcessor`], *optional*):
41
+ The image processor is a required input.
42
+ tokenizer ([`LlamaTokenizerWrapper`], *optional*):
43
+ The tokenizer is a required input.
44
+ """
45
+ attributes = ["image_processor", "tokenizer"]
46
+ image_processor_class = "AutoImageProcessor"
47
+ tokenizer_class = "AutoTokenizer"
48
+
49
+ def __init__(self, image_processor=None, tokenizer=None):
50
+ super().__init__(image_processor, tokenizer)
51
+ self.version = image_processor.version
52
+
53
+ def __call__(
54
+ self,
55
+ text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]],
56
+ images: ImageInput = None,
57
+ padding: Union[bool, str, PaddingStrategy] = False,
58
+ truncation: Union[bool, str, TruncationStrategy] = None,
59
+ max_length: Optional[int] = None,
60
+ do_pad: Optional[bool] = True,
61
+ return_tensors: Optional[Union[str, TensorType]] = TensorType.PYTORCH,
62
+ ) -> MiniCPMVBatchFeature:
63
+ """
64
+ Only support for single input for now. Batched input is coming soon.
65
+
66
+ Args:
67
+ text (`str`):
68
+ The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
69
+ (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
70
+ `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
71
+ images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]`):
72
+ The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
73
+ tensor. Both channels-first and channels-last formats are supported.
74
+ padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `False`):
75
+ Select a strategy to pad the returned sequences (according to the model's padding side and padding
76
+ index) among:
77
+ - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
78
+ sequence if provided).
79
+ - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum
80
+ acceptable input length for the model if that argument is not provided.
81
+ - `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different
82
+ lengths).
83
+ max_length (`int`, *optional*):
84
+ Maximum length of the returned list and optionally padding length (see above).
85
+ do_pad (`bool`, *optional*, defaults to self.do_pad):
86
+ Whether to pad the image. If `True` will pad the images in the batch to the largest image in the batch
87
+ and create a pixel mask. Padding will be applied to the bottom and right of the image with zeros.
88
+ truncation (`bool`, *optional*):
89
+ Activates truncation to cut input sequences longer than `max_length` to `max_length`.
90
+ return_tensors (`str` or [`~utils.TensorType`], *optional*):
91
+ If set, will return tensors of a particular framework. Acceptable values are:
92
+
93
+ - `'tf'`: Return TensorFlow `tf.constant` objects.
94
+ - `'pt'`: Return PyTorch `torch.Tensor` objects.
95
+ - `'np'`: Return NumPy `np.ndarray` objects.
96
+ - `'jax'`: Return JAX `jnp.ndarray` objects.
97
+
98
+ Returns:
99
+ [`BatchFeature`]: A [`BatchFeature`] with the following fields:
100
+
101
+ - **input_ids** -- List of token ids to be fed to a model. Returned when `text` is not `None`.
102
+ - **attention_mask** -- List of indices specifying which tokens should be attended to by the model (when
103
+ `return_attention_mask=True` or if *"attention_mask"* is in `self.model_input_names` and if `text` is not
104
+ `None`).
105
+ - **pixel_values** -- Pixel values to be fed to a model. Returned when `images` is not `None`.
106
+ """
107
+ if images is not None:
108
+ image_inputs = self.image_processor(images, do_pad=do_pad, return_tensors=return_tensors)
109
+ return self._convert_images_texts_to_inputs(image_inputs, text, max_length=max_length)
110
+
111
+ # Copied from transformers.models.clip.processing_clip.CLIPProcessor.batch_decode with CLIP->Llama
112
+ def batch_decode(self, *args, **kwargs):
113
+ """
114
+ This method forwards all its arguments to LlamaTokenizerFast's [`~PreTrainedTokenizer.batch_decode`]. Please
115
+ refer to the docstring of this method for more information.
116
+ """
117
+ output_ids = args[0]
118
+ result_text = []
119
+ for result in output_ids:
120
+ result = result[result != 0]
121
+ if result[0] == self.tokenizer.bos_id:
122
+ result = result[1:]
123
+ if result[-1] == self.tokenizer.eos_id:
124
+ result = result[:-1]
125
+ result_text.append(self.tokenizer.decode(result, *args[1:], **kwargs).strip())
126
+ return result_text
127
+ # return self.tokenizer.batch_decode(*args, **kwargs)
128
+
129
+ # Copied from transformers.models.clip.processing_clip.CLIPProcessor.decode with CLIP->Llama
130
+ def decode(self, *args, **kwargs):
131
+ """
132
+ This method forwards all its arguments to LlamaTokenizerFast's [`~PreTrainedTokenizer.decode`]. Please refer to
133
+ the docstring of this method for more information.
134
+ """
135
+ result = args[0]
136
+ result = result[result != 0]
137
+ if result[0] == self.tokenizer.bos_id:
138
+ result = result[1:]
139
+ if result[-1] == self.tokenizer.eos_id or (hasattr(self.tokenizer, "eot_id") and result[-1] == self.tokenizer.eot_id):
140
+ result = result[:-1]
141
+ return self.tokenizer.decode(result, *args[1:], **kwargs).strip()
142
+
143
+ def _convert(
144
+ self, input_str, max_inp_length: Optional[int] = None
145
+ ):
146
+ if self.version == 2.5 or self.tokenizer.add_bos_token:
147
+ input_ids = self.tokenizer.encode(input_str)
148
+ else:
149
+ input_ids = [self.tokenizer.bos_id] + self.tokenizer.encode(input_str)
150
+ if max_inp_length is not None:
151
+ input_ids = input_ids[:max_inp_length]
152
+ input_ids = torch.tensor(input_ids, dtype=torch.int32)
153
+
154
+ image_start_tokens = torch.where(input_ids == self.tokenizer.im_start_id)[0]
155
+ image_start_tokens += 1
156
+ image_end_tokens = torch.where(input_ids == self.tokenizer.im_end_id)[0]
157
+ valid_image_nums = max(len(image_start_tokens), len(image_end_tokens))
158
+ image_bounds = torch.hstack(
159
+ [
160
+ image_start_tokens[:valid_image_nums].unsqueeze(-1),
161
+ image_end_tokens[:valid_image_nums].unsqueeze(-1),
162
+ ]
163
+ )
164
+ return input_ids.unsqueeze(0), image_bounds
165
+
166
+ def _convert_images_texts_to_inputs(self, images, texts, do_pad=False, truncation=None, max_length=None, return_tensors=None):
167
+ if not len(images):
168
+ model_inputs = self.tokenizer(texts, return_tensors=return_tensors, padding=do_pad, truncation=truncation, max_length=max_length)
169
+ return MiniCPMVBatchFeature(data={**model_inputs})
170
+
171
+ pattern = "(<image>./</image>)"
172
+ images, image_sizes, tgt_sizes = images["pixel_values"], images["image_sizes"], images["tgt_sizes"]
173
+
174
+ image_tags = re.findall(pattern, texts)
175
+ assert len(image_tags) == len(image_sizes[0])
176
+ text_chunks = texts.split(pattern)
177
+ final_texts = ""
178
+ for i in range(len(image_tags)):
179
+ final_texts = final_texts + text_chunks[i] + self.image_processor.get_slice_image_placeholder(image_sizes[0][i])
180
+ final_texts += text_chunks[-1]
181
+ input_ids, image_bounds = self._convert(final_texts, max_length)
182
+ return MiniCPMVBatchFeature(data={
183
+ "input_ids": input_ids,
184
+ "pixel_values": images,
185
+ "image_sizes": image_sizes,
186
+ "image_bound": [image_bounds],
187
+ "tgt_sizes": tgt_sizes
188
+ })
189
+
190
+ @property
191
+ # Copied from transformers.models.clip.processing_clip.CLIPProcessor.model_input_names
192
+ def model_input_names(self):
193
+ tokenizer_input_names = self.tokenizer.model_input_names
194
+ image_processor_input_names = self.image_processor.model_input_names
195
+ return list(dict.fromkeys(tokenizer_input_names + image_processor_input_names))
196
+
197
+
198
+ def pad(self, orig_items, key, max_length=None, padding_value=0, padding_side="left"):
199
+ items = []
200
+ if isinstance(orig_items[0][key], list):
201
+ assert isinstance(orig_items[0][key][0], torch.Tensor)
202
+ for it in orig_items:
203
+ for tr in it[key]:
204
+ items.append({key: tr})
205
+ else:
206
+ assert isinstance(orig_items[0][key], torch.Tensor)
207
+ items = orig_items
208
+
209
+ batch_size = len(items)
210
+ shape = items[0][key].shape
211
+ dim = len(shape)
212
+ assert dim <= 3
213
+ if max_length is None:
214
+ max_length = 0
215
+ max_length = max(max_length, max(item[key].shape[-1] for item in items))
216
+ min_length = min(item[key].shape[-1] for item in items)
217
+ dtype = items[0][key].dtype
218
+
219
+ if dim == 1:
220
+ return torch.cat([item[key] for item in items], dim=0)
221
+ elif dim == 2:
222
+ if max_length == min_length:
223
+ return torch.cat([item[key] for item in items], dim=0)
224
+ tensor = torch.zeros((batch_size, max_length), dtype=dtype) + padding_value
225
+ else:
226
+ tensor = (
227
+ torch.zeros((batch_size, max_length, shape[-1]), dtype=dtype)
228
+ + padding_value
229
+ )
230
+
231
+ for i, item in enumerate(items):
232
+ if dim == 2:
233
+ if padding_side == "left":
234
+ tensor[i, -len(item[key][0]) :] = item[key][0].clone()
235
+ else:
236
+ tensor[i, : len(item[key][0])] = item[key][0].clone()
237
+ elif dim == 3:
238
+ if padding_side == "left":
239
+ tensor[i, -len(item[key][0]) :, :] = item[key][0].clone()
240
+ else:
241
+ tensor[i, : len(item[key][0]), :] = item[key][0].clone()
242
+
243
+ return tensor
244
+
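
A minimal usage sketch of the processor class above (illustrative only, not part of the committed files): the checkpoint path and image file are placeholders, and it assumes the repository's `auto_map` lets `AutoProcessor` resolve to this class when loaded with `trust_remote_code=True`; whether a bare image or a one-element list is expected depends on the image processor defined elsewhere in this commit.

```python
# Hedged sketch: paths and prompt are assumptions, not values from this repository.
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("path/to/this/checkpoint", trust_remote_code=True)

image = Image.open("example.jpg").convert("RGB")                 # any RGB image
prompt = "(<image>./</image>)\nWhat is shown in this picture?"   # one image tag per image

inputs = processor(prompt, images=[image])
# inputs is a MiniCPMVBatchFeature carrying input_ids, pixel_values,
# image_sizes, image_bound and tgt_sizes, ready to pass to the model.
```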
resampler.py ADDED
@@ -0,0 +1,812 @@
1
+ from functools import partial
2
+ import numpy as np
3
+ import warnings
4
+ from typing import Optional, Tuple
5
+ import torch
6
+ from torch import nn
7
+ from torch import Tensor
8
+ import torch.nn.functional as F
9
+ from torch.nn.functional import *
10
+ from torch.nn.modules.activation import *
11
+ from torch.nn.init import trunc_normal_
12
+ from torch.nn.init import constant_, xavier_normal_, xavier_uniform_
13
+ from transformers import PreTrainedModel
14
+ from transformers.integrations import is_deepspeed_zero3_enabled
15
+
16
+ def get_2d_sincos_pos_embed(embed_dim, image_size):
17
+ """
18
+ image_size: image_size or (image_height, image_width)
19
+ return:
20
+ pos_embed: [image_height, image_width, embed_dim]
21
+ """
22
+ if isinstance(image_size, int):
23
+ grid_h_size, grid_w_size = image_size, image_size
24
+ else:
25
+ grid_h_size, grid_w_size = image_size[0], image_size[1]
26
+
27
+ grid_h = np.arange(grid_h_size, dtype=np.float32)
28
+ grid_w = np.arange(grid_w_size, dtype=np.float32)
29
+ grid = np.meshgrid(grid_w, grid_h) # here w goes first
30
+ grid = np.stack(grid, axis=0)
31
+
32
+ pos_embed = get_2d_sincos_pos_embed_from_grid(embed_dim, grid)
33
+ return pos_embed
34
+
35
+
36
+ def get_2d_sincos_pos_embed_from_grid(embed_dim, grid):
37
+ assert embed_dim % 2 == 0
38
+
39
+ # use half of dimensions to encode grid_h
40
+ emb_h = get_1d_sincos_pos_embed_from_grid_new(embed_dim // 2, grid[0]) # (H, W, D/2)
41
+ emb_w = get_1d_sincos_pos_embed_from_grid_new(embed_dim // 2, grid[1]) # (H, W, D/2)
42
+
43
+ emb = np.concatenate([emb_h, emb_w], axis=-1) # (H, W, D)
44
+ return emb
45
+
46
+
47
+ def get_1d_sincos_pos_embed_from_grid_new(embed_dim, pos):
48
+ """
49
+ embed_dim: output dimension for each position
50
+ pos: a list of positions to be encoded: size (H, W)
51
+ out: (H, W, D)
52
+ """
53
+ assert embed_dim % 2 == 0
54
+ omega = np.arange(embed_dim // 2, dtype=np.float32)
55
+ omega /= embed_dim / 2.
56
+ omega = 1. / 10000 ** omega # (D/2,)
57
+
58
+ out = np.einsum('hw,d->hwd', pos, omega) # (H, W, D/2), outer product
59
+
60
+ emb_sin = np.sin(out) # (H, W, D/2)
61
+ emb_cos = np.cos(out) # (H, W, D/2)
62
+
63
+ emb = np.concatenate([emb_sin, emb_cos], axis=-1) # (H, W, D)
64
+ return emb
65
+
66
+
67
+ class Resampler(nn.Module):
68
+ """
69
+ A 2D perceiver-resampler network with a single cross-attention layer,
70
+ using learnable queries and a 2D sincos positional embedding.
71
+ Outputs:
72
+ A tensor with the shape of (batch_size, num_queries, embed_dim)
73
+ """
74
+
75
+ def __init__(
76
+ self,
77
+ num_queries,
78
+ embed_dim,
79
+ num_heads,
80
+ kv_dim=None,
81
+ norm_layer=partial(nn.LayerNorm, eps=1e-6),
82
+ adaptive=False,
83
+ max_size=(70, 70),
84
+ ):
85
+ super().__init__()
86
+ self.num_queries = num_queries
87
+ self.embed_dim = embed_dim
88
+ self.num_heads = num_heads
89
+ self.adaptive = adaptive
90
+ self.max_size = max_size
91
+
92
+ self.query = nn.Parameter(torch.zeros(self.num_queries, embed_dim))
93
+
94
+ if kv_dim is not None and kv_dim != embed_dim:
95
+ self.kv_proj = nn.Linear(kv_dim, embed_dim, bias=False)
96
+ else:
97
+ self.kv_proj = nn.Identity()
98
+
99
+ self.attn = MultiheadAttention(embed_dim, num_heads)
100
+ self.ln_q = norm_layer(embed_dim)
101
+ self.ln_kv = norm_layer(embed_dim)
102
+
103
+ self.ln_post = norm_layer(embed_dim)
104
+ self.proj = nn.Parameter((embed_dim ** -0.5) * torch.randn(embed_dim, embed_dim))
105
+
106
+ self._set_2d_pos_cache(self.max_size)
107
+
108
+ def _set_2d_pos_cache(self, max_size, device='cpu'):
109
+ if is_deepspeed_zero3_enabled():
110
+ device='cuda'
111
+ pos_embed = torch.from_numpy(get_2d_sincos_pos_embed(self.embed_dim, max_size)).float().to(device)
112
+ self.register_buffer("pos_embed", pos_embed, persistent=False)
113
+
114
+ def _adjust_pos_cache(self, tgt_sizes, device):
115
+ max_h = torch.max(tgt_sizes[:, 0])
116
+ max_w = torch.max(tgt_sizes[:, 1])
117
+ if max_h > self.max_size[0] or max_w > self.max_size[1]:
118
+ self.max_size = [max(max_h, self.max_size[0]), max(max_w, self.max_size[1])]
119
+ self._set_2d_pos_cache(self.max_size, device)
120
+
121
+ def _init_weights(self, m):
122
+ if isinstance(m, nn.Linear):
123
+ trunc_normal_(m.weight, std=.02)
124
+ if isinstance(m, nn.Linear) and m.bias is not None:
125
+ nn.init.constant_(m.bias, 0)
126
+ elif isinstance(m, nn.LayerNorm):
127
+ nn.init.constant_(m.bias, 0)
128
+ nn.init.constant_(m.weight, 1.0)
129
+
130
+ def forward(self, x, tgt_sizes=None):
131
+ assert x.shape[0] == tgt_sizes.shape[0]
132
+ bs = x.shape[0]
133
+
134
+ device = x.device
135
+ dtype = x.dtype
136
+
137
+ patch_len = tgt_sizes[:, 0] * tgt_sizes[:, 1]
138
+
139
+ self._adjust_pos_cache(tgt_sizes, device=device)
140
+
141
+ max_patch_len = torch.max(patch_len)
142
+ key_padding_mask = torch.zeros((bs, max_patch_len), dtype=torch.bool, device=device)
143
+
144
+ pos_embed = []
145
+ for i in range(bs):
146
+ tgt_h, tgt_w = tgt_sizes[i]
147
+ pos_embed.append(self.pos_embed[:tgt_h, :tgt_w, :].reshape((tgt_h * tgt_w, -1)).to(dtype)) # patches * D
148
+ key_padding_mask[i, patch_len[i]:] = True
149
+
150
+ pos_embed = torch.nn.utils.rnn.pad_sequence(
151
+ pos_embed, batch_first=True, padding_value=0.0).permute(1, 0, 2) # BLD => L * B * D
152
+
153
+ x = self.kv_proj(x) # B * L * D
154
+ x = self.ln_kv(x).permute(1, 0, 2) # L * B * D
155
+
156
+ q = self.ln_q(self.query) # Q * D
157
+
158
+ out = self.attn(
159
+ self._repeat(q, bs), # Q * B * D
160
+ x + pos_embed, # L * B * D + L * B * D
161
+ x,
162
+ key_padding_mask=key_padding_mask)[0]
163
+ # out: Q * B * D
164
+ x = out.permute(1, 0, 2) # B * Q * D
165
+
166
+ x = self.ln_post(x)
167
+ x = x @ self.proj
168
+ return x
169
+
170
+ def _repeat(self, query, N: int):
171
+ return query.unsqueeze(1).repeat(1, N, 1)
172
+
173
+
174
+ class MultiheadAttention(nn.MultiheadAttention):
175
+ def __init__(self, embed_dim, num_heads, dropout=0., bias=True, add_bias_kv=False,
176
+ add_zero_attn=False, kdim=None, vdim=None, batch_first=False, device=None, dtype=None):
177
+ super().__init__(embed_dim, num_heads, dropout, bias, add_bias_kv, add_zero_attn, kdim, vdim, batch_first, device, dtype)
178
+
179
+ # rewrite the out_proj layer with nn.Linear
180
+ self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias, device=device, dtype=dtype)
181
+
182
+ def forward(
183
+ self,
184
+ query: Tensor,
185
+ key: Tensor,
186
+ value: Tensor,
187
+ key_padding_mask: Optional[Tensor] = None,
188
+ need_weights: bool = True,
189
+ attn_mask: Optional[Tensor] = None,
190
+ average_attn_weights: bool = True,
191
+ is_causal : bool = False) -> Tuple[Tensor, Optional[Tensor]]:
192
+ why_not_fast_path = ''
193
+ if ((attn_mask is not None and torch.is_floating_point(attn_mask))
194
+ or (key_padding_mask is not None) and torch.is_floating_point(key_padding_mask)):
195
+ why_not_fast_path = "floating-point masks are not supported for fast path."
196
+
197
+ is_batched = query.dim() == 3
198
+
199
+ key_padding_mask = F._canonical_mask(
200
+ mask=key_padding_mask,
201
+ mask_name="key_padding_mask",
202
+ other_type=F._none_or_dtype(attn_mask),
203
+ other_name="attn_mask",
204
+ target_type=query.dtype
205
+ )
206
+
207
+ attn_mask = F._canonical_mask(
208
+ mask=attn_mask,
209
+ mask_name="attn_mask",
210
+ other_type=None,
211
+ other_name="",
212
+ target_type=query.dtype,
213
+ check_other=False,
214
+ )
215
+
216
+
217
+ if not is_batched:
218
+ why_not_fast_path = f"input not batched; expected query.dim() of 3 but got {query.dim()}"
219
+ elif query is not key or key is not value:
220
+ # When lifting this restriction, don't forget to either
221
+ # enforce that the dtypes all match or test cases where
222
+ # they don't!
223
+ why_not_fast_path = "non-self attention was used (query, key, and value are not the same Tensor)"
224
+ elif self.in_proj_bias is not None and query.dtype != self.in_proj_bias.dtype:
225
+ why_not_fast_path = f"dtypes of query ({query.dtype}) and self.in_proj_bias ({self.in_proj_bias.dtype}) don't match"
226
+ elif self.in_proj_weight is None:
227
+ why_not_fast_path = "in_proj_weight was None"
228
+ elif query.dtype != self.in_proj_weight.dtype:
229
+ # this case will fail anyway, but at least they'll get a useful error message.
230
+ why_not_fast_path = f"dtypes of query ({query.dtype}) and self.in_proj_weight ({self.in_proj_weight.dtype}) don't match"
231
+ elif self.training:
232
+ why_not_fast_path = "training is enabled"
233
+ elif (self.num_heads % 2) != 0:
234
+ why_not_fast_path = "self.num_heads is not even"
235
+ elif not self.batch_first:
236
+ why_not_fast_path = "batch_first was not True"
237
+ elif self.bias_k is not None:
238
+ why_not_fast_path = "self.bias_k was not None"
239
+ elif self.bias_v is not None:
240
+ why_not_fast_path = "self.bias_v was not None"
241
+ elif self.add_zero_attn:
242
+ why_not_fast_path = "add_zero_attn was enabled"
243
+ elif not self._qkv_same_embed_dim:
244
+ why_not_fast_path = "_qkv_same_embed_dim was not True"
245
+ elif query.is_nested and (key_padding_mask is not None or attn_mask is not None):
246
+ why_not_fast_path = "supplying both src_key_padding_mask and src_mask at the same time \
247
+ is not supported with NestedTensor input"
248
+ elif torch.is_autocast_enabled():
249
+ why_not_fast_path = "autocast is enabled"
250
+
251
+ if not why_not_fast_path:
252
+ tensor_args = (
253
+ query,
254
+ key,
255
+ value,
256
+ self.in_proj_weight,
257
+ self.in_proj_bias,
258
+ self.out_proj.weight,
259
+ self.out_proj.bias,
260
+ )
261
+ # We have to use list comprehensions below because TorchScript does not support
262
+ # generator expressions.
263
+ if torch.overrides.has_torch_function(tensor_args):
264
+ why_not_fast_path = "some Tensor argument has_torch_function"
265
+ elif _is_make_fx_tracing():
266
+ why_not_fast_path = "we are running make_fx tracing"
267
+ elif not all(_check_arg_device(x) for x in tensor_args):
268
+ why_not_fast_path = ("some Tensor argument's device is neither one of "
269
+ f"cpu, cuda or {torch.utils.backend_registration._privateuse1_backend_name}")
270
+ elif torch.is_grad_enabled() and any(_arg_requires_grad(x) for x in tensor_args):
271
+ why_not_fast_path = ("grad is enabled and at least one of query or the "
272
+ "input/output projection weights or biases requires_grad")
273
+ if not why_not_fast_path:
274
+ merged_mask, mask_type = self.merge_masks(attn_mask, key_padding_mask, query)
275
+
276
+ if self.in_proj_bias is not None and self.in_proj_weight is not None:
277
+ return torch._native_multi_head_attention(
278
+ query,
279
+ key,
280
+ value,
281
+ self.embed_dim,
282
+ self.num_heads,
283
+ self.in_proj_weight,
284
+ self.in_proj_bias,
285
+ self.out_proj.weight,
286
+ self.out_proj.bias,
287
+ merged_mask,
288
+ need_weights,
289
+ average_attn_weights,
290
+ mask_type)
291
+
292
+ any_nested = query.is_nested or key.is_nested or value.is_nested
293
+ assert not any_nested, ("MultiheadAttention does not support NestedTensor outside of its fast path. " +
294
+ f"The fast path was not hit because {why_not_fast_path}")
295
+
296
+ if self.batch_first and is_batched:
297
+ # make sure that the transpose op does not affect the "is" property
298
+ if key is value:
299
+ if query is key:
300
+ query = key = value = query.transpose(1, 0)
301
+ else:
302
+ query, key = (x.transpose(1, 0) for x in (query, key))
303
+ value = key
304
+ else:
305
+ query, key, value = (x.transpose(1, 0) for x in (query, key, value))
306
+
307
+ if not self._qkv_same_embed_dim:
308
+ attn_output, attn_output_weights = self.multi_head_attention_forward(
309
+ query, key, value, self.embed_dim, self.num_heads,
310
+ self.in_proj_weight, self.in_proj_bias,
311
+ self.bias_k, self.bias_v, self.add_zero_attn,
312
+ self.dropout, self.out_proj.weight, self.out_proj.bias,
313
+ training=self.training,
314
+ key_padding_mask=key_padding_mask, need_weights=need_weights,
315
+ attn_mask=attn_mask,
316
+ use_separate_proj_weight=True,
317
+ q_proj_weight=self.q_proj_weight, k_proj_weight=self.k_proj_weight,
318
+ v_proj_weight=self.v_proj_weight,
319
+ average_attn_weights=average_attn_weights,
320
+ is_causal=is_causal)
321
+ else:
322
+ attn_output, attn_output_weights = self.multi_head_attention_forward(
323
+ query, key, value, self.embed_dim, self.num_heads,
324
+ self.in_proj_weight, self.in_proj_bias,
325
+ self.bias_k, self.bias_v, self.add_zero_attn,
326
+ self.dropout, self.out_proj.weight, self.out_proj.bias,
327
+ training=self.training,
328
+ key_padding_mask=key_padding_mask,
329
+ need_weights=need_weights,
330
+ attn_mask=attn_mask,
331
+ average_attn_weights=average_attn_weights,
332
+ is_causal=is_causal)
333
+ if self.batch_first and is_batched:
334
+ return attn_output.transpose(1, 0), attn_output_weights
335
+ else:
336
+ return attn_output, attn_output_weights
337
+
338
+ def multi_head_attention_forward(
339
+ self,
340
+ query: Tensor,
341
+ key: Tensor,
342
+ value: Tensor,
343
+ embed_dim_to_check: int,
344
+ num_heads: int,
345
+ in_proj_weight: Optional[Tensor],
346
+ in_proj_bias: Optional[Tensor],
347
+ bias_k: Optional[Tensor],
348
+ bias_v: Optional[Tensor],
349
+ add_zero_attn: bool,
350
+ dropout_p: float,
351
+ out_proj_weight: Tensor,
352
+ out_proj_bias: Optional[Tensor],
353
+ training: bool = True,
354
+ key_padding_mask: Optional[Tensor] = None,
355
+ need_weights: bool = True,
356
+ attn_mask: Optional[Tensor] = None,
357
+ use_separate_proj_weight: bool = False,
358
+ q_proj_weight: Optional[Tensor] = None,
359
+ k_proj_weight: Optional[Tensor] = None,
360
+ v_proj_weight: Optional[Tensor] = None,
361
+ static_k: Optional[Tensor] = None,
362
+ static_v: Optional[Tensor] = None,
363
+ average_attn_weights: bool = True,
364
+ is_causal: bool = False,
365
+ ) -> Tuple[Tensor, Optional[Tensor]]:
366
+ tens_ops = (query, key, value, in_proj_weight, in_proj_bias, bias_k, bias_v, out_proj_weight, out_proj_bias)
367
+ if has_torch_function(tens_ops):
368
+ return handle_torch_function(
369
+ multi_head_attention_forward,
370
+ tens_ops,
371
+ query,
372
+ key,
373
+ value,
374
+ embed_dim_to_check,
375
+ num_heads,
376
+ in_proj_weight,
377
+ in_proj_bias,
378
+ bias_k,
379
+ bias_v,
380
+ add_zero_attn,
381
+ dropout_p,
382
+ out_proj_weight,
383
+ out_proj_bias,
384
+ training=training,
385
+ key_padding_mask=key_padding_mask,
386
+ need_weights=need_weights,
387
+ attn_mask=attn_mask,
388
+ is_causal=is_causal,
389
+ use_separate_proj_weight=use_separate_proj_weight,
390
+ q_proj_weight=q_proj_weight,
391
+ k_proj_weight=k_proj_weight,
392
+ v_proj_weight=v_proj_weight,
393
+ static_k=static_k,
394
+ static_v=static_v,
395
+ average_attn_weights=average_attn_weights,
396
+ )
397
+
398
+ is_batched = _mha_shape_check(query, key, value, key_padding_mask, attn_mask, num_heads)
399
+
400
+ # For unbatched input, we unsqueeze at the expected batch-dim to pretend that the input
401
+ # is batched, run the computation and before returning squeeze the
402
+ # batch dimension so that the output doesn't carry this temporary batch dimension.
403
+ if not is_batched:
404
+ # unsqueeze if the input is unbatched
405
+ query = query.unsqueeze(1)
406
+ key = key.unsqueeze(1)
407
+ value = value.unsqueeze(1)
408
+ if key_padding_mask is not None:
409
+ key_padding_mask = key_padding_mask.unsqueeze(0)
410
+
411
+ # set up shape vars
412
+ tgt_len, bsz, embed_dim = query.shape
413
+ src_len, _, _ = key.shape
414
+
415
+ key_padding_mask = _canonical_mask(
416
+ mask=key_padding_mask,
417
+ mask_name="key_padding_mask",
418
+ other_type=_none_or_dtype(attn_mask),
419
+ other_name="attn_mask",
420
+ target_type=query.dtype
421
+ )
422
+
423
+ if is_causal and attn_mask is None:
424
+ raise RuntimeError(
425
+ "Need attn_mask if specifying the is_causal hint. "
426
+ "You may use the Transformer module method "
427
+ "`generate_square_subsequent_mask` to create this mask."
428
+ )
429
+
430
+ if is_causal and key_padding_mask is None and not need_weights:
431
+ # when we have a kpm or need weights, we need attn_mask
432
+ # Otherwise, we use the is_causal hint as the is_causal
433
+ # indicator to SDPA.
434
+ attn_mask = None
435
+ else:
436
+ attn_mask = _canonical_mask(
437
+ mask=attn_mask,
438
+ mask_name="attn_mask",
439
+ other_type=None,
440
+ other_name="",
441
+ target_type=query.dtype,
442
+ check_other=False,
443
+ )
444
+
445
+ if key_padding_mask is not None:
446
+ # We have the attn_mask, and use that to merge kpm into it.
447
+ # Turn off use of is_causal hint, as the merged mask is no
448
+ # longer causal.
449
+ is_causal = False
450
+
451
+ assert embed_dim == embed_dim_to_check, \
452
+ f"was expecting embedding dimension of {embed_dim_to_check}, but got {embed_dim}"
453
+ if isinstance(embed_dim, torch.Tensor):
454
+ # embed_dim can be a tensor when JIT tracing
455
+ head_dim = embed_dim.div(num_heads, rounding_mode='trunc')
456
+ else:
457
+ head_dim = embed_dim // num_heads
458
+ assert head_dim * num_heads == embed_dim, f"embed_dim {embed_dim} not divisible by num_heads {num_heads}"
459
+ if use_separate_proj_weight:
460
+ # allow MHA to have different embedding dimensions when separate projection weights are used
461
+ assert key.shape[:2] == value.shape[:2], \
462
+ f"key's sequence and batch dims {key.shape[:2]} do not match value's {value.shape[:2]}"
463
+ else:
464
+ assert key.shape == value.shape, f"key shape {key.shape} does not match value shape {value.shape}"
465
+
466
+ #
467
+ # compute in-projection
468
+ #
469
+ if not use_separate_proj_weight:
470
+ assert in_proj_weight is not None, "use_separate_proj_weight is False but in_proj_weight is None"
471
+ q, k, v = _in_projection_packed(query, key, value, in_proj_weight, in_proj_bias)
472
+ else:
473
+ assert q_proj_weight is not None, "use_separate_proj_weight is True but q_proj_weight is None"
474
+ assert k_proj_weight is not None, "use_separate_proj_weight is True but k_proj_weight is None"
475
+ assert v_proj_weight is not None, "use_separate_proj_weight is True but v_proj_weight is None"
476
+ if in_proj_bias is None:
477
+ b_q = b_k = b_v = None
478
+ else:
479
+ b_q, b_k, b_v = in_proj_bias.chunk(3)
480
+ q, k, v = _in_projection(query, key, value, q_proj_weight, k_proj_weight, v_proj_weight, b_q, b_k, b_v)
481
+
482
+ # prep attention mask
483
+
484
+ if attn_mask is not None:
485
+ # ensure attn_mask's dim is 3
486
+ if attn_mask.dim() == 2:
487
+ correct_2d_size = (tgt_len, src_len)
488
+ if attn_mask.shape != correct_2d_size:
489
+ raise RuntimeError(f"The shape of the 2D attn_mask is {attn_mask.shape}, but should be {correct_2d_size}.")
490
+ attn_mask = attn_mask.unsqueeze(0)
491
+ elif attn_mask.dim() == 3:
492
+ correct_3d_size = (bsz * num_heads, tgt_len, src_len)
493
+ if attn_mask.shape != correct_3d_size:
494
+ raise RuntimeError(f"The shape of the 3D attn_mask is {attn_mask.shape}, but should be {correct_3d_size}.")
495
+ else:
496
+ raise RuntimeError(f"attn_mask's dimension {attn_mask.dim()} is not supported")
497
+
498
+ # add bias along batch dimension (currently second)
499
+ if bias_k is not None and bias_v is not None:
500
+ assert static_k is None, "bias cannot be added to static key."
501
+ assert static_v is None, "bias cannot be added to static value."
502
+ k = torch.cat([k, bias_k.repeat(1, bsz, 1)])
503
+ v = torch.cat([v, bias_v.repeat(1, bsz, 1)])
504
+ if attn_mask is not None:
505
+ attn_mask = pad(attn_mask, (0, 1))
506
+ if key_padding_mask is not None:
507
+ key_padding_mask = pad(key_padding_mask, (0, 1))
508
+ else:
509
+ assert bias_k is None
510
+ assert bias_v is None
511
+
512
+ #
513
+ # reshape q, k, v for multi-head attention and make them batch-first
514
+ #
515
+ q = q.view(tgt_len, bsz * num_heads, head_dim).transpose(0, 1)
516
+ if static_k is None:
517
+ k = k.view(k.shape[0], bsz * num_heads, head_dim).transpose(0, 1)
518
+ else:
519
+ # TODO finish disentangling control flow so we don't do in-projections when statics are passed
520
+ assert static_k.size(0) == bsz * num_heads, \
521
+ f"expecting static_k.size(0) of {bsz * num_heads}, but got {static_k.size(0)}"
522
+ assert static_k.size(2) == head_dim, \
523
+ f"expecting static_k.size(2) of {head_dim}, but got {static_k.size(2)}"
524
+ k = static_k
525
+ if static_v is None:
526
+ v = v.view(v.shape[0], bsz * num_heads, head_dim).transpose(0, 1)
527
+ else:
528
+ # TODO finish disentangling control flow so we don't do in-projections when statics are passed
529
+ assert static_v.size(0) == bsz * num_heads, \
530
+ f"expecting static_v.size(0) of {bsz * num_heads}, but got {static_v.size(0)}"
531
+ assert static_v.size(2) == head_dim, \
532
+ f"expecting static_v.size(2) of {head_dim}, but got {static_v.size(2)}"
533
+ v = static_v
534
+
535
+ # add zero attention along batch dimension (now first)
536
+ if add_zero_attn:
537
+ zero_attn_shape = (bsz * num_heads, 1, head_dim)
538
+ k = torch.cat([k, torch.zeros(zero_attn_shape, dtype=k.dtype, device=k.device)], dim=1)
539
+ v = torch.cat([v, torch.zeros(zero_attn_shape, dtype=v.dtype, device=v.device)], dim=1)
540
+ if attn_mask is not None:
541
+ attn_mask = pad(attn_mask, (0, 1))
542
+ if key_padding_mask is not None:
543
+ key_padding_mask = pad(key_padding_mask, (0, 1))
544
+
545
+ # update source sequence length after adjustments
546
+ src_len = k.size(1)
547
+
548
+ # merge key padding and attention masks
549
+ if key_padding_mask is not None:
550
+ assert key_padding_mask.shape == (bsz, src_len), \
551
+ f"expecting key_padding_mask shape of {(bsz, src_len)}, but got {key_padding_mask.shape}"
552
+ key_padding_mask = key_padding_mask.view(bsz, 1, 1, src_len). \
553
+ expand(-1, num_heads, -1, -1).reshape(bsz * num_heads, 1, src_len)
554
+ if attn_mask is None:
555
+ attn_mask = key_padding_mask
556
+ else:
557
+ attn_mask = attn_mask + key_padding_mask
558
+
559
+ # adjust dropout probability
560
+ if not training:
561
+ dropout_p = 0.0
562
+
563
+ #
564
+ # (deep breath) calculate attention and out projection
565
+ #
566
+
567
+ if need_weights:
568
+ B, Nt, E = q.shape
569
+ q_scaled = q / math.sqrt(E)
570
+
571
+ assert not (is_causal and attn_mask is None), "FIXME: is_causal not implemented for need_weights"
572
+
573
+ if attn_mask is not None:
574
+ attn_output_weights = torch.baddbmm(attn_mask, q_scaled, k.transpose(-2, -1))
575
+ else:
576
+ attn_output_weights = torch.bmm(q_scaled, k.transpose(-2, -1))
577
+ attn_output_weights = softmax(attn_output_weights, dim=-1)
578
+ if dropout_p > 0.0:
579
+ attn_output_weights = dropout(attn_output_weights, p=dropout_p)
580
+
581
+ attn_output = torch.bmm(attn_output_weights, v)
582
+
583
+ attn_output = attn_output.transpose(0, 1).contiguous().view(tgt_len * bsz, embed_dim)
584
+ attn_output = self.out_proj(attn_output)
585
+ attn_output = attn_output.view(tgt_len, bsz, attn_output.size(1))
586
+
587
+ # optionally average attention weights over heads
588
+ attn_output_weights = attn_output_weights.view(bsz, num_heads, tgt_len, src_len)
589
+ if average_attn_weights:
590
+ attn_output_weights = attn_output_weights.mean(dim=1)
591
+
592
+ if not is_batched:
593
+ # squeeze the output if input was unbatched
594
+ attn_output = attn_output.squeeze(1)
595
+ attn_output_weights = attn_output_weights.squeeze(0)
596
+ return attn_output, attn_output_weights
597
+ else:
598
+ # attn_mask can be either (L,S) or (N*num_heads, L, S)
599
+ # if attn_mask's shape is (1, L, S) we need to unsqueeze to (1, 1, L, S)
600
+ # in order to match the input for SDPA of (N, num_heads, L, S)
601
+ if attn_mask is not None:
602
+ if attn_mask.size(0) == 1 and attn_mask.dim() == 3:
603
+ attn_mask = attn_mask.unsqueeze(0)
604
+ else:
605
+ attn_mask = attn_mask.view(bsz, num_heads, -1, src_len)
606
+
607
+ q = q.view(bsz, num_heads, tgt_len, head_dim)
608
+ k = k.view(bsz, num_heads, src_len, head_dim)
609
+ v = v.view(bsz, num_heads, src_len, head_dim)
610
+
611
+ attn_output = F.scaled_dot_product_attention(q, k, v, attn_mask, dropout_p, is_causal)
612
+ attn_output = attn_output.permute(2, 0, 1, 3).contiguous().view(bsz * tgt_len, embed_dim)
613
+
614
+ attn_output = self.out_proj(attn_output)
615
+ attn_output = attn_output.view(tgt_len, bsz, attn_output.size(1))
616
+ if not is_batched:
617
+ # squeeze the output if input was unbatched
618
+ attn_output = attn_output.squeeze(1)
619
+ return attn_output, None
620
+
621
+
622
+ def _mha_shape_check(query: Tensor, key: Tensor, value: Tensor,
623
+ key_padding_mask: Optional[Tensor], attn_mask: Optional[Tensor], num_heads: int):
624
+ # Verifies the expected shape for `query, `key`, `value`, `key_padding_mask` and `attn_mask`
625
+ # and returns if the input is batched or not.
626
+ # Raises an error if `query` is not 2-D (unbatched) or 3-D (batched) tensor.
627
+
628
+ # Shape check.
629
+ if query.dim() == 3:
630
+ # Batched Inputs
631
+ is_batched = True
632
+ assert key.dim() == 3 and value.dim() == 3, \
633
+ ("For batched (3-D) `query`, expected `key` and `value` to be 3-D"
634
+ f" but found {key.dim()}-D and {value.dim()}-D tensors respectively")
635
+ if key_padding_mask is not None:
636
+ assert key_padding_mask.dim() == 2, \
637
+ ("For batched (3-D) `query`, expected `key_padding_mask` to be `None` or 2-D"
638
+ f" but found {key_padding_mask.dim()}-D tensor instead")
639
+ if attn_mask is not None:
640
+ assert attn_mask.dim() in (2, 3), \
641
+ ("For batched (3-D) `query`, expected `attn_mask` to be `None`, 2-D or 3-D"
642
+ f" but found {attn_mask.dim()}-D tensor instead")
643
+ elif query.dim() == 2:
644
+ # Unbatched Inputs
645
+ is_batched = False
646
+ assert key.dim() == 2 and value.dim() == 2, \
647
+ ("For unbatched (2-D) `query`, expected `key` and `value` to be 2-D"
648
+ f" but found {key.dim()}-D and {value.dim()}-D tensors respectively")
649
+
650
+ if key_padding_mask is not None:
651
+ assert key_padding_mask.dim() == 1, \
652
+ ("For unbatched (2-D) `query`, expected `key_padding_mask` to be `None` or 1-D"
653
+ f" but found {key_padding_mask.dim()}-D tensor instead")
654
+
655
+ if attn_mask is not None:
656
+ assert attn_mask.dim() in (2, 3), \
657
+ ("For unbatched (2-D) `query`, expected `attn_mask` to be `None`, 2-D or 3-D"
658
+ f" but found {attn_mask.dim()}-D tensor instead")
659
+ if attn_mask.dim() == 3:
660
+ expected_shape = (num_heads, query.shape[0], key.shape[0])
661
+ assert attn_mask.shape == expected_shape, \
662
+ (f"Expected `attn_mask` shape to be {expected_shape} but got {attn_mask.shape}")
663
+ else:
664
+ raise AssertionError(
665
+ f"query should be unbatched 2D or batched 3D tensor but received {query.dim()}-D query tensor")
666
+
667
+ return is_batched
668
+
669
+
670
+ def _canonical_mask(
671
+ mask: Optional[Tensor],
672
+ mask_name: str,
673
+ other_type: Optional[DType],
674
+ other_name: str,
675
+ target_type: DType,
676
+ check_other: bool = True,
677
+ ) -> Optional[Tensor]:
678
+
679
+ if mask is not None:
680
+ _mask_dtype = mask.dtype
681
+ _mask_is_float = torch.is_floating_point(mask)
682
+ if _mask_dtype != torch.bool and not _mask_is_float:
683
+ raise AssertionError(
684
+ f"only bool and floating types of {mask_name} are supported")
685
+ if check_other and other_type is not None:
686
+ if _mask_dtype != other_type:
687
+ warnings.warn(
688
+ f"Support for mismatched {mask_name} and {other_name} "
689
+ "is deprecated. Use same type for both instead."
690
+ )
691
+ if not _mask_is_float:
692
+ mask = (
693
+ torch.zeros_like(mask, dtype=target_type)
694
+ .masked_fill_(mask, float("-inf"))
695
+ )
696
+ return mask
697
+
698
+
699
+ def _none_or_dtype(input: Optional[Tensor]) -> Optional[DType]:
700
+ if input is None:
701
+ return None
702
+ elif isinstance(input, torch.Tensor):
703
+ return input.dtype
704
+ raise RuntimeError("input to _none_or_dtype() must be None or torch.Tensor")
705
+
706
+ def _in_projection_packed(
707
+ q: Tensor,
708
+ k: Tensor,
709
+ v: Tensor,
710
+ w: Tensor,
711
+ b: Optional[Tensor] = None,
712
+ ) -> List[Tensor]:
713
+ r"""
714
+ Performs the in-projection step of the attention operation, using packed weights.
715
+ Output is a triple containing projection tensors for query, key and value.
716
+ Args:
717
+ q, k, v: query, key and value tensors to be projected. For self-attention,
718
+ these are typically the same tensor; for encoder-decoder attention,
719
+ k and v are typically the same tensor. (We take advantage of these
720
+ identities for performance if they are present.) Regardless, q, k and v
721
+ must share a common embedding dimension; otherwise their shapes may vary.
722
+ w: projection weights for q, k and v, packed into a single tensor. Weights
723
+ are packed along dimension 0, in q, k, v order.
724
+ b: optional projection biases for q, k and v, packed into a single tensor
725
+ in q, k, v order.
726
+ Shape:
727
+ Inputs:
728
+ - q: :math:`(..., E)` where E is the embedding dimension
729
+ - k: :math:`(..., E)` where E is the embedding dimension
730
+ - v: :math:`(..., E)` where E is the embedding dimension
731
+ - w: :math:`(E * 3, E)` where E is the embedding dimension
732
+ - b: :math:`E * 3` where E is the embedding dimension
733
+ Output:
734
+ - in output list :math:`[q', k', v']`, each output tensor will have the
735
+ same shape as the corresponding input tensor.
736
+ """
737
+ E = q.size(-1)
738
+ if k is v:
739
+ if q is k:
740
+ # self-attention
741
+ proj = linear(q, w, b)
742
+ # reshaping to (3, E) rather than (E, 3) is deliberate: it gives better memory coalescing and keeps the same order as chunk()
743
+ proj = proj.unflatten(-1, (3, E)).unsqueeze(0).transpose(0, -2).squeeze(-2).contiguous()
744
+ return proj[0], proj[1], proj[2]
745
+ else:
746
+ # encoder-decoder attention
747
+ w_q, w_kv = w.split([E, E * 2])
748
+ if b is None:
749
+ b_q = b_kv = None
750
+ else:
751
+ b_q, b_kv = b.split([E, E * 2])
752
+ q_proj = linear(q, w_q, b_q)
753
+ kv_proj = linear(k, w_kv, b_kv)
754
+ # reshaping to (2, E) rather than (E, 2) is deliberate: it gives better memory coalescing and keeps the same order as chunk()
755
+ kv_proj = kv_proj.unflatten(-1, (2, E)).unsqueeze(0).transpose(0, -2).squeeze(-2).contiguous()
756
+ return (q_proj, kv_proj[0], kv_proj[1])
757
+ else:
758
+ w_q, w_k, w_v = w.chunk(3)
759
+ if b is None:
760
+ b_q = b_k = b_v = None
761
+ else:
762
+ b_q, b_k, b_v = b.chunk(3)
763
+ return linear(q, w_q, b_q), linear(k, w_k, b_k), linear(v, w_v, b_v)
764
+
765
+
766
+ def _in_projection(
767
+ q: Tensor,
768
+ k: Tensor,
769
+ v: Tensor,
770
+ w_q: Tensor,
771
+ w_k: Tensor,
772
+ w_v: Tensor,
773
+ b_q: Optional[Tensor] = None,
774
+ b_k: Optional[Tensor] = None,
775
+ b_v: Optional[Tensor] = None,
776
+ ) -> Tuple[Tensor, Tensor, Tensor]:
777
+ r"""
778
+ Performs the in-projection step of the attention operation. This is simply
779
+ a triple of linear projections, with shape constraints on the weights which
780
+ ensure embedding dimension uniformity in the projected outputs.
781
+ Output is a triple containing projection tensors for query, key and value.
782
+ Args:
783
+ q, k, v: query, key and value tensors to be projected.
784
+ w_q, w_k, w_v: weights for q, k and v, respectively.
785
+ b_q, b_k, b_v: optional biases for q, k and v, respectively.
786
+ Shape:
787
+ Inputs:
788
+ - q: :math:`(Qdims..., Eq)` where Eq is the query embedding dimension and Qdims are any
789
+ number of leading dimensions.
790
+ - k: :math:`(Kdims..., Ek)` where Ek is the key embedding dimension and Kdims are any
791
+ number of leading dimensions.
792
+ - v: :math:`(Vdims..., Ev)` where Ev is the value embedding dimension and Vdims are any
793
+ number of leading dimensions.
794
+ - w_q: :math:`(Eq, Eq)`
795
+ - w_k: :math:`(Eq, Ek)`
796
+ - w_v: :math:`(Eq, Ev)`
797
+ - b_q: :math:`(Eq)`
798
+ - b_k: :math:`(Eq)`
799
+ - b_v: :math:`(Eq)`
800
+ Output: in output triple :math:`(q', k', v')`,
801
+ - q': :math:`[Qdims..., Eq]`
802
+ - k': :math:`[Kdims..., Eq]`
803
+ - v': :math:`[Vdims..., Eq]`
804
+ """
805
+ Eq, Ek, Ev = q.size(-1), k.size(-1), v.size(-1)
806
+ assert w_q.shape == (Eq, Eq), f"expecting query weights shape of {(Eq, Eq)}, but got {w_q.shape}"
807
+ assert w_k.shape == (Eq, Ek), f"expecting key weights shape of {(Eq, Ek)}, but got {w_k.shape}"
808
+ assert w_v.shape == (Eq, Ev), f"expecting value weights shape of {(Eq, Ev)}, but got {w_v.shape}"
809
+ assert b_q is None or b_q.shape == (Eq,), f"expecting query bias shape of {(Eq,)}, but got {b_q.shape}"
810
+ assert b_k is None or b_k.shape == (Eq,), f"expecting key bias shape of {(Eq,)}, but got {b_k.shape}"
811
+ assert b_v is None or b_v.shape == (Eq,), f"expecting value bias shape of {(Eq,)}, but got {b_v.shape}"
812
+ return linear(q, w_q, b_q), linear(k, w_k, b_k), linear(v, w_v, b_v)
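
For orientation, a minimal sketch of how the resampler above can be exercised in isolation. The dimensions below are illustrative assumptions (not values read from this checkpoint's config) and the snippet assumes the definitions from resampler.py are already imported.

```python
# Illustrative sketch: dummy visual features run through the Resampler defined above.
import torch

resampler = Resampler(num_queries=96, embed_dim=4096, num_heads=32, kv_dim=1152)
feats = torch.randn(2, 1024, 1152)              # batch of 2, 1024 patches, kv_dim features
tgt_sizes = torch.tensor([[32, 32], [16, 64]])  # per-image (height, width) patch grids
out = resampler(feats, tgt_sizes=tgt_sizes)     # -> shape (2, 96, 4096)
```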
special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
1
+ {
2
+ "bos_token": {
3
+ "content": "<|begin_of_text|>",
4
+ "lstrip": false,
5
+ "normalized": false,
6
+ "rstrip": false,
7
+ "single_word": false
8
+ },
9
+ "eos_token": {
10
+ "content": "<|end_of_text|>",
11
+ "lstrip": false,
12
+ "normalized": false,
13
+ "rstrip": false,
14
+ "single_word": false
15
+ },
16
+ "pad_token": "!",
17
+ "unk_token": {
18
+ "content": "<unk>",
19
+ "lstrip": false,
20
+ "normalized": false,
21
+ "rstrip": false,
22
+ "single_word": false
23
+ }
24
+ }
tokenization_minicpmv_fast.py ADDED
@@ -0,0 +1,51 @@
1
+ import json
2
+
3
+ from transformers.tokenization_utils_fast import PreTrainedTokenizerFast
4
+
5
+
6
+ class MiniCPMVTokenizerFast(PreTrainedTokenizerFast):
7
+ def __init__(self, **kwargs):
8
+ super().__init__(**kwargs)
9
+ self.eot_token = "<|eot_id|>"
10
+ self.im_start = "<image>"
11
+ self.im_end = "</image>"
12
+ self.ref_start = "<ref>"
13
+ self.ref_end = "</ref>"
14
+ self.box_start = "<box>"
15
+ self.box_end = "</box>"
16
+ self.quad_start = "<quad>"
17
+ self.quad_end = "</quad>"
18
+ self.slice_start = "<slice>"
19
+ self.slice_end = "</slice>"
20
+
21
+ @property
22
+ def eos_id(self):
23
+ return self.eos_token_id
24
+
25
+ @property
26
+ def bos_id(self):
27
+ return self.bos_token_id
28
+
29
+ @property
30
+ def unk_id(self):
31
+ return self.unk_token_id
32
+
33
+ @property
34
+ def eot_id(self):
35
+ return self.convert_tokens_to_ids(self.eot_token)
36
+
37
+ @property
38
+ def im_start_id(self):
39
+ return self.convert_tokens_to_ids(self.im_start)
40
+
41
+ @property
42
+ def im_end_id(self):
43
+ return self.convert_tokens_to_ids(self.im_end)
44
+
45
+ @staticmethod
46
+ def escape(text: str) -> str:
47
+ return text
48
+
49
+ @staticmethod
50
+ def unescape(text: str) -> str:
51
+ return text
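
The convenience properties above simply resolve the multimodal special tokens (registered in tokenizer_config.json below) to their ids. A small illustrative sketch, assuming the tokenizer_config.json of this repository registers `MiniCPMVTokenizerFast` via `auto_map`; the checkpoint path is a placeholder.

```python
# Hedged sketch: resolving the image special tokens to their ids.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/this/checkpoint", trust_remote_code=True)
print(tok.im_start_id, tok.im_end_id)  # ids of "<image>" and "</image>"
print(tok.eot_id)                      # id of "<|eot_id|>"
```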
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,2072 @@
1
+ {
2
+ "added_tokens_decoder": {
3
+ "128000": {
4
+ "content": "<|begin_of_text|>",
5
+ "lstrip": false,
6
+ "normalized": false,
7
+ "rstrip": false,
8
+ "single_word": false,
9
+ "special": true
10
+ },
11
+ "128001": {
12
+ "content": "<|end_of_text|>",
13
+ "lstrip": false,
14
+ "normalized": false,
15
+ "rstrip": false,
16
+ "single_word": false,
17
+ "special": true
18
+ },
19
+ "128002": {
20
+ "content": "<unk>",
21
+ "lstrip": false,
22
+ "normalized": false,
23
+ "rstrip": false,
24
+ "single_word": false,
25
+ "special": true
26
+ },
27
+ "128003": {
28
+ "content": "<|reserved_special_token_1|>",
29
+ "lstrip": false,
30
+ "normalized": false,
31
+ "rstrip": false,
32
+ "single_word": false,
33
+ "special": true
34
+ },
35
+ "128004": {
36
+ "content": "<|reserved_special_token_2|>",
37
+ "lstrip": false,
38
+ "normalized": false,
39
+ "rstrip": false,
40
+ "single_word": false,
41
+ "special": true
42
+ },
43
+ "128005": {
44
+ "content": "<|reserved_special_token_3|>",
45
+ "lstrip": false,
46
+ "normalized": false,
47
+ "rstrip": false,
48
+ "single_word": false,
49
+ "special": true
50
+ },
51
+ "128006": {
52
+ "content": "<|start_header_id|>",
53
+ "lstrip": false,
54
+ "normalized": false,
55
+ "rstrip": false,
56
+ "single_word": false,
57
+ "special": true
58
+ },
59
+ "128007": {
60
+ "content": "<|end_header_id|>",
61
+ "lstrip": false,
62
+ "normalized": false,
63
+ "rstrip": false,
64
+ "single_word": false,
65
+ "special": true
66
+ },
67
+ "128008": {
68
+ "content": "<|reserved_special_token_4|>",
69
+ "lstrip": false,
70
+ "normalized": false,
71
+ "rstrip": false,
72
+ "single_word": false,
73
+ "special": true
74
+ },
75
+ "128009": {
76
+ "content": "<|eot_id|>",
77
+ "lstrip": false,
78
+ "normalized": false,
79
+ "rstrip": false,
80
+ "single_word": false,
81
+ "special": true
82
+ },
83
+ "128010": {
84
+ "content": "<image>",
85
+ "lstrip": false,
86
+ "normalized": false,
87
+ "rstrip": false,
88
+ "single_word": false,
89
+ "special": true
90
+ },
91
+ "128011": {
92
+ "content": "</image>",
93
+ "lstrip": false,
94
+ "normalized": false,
95
+ "rstrip": false,
96
+ "single_word": false,
97
+ "special": true
98
+ },
99
+ "128012": {
100
+ "content": "<ref>",
101
+ "lstrip": false,
102
+ "normalized": false,
103
+ "rstrip": false,
104
+ "single_word": false,
105
+ "special": true
106
+ },
107
+ "128013": {
108
+ "content": "</ref>",
109
+ "lstrip": false,
110
+ "normalized": false,
111
+ "rstrip": false,
112
+ "single_word": false,
113
+ "special": true
114
+ },
115
+ "128014": {
116
+ "content": "<box>",
117
+ "lstrip": false,
118
+ "normalized": false,
119
+ "rstrip": false,
120
+ "single_word": false,
121
+ "special": true
122
+ },
123
+ "128015": {
124
+ "content": "</box>",
125
+ "lstrip": false,
126
+ "normalized": false,
127
+ "rstrip": false,
128
+ "single_word": false,
129
+ "special": true
130
+ },
131
+ "128016": {
132
+ "content": "<quad>",
133
+ "lstrip": false,
134
+ "normalized": false,
135
+ "rstrip": false,
136
+ "single_word": false,
137
+ "special": true
138
+ },
139
+ "128017": {
140
+ "content": "</quad>",
141
+ "lstrip": false,
142
+ "normalized": false,
143
+ "rstrip": false,
144
+ "single_word": false,
145
+ "special": true
146
+ },
147
+ "128018": {
148
+ "content": "<point>",
149
+ "lstrip": false,
150
+ "normalized": false,
151
+ "rstrip": false,
152
+ "single_word": false,
153
+ "special": true
154
+ },
155
+ "128019": {
156
+ "content": "</point>",
157
+ "lstrip": false,
158
+ "normalized": false,
159
+ "rstrip": false,
160
+ "single_word": false,
161
+ "special": true
162
+ },
163
+ "128020": {
164
+ "content": "<slice>",
165
+ "lstrip": false,
166
+ "normalized": false,
167
+ "rstrip": false,
168
+ "single_word": false,
169
+ "special": true
170
+ },
171
+ "128021": {
172
+ "content": "</slice>",
173
+ "lstrip": false,
174
+ "normalized": false,
175
+ "rstrip": false,
176
+ "single_word": false,
177
+ "special": true
178
+ },
179
+ "128022": {
180
+ "content": "<|reserved_special_token_17|>",
181
+ "lstrip": false,
182
+ "normalized": false,
183
+ "rstrip": false,
184
+ "single_word": false,
185
+ "special": true
186
+ },
187
+ "128023": {
188
+ "content": "<|reserved_special_token_18|>",
189
+ "lstrip": false,
190
+ "normalized": false,
191
+ "rstrip": false,
192
+ "single_word": false,
193
+ "special": true
194
+ },
195
+ "128024": {
196
+ "content": "<|reserved_special_token_19|>",
197
+ "lstrip": false,
198
+ "normalized": false,
199
+ "rstrip": false,
200
+ "single_word": false,
201
+ "special": true
202
+ },
203
+ "128025": {
204
+ "content": "<|reserved_special_token_20|>",
205
+ "lstrip": false,
206
+ "normalized": false,
207
+ "rstrip": false,
208
+ "single_word": false,
209
+ "special": true
210
+ },
211
+ "128026": {
212
+ "content": "<|reserved_special_token_21|>",
213
+ "lstrip": false,
214
+ "normalized": false,
215
+ "rstrip": false,
216
+ "single_word": false,
217
+ "special": true
218
+ },
219
+ "128027": {
220
+ "content": "<|reserved_special_token_22|>",
221
+ "lstrip": false,
222
+ "normalized": false,
223
+ "rstrip": false,
224
+ "single_word": false,
225
+ "special": true
226
+ },
227
+ "128028": {
228
+ "content": "<|reserved_special_token_23|>",
229
+ "lstrip": false,
230
+ "normalized": false,
231
+ "rstrip": false,
232
+ "single_word": false,
233
+ "special": true
234
+ },
235
+ "128029": {
236
+ "content": "<|reserved_special_token_24|>",
237
+ "lstrip": false,
238
+ "normalized": false,
239
+ "rstrip": false,
240
+ "single_word": false,
241
+ "special": true
242
+ },
243
+ "128030": {
244
+ "content": "<|reserved_special_token_25|>",
245
+ "lstrip": false,
246
+ "normalized": false,
247
+ "rstrip": false,
248
+ "single_word": false,
249
+ "special": true
250
+ },
251
+ "128031": {
252
+ "content": "<|reserved_special_token_26|>",
253
+ "lstrip": false,
254
+ "normalized": false,
255
+ "rstrip": false,
256
+ "single_word": false,
257
+ "special": true
258
+ },
259
+ "128032": {
260
+ "content": "<|reserved_special_token_27|>",
261
+ "lstrip": false,
262
+ "normalized": false,
263
+ "rstrip": false,
264
+ "single_word": false,
265
+ "special": true
266
+ },
267
+ "128033": {
268
+ "content": "<|reserved_special_token_28|>",
269
+ "lstrip": false,
270
+ "normalized": false,
271
+ "rstrip": false,
272
+ "single_word": false,
273
+ "special": true
274
+ },
275
+ "128034": {
276
+ "content": "<|reserved_special_token_29|>",
277
+ "lstrip": false,
278
+ "normalized": false,
279
+ "rstrip": false,
280
+ "single_word": false,
281
+ "special": true
282
+ },
283
+ "128035": {
284
+ "content": "<|reserved_special_token_30|>",
285
+ "lstrip": false,
286
+ "normalized": false,
287
+ "rstrip": false,
288
+ "single_word": false,
289
+ "special": true
290
+ },
291
+ "128036": {
292
+ "content": "<|reserved_special_token_31|>",
293
+ "lstrip": false,
294
+ "normalized": false,
295
+ "rstrip": false,
296
+ "single_word": false,
297
+ "special": true
298
+ },
299
+ "128037": {
300
+ "content": "<|reserved_special_token_32|>",
301
+ "lstrip": false,
302
+ "normalized": false,
303
+ "rstrip": false,
304
+ "single_word": false,
305
+ "special": true
306
+ },
307
+ "128038": {
308
+ "content": "<|reserved_special_token_33|>",
309
+ "lstrip": false,
310
+ "normalized": false,
311
+ "rstrip": false,
312
+ "single_word": false,
313
+ "special": true
314
+ },
315
+ "128039": {
316
+ "content": "<|reserved_special_token_34|>",
317
+ "lstrip": false,
318
+ "normalized": false,
319
+ "rstrip": false,
320
+ "single_word": false,
321
+ "special": true
322
+ },
323
+ "128040": {
324
+ "content": "<|reserved_special_token_35|>",
325
+ "lstrip": false,
326
+ "normalized": false,
327
+ "rstrip": false,
328
+ "single_word": false,
329
+ "special": true
330
+ },
331
+ "128041": {
332
+ "content": "<|reserved_special_token_36|>",
333
+ "lstrip": false,
334
+ "normalized": false,
335
+ "rstrip": false,
336
+ "single_word": false,
337
+ "special": true
338
+ },
339
+ "128042": {
340
+ "content": "<|reserved_special_token_37|>",
341
+ "lstrip": false,
342
+ "normalized": false,
343
+ "rstrip": false,
344
+ "single_word": false,
345
+ "special": true
346
+ },
347
+ "128043": {
348
+ "content": "<|reserved_special_token_38|>",
349
+ "lstrip": false,
350
+ "normalized": false,
351
+ "rstrip": false,
352
+ "single_word": false,
353
+ "special": true
354
+ },
355
+ "128044": {
356
+ "content": "<|reserved_special_token_39|>",
357
+ "lstrip": false,
358
+ "normalized": false,
359
+ "rstrip": false,
360
+ "single_word": false,
361
+ "special": true
362
+ },
363
+ "128045": {
364
+ "content": "<|reserved_special_token_40|>",
365
+ "lstrip": false,
366
+ "normalized": false,
367
+ "rstrip": false,
368
+ "single_word": false,
369
+ "special": true
370
+ },
371
+ "128046": {
372
+ "content": "<|reserved_special_token_41|>",
373
+ "lstrip": false,
374
+ "normalized": false,
375
+ "rstrip": false,
376
+ "single_word": false,
377
+ "special": true
378
+ },
379
+ "128047": {
380
+ "content": "<|reserved_special_token_42|>",
381
+ "lstrip": false,
382
+ "normalized": false,
383
+ "rstrip": false,
384
+ "single_word": false,
385
+ "special": true
386
+ },
387
+ "128048": {
388
+ "content": "<|reserved_special_token_43|>",
389
+ "lstrip": false,
390
+ "normalized": false,
391
+ "rstrip": false,
392
+ "single_word": false,
393
+ "special": true
394
+ },
395
+ "128049": {
396
+ "content": "<|reserved_special_token_44|>",
397
+ "lstrip": false,
398
+ "normalized": false,
399
+ "rstrip": false,
400
+ "single_word": false,
401
+ "special": true
402
+ },
403
+ "128050": {
404
+ "content": "<|reserved_special_token_45|>",
405
+ "lstrip": false,
406
+ "normalized": false,
407
+ "rstrip": false,
408
+ "single_word": false,
409
+ "special": true
410
+ },
411
+ "128051": {
412
+ "content": "<|reserved_special_token_46|>",
413
+ "lstrip": false,
414
+ "normalized": false,
415
+ "rstrip": false,
416
+ "single_word": false,
417
+ "special": true
418
+ },
419
+ "128052": {
420
+ "content": "<|reserved_special_token_47|>",
421
+ "lstrip": false,
422
+ "normalized": false,
423
+ "rstrip": false,
424
+ "single_word": false,
425
+ "special": true
426
+ },
427
+ "128053": {
428
+ "content": "<|reserved_special_token_48|>",
429
+ "lstrip": false,
430
+ "normalized": false,
431
+ "rstrip": false,
432
+ "single_word": false,
433
+ "special": true
434
+ },
435
+ "128054": {
436
+ "content": "<|reserved_special_token_49|>",
437
+ "lstrip": false,
438
+ "normalized": false,
439
+ "rstrip": false,
440
+ "single_word": false,
441
+ "special": true
442
+ },
443
+ "128055": {
444
+ "content": "<|reserved_special_token_50|>",
445
+ "lstrip": false,
446
+ "normalized": false,
447
+ "rstrip": false,
448
+ "single_word": false,
449
+ "special": true
450
+ },
451
+ "128056": {
452
+ "content": "<|reserved_special_token_51|>",
453
+ "lstrip": false,
454
+ "normalized": false,
455
+ "rstrip": false,
456
+ "single_word": false,
457
+ "special": true
458
+ },
459
+ "128057": {
460
+ "content": "<|reserved_special_token_52|>",
461
+ "lstrip": false,
462
+ "normalized": false,
463
+ "rstrip": false,
464
+ "single_word": false,
465
+ "special": true
466
+ },
467
+ "128058": {
468
+ "content": "<|reserved_special_token_53|>",
469
+ "lstrip": false,
470
+ "normalized": false,
471
+ "rstrip": false,
472
+ "single_word": false,
473
+ "special": true
474
+ },
475
+ "128059": {
476
+ "content": "<|reserved_special_token_54|>",
477
+ "lstrip": false,
478
+ "normalized": false,
479
+ "rstrip": false,
480
+ "single_word": false,
481
+ "special": true
482
+ },
483
+ "128060": {
484
+ "content": "<|reserved_special_token_55|>",
485
+ "lstrip": false,
486
+ "normalized": false,
487
+ "rstrip": false,
488
+ "single_word": false,
489
+ "special": true
490
+ },
491
+ "128061": {
492
+ "content": "<|reserved_special_token_56|>",
493
+ "lstrip": false,
494
+ "normalized": false,
495
+ "rstrip": false,
496
+ "single_word": false,
497
+ "special": true
498
+ },
499
+ "128062": {
500
+ "content": "<|reserved_special_token_57|>",
501
+ "lstrip": false,
502
+ "normalized": false,
503
+ "rstrip": false,
504
+ "single_word": false,
505
+ "special": true
506
+ },
507
+ "128063": {
508
+ "content": "<|reserved_special_token_58|>",
509
+ "lstrip": false,
510
+ "normalized": false,
511
+ "rstrip": false,
512
+ "single_word": false,
513
+ "special": true
514
+ },
515
+ "128064": {
516
+ "content": "<|reserved_special_token_59|>",
517
+ "lstrip": false,
518
+ "normalized": false,
519
+ "rstrip": false,
520
+ "single_word": false,
521
+ "special": true
522
+ },
523
+ "128065": {
524
+ "content": "<|reserved_special_token_60|>",
525
+ "lstrip": false,
526
+ "normalized": false,
527
+ "rstrip": false,
528
+ "single_word": false,
529
+ "special": true
530
+ },
531
+ "128066": {
532
+ "content": "<|reserved_special_token_61|>",
533
+ "lstrip": false,
534
+ "normalized": false,
535
+ "rstrip": false,
536
+ "single_word": false,
537
+ "special": true
538
+ },
539
+ "128067": {
540
+ "content": "<|reserved_special_token_62|>",
541
+ "lstrip": false,
542
+ "normalized": false,
543
+ "rstrip": false,
544
+ "single_word": false,
545
+ "special": true
546
+ },
547
+ "128068": {
548
+ "content": "<|reserved_special_token_63|>",
549
+ "lstrip": false,
550
+ "normalized": false,
551
+ "rstrip": false,
552
+ "single_word": false,
553
+ "special": true
554
+ },
555
+ "128069": {
556
+ "content": "<|reserved_special_token_64|>",
557
+ "lstrip": false,
558
+ "normalized": false,
559
+ "rstrip": false,
560
+ "single_word": false,
561
+ "special": true
562
+ },
563
+ "128070": {
564
+ "content": "<|reserved_special_token_65|>",
565
+ "lstrip": false,
566
+ "normalized": false,
567
+ "rstrip": false,
568
+ "single_word": false,
569
+ "special": true
570
+ },
571
+ "128071": {
572
+ "content": "<|reserved_special_token_66|>",
573
+ "lstrip": false,
574
+ "normalized": false,
575
+ "rstrip": false,
576
+ "single_word": false,
577
+ "special": true
578
+ },
579
+ "128072": {
580
+ "content": "<|reserved_special_token_67|>",
581
+ "lstrip": false,
582
+ "normalized": false,
583
+ "rstrip": false,
584
+ "single_word": false,
585
+ "special": true
586
+ },
587
+ "128073": {
588
+ "content": "<|reserved_special_token_68|>",
589
+ "lstrip": false,
590
+ "normalized": false,
591
+ "rstrip": false,
592
+ "single_word": false,
593
+ "special": true
594
+ },
595
+ "128074": {
596
+ "content": "<|reserved_special_token_69|>",
597
+ "lstrip": false,
598
+ "normalized": false,
599
+ "rstrip": false,
600
+ "single_word": false,
601
+ "special": true
602
+ },
603
+ "128075": {
604
+ "content": "<|reserved_special_token_70|>",
605
+ "lstrip": false,
606
+ "normalized": false,
607
+ "rstrip": false,
608
+ "single_word": false,
609
+ "special": true
610
+ },
611
+ "128076": {
612
+ "content": "<|reserved_special_token_71|>",
613
+ "lstrip": false,
614
+ "normalized": false,
615
+ "rstrip": false,
616
+ "single_word": false,
617
+ "special": true
618
+ },
619
+ "128077": {
620
+ "content": "<|reserved_special_token_72|>",
621
+ "lstrip": false,
622
+ "normalized": false,
623
+ "rstrip": false,
624
+ "single_word": false,
625
+ "special": true
626
+ },
627
+ "128078": {
628
+ "content": "<|reserved_special_token_73|>",
629
+ "lstrip": false,
630
+ "normalized": false,
631
+ "rstrip": false,
632
+ "single_word": false,
633
+ "special": true
634
+ },
635
+ "128079": {
636
+ "content": "<|reserved_special_token_74|>",
637
+ "lstrip": false,
638
+ "normalized": false,
639
+ "rstrip": false,
640
+ "single_word": false,
641
+ "special": true
642
+ },
643
+ "128080": {
644
+ "content": "<|reserved_special_token_75|>",
645
+ "lstrip": false,
646
+ "normalized": false,
647
+ "rstrip": false,
648
+ "single_word": false,
649
+ "special": true
650
+ },
651
+ "128081": {
652
+ "content": "<|reserved_special_token_76|>",
653
+ "lstrip": false,
654
+ "normalized": false,
655
+ "rstrip": false,
656
+ "single_word": false,
657
+ "special": true
658
+ },
659
+ "128082": {
660
+ "content": "<|reserved_special_token_77|>",
661
+ "lstrip": false,
662
+ "normalized": false,
663
+ "rstrip": false,
664
+ "single_word": false,
665
+ "special": true
666
+ },
667
+ "128083": {
668
+ "content": "<|reserved_special_token_78|>",
669
+ "lstrip": false,
670
+ "normalized": false,
671
+ "rstrip": false,
672
+ "single_word": false,
673
+ "special": true
674
+ },
675
+ "128084": {
676
+ "content": "<|reserved_special_token_79|>",
677
+ "lstrip": false,
678
+ "normalized": false,
679
+ "rstrip": false,
680
+ "single_word": false,
681
+ "special": true
682
+ },
683
+ "128085": {
684
+ "content": "<|reserved_special_token_80|>",
685
+ "lstrip": false,
686
+ "normalized": false,
687
+ "rstrip": false,
688
+ "single_word": false,
689
+ "special": true
690
+ },
691
+ "128086": {
692
+ "content": "<|reserved_special_token_81|>",
693
+ "lstrip": false,
694
+ "normalized": false,
695
+ "rstrip": false,
696
+ "single_word": false,
697
+ "special": true
698
+ },
699
+ "128087": {
700
+ "content": "<|reserved_special_token_82|>",
701
+ "lstrip": false,
702
+ "normalized": false,
703
+ "rstrip": false,
704
+ "single_word": false,
705
+ "special": true
706
+ },
707
+ "128088": {
708
+ "content": "<|reserved_special_token_83|>",
709
+ "lstrip": false,
710
+ "normalized": false,
711
+ "rstrip": false,
712
+ "single_word": false,
713
+ "special": true
714
+ },
715
+ "128089": {
716
+ "content": "<|reserved_special_token_84|>",
717
+ "lstrip": false,
718
+ "normalized": false,
719
+ "rstrip": false,
720
+ "single_word": false,
721
+ "special": true
722
+ },
723
+ "128090": {
724
+ "content": "<|reserved_special_token_85|>",
725
+ "lstrip": false,
726
+ "normalized": false,
727
+ "rstrip": false,
728
+ "single_word": false,
729
+ "special": true
730
+ },
731
+ "128091": {
732
+ "content": "<|reserved_special_token_86|>",
733
+ "lstrip": false,
734
+ "normalized": false,
735
+ "rstrip": false,
736
+ "single_word": false,
737
+ "special": true
738
+ },
739
+ "128092": {
740
+ "content": "<|reserved_special_token_87|>",
741
+ "lstrip": false,
742
+ "normalized": false,
743
+ "rstrip": false,
744
+ "single_word": false,
745
+ "special": true
746
+ },
747
+ "128093": {
748
+ "content": "<|reserved_special_token_88|>",
749
+ "lstrip": false,
750
+ "normalized": false,
751
+ "rstrip": false,
752
+ "single_word": false,
753
+ "special": true
754
+ },
755
+ "128094": {
756
+ "content": "<|reserved_special_token_89|>",
757
+ "lstrip": false,
758
+ "normalized": false,
759
+ "rstrip": false,
760
+ "single_word": false,
761
+ "special": true
762
+ },
763
+ "128095": {
764
+ "content": "<|reserved_special_token_90|>",
765
+ "lstrip": false,
766
+ "normalized": false,
767
+ "rstrip": false,
768
+ "single_word": false,
769
+ "special": true
770
+ },
771
+ "128096": {
772
+ "content": "<|reserved_special_token_91|>",
773
+ "lstrip": false,
774
+ "normalized": false,
775
+ "rstrip": false,
776
+ "single_word": false,
777
+ "special": true
778
+ },
779
+ "128097": {
780
+ "content": "<|reserved_special_token_92|>",
781
+ "lstrip": false,
782
+ "normalized": false,
783
+ "rstrip": false,
784
+ "single_word": false,
785
+ "special": true
786
+ },
787
+ "128098": {
788
+ "content": "<|reserved_special_token_93|>",
789
+ "lstrip": false,
790
+ "normalized": false,
791
+ "rstrip": false,
792
+ "single_word": false,
793
+ "special": true
794
+ },
795
+ "128099": {
796
+ "content": "<|reserved_special_token_94|>",
797
+ "lstrip": false,
798
+ "normalized": false,
799
+ "rstrip": false,
800
+ "single_word": false,
801
+ "special": true
802
+ },
803
+ "128100": {
804
+ "content": "<|reserved_special_token_95|>",
805
+ "lstrip": false,
806
+ "normalized": false,
807
+ "rstrip": false,
808
+ "single_word": false,
809
+ "special": true
810
+ },
811
+ "128101": {
812
+ "content": "<|reserved_special_token_96|>",
813
+ "lstrip": false,
814
+ "normalized": false,
815
+ "rstrip": false,
816
+ "single_word": false,
817
+ "special": true
818
+ },
819
+ "128102": {
820
+ "content": "<|reserved_special_token_97|>",
821
+ "lstrip": false,
822
+ "normalized": false,
823
+ "rstrip": false,
824
+ "single_word": false,
825
+ "special": true
826
+ },
827
+ "128103": {
828
+ "content": "<|reserved_special_token_98|>",
829
+ "lstrip": false,
830
+ "normalized": false,
831
+ "rstrip": false,
832
+ "single_word": false,
833
+ "special": true
834
+ },
835
+ "128104": {
836
+ "content": "<|reserved_special_token_99|>",
837
+ "lstrip": false,
838
+ "normalized": false,
839
+ "rstrip": false,
840
+ "single_word": false,
841
+ "special": true
842
+ },
843
+ "128105": {
844
+ "content": "<|reserved_special_token_100|>",
845
+ "lstrip": false,
846
+ "normalized": false,
847
+ "rstrip": false,
848
+ "single_word": false,
849
+ "special": true
850
+ },
851
+ "128106": {
852
+ "content": "<|reserved_special_token_101|>",
853
+ "lstrip": false,
854
+ "normalized": false,
855
+ "rstrip": false,
856
+ "single_word": false,
857
+ "special": true
858
+ },
859
+ "128107": {
860
+ "content": "<|reserved_special_token_102|>",
861
+ "lstrip": false,
862
+ "normalized": false,
863
+ "rstrip": false,
864
+ "single_word": false,
865
+ "special": true
866
+ },
867
+ "128108": {
868
+ "content": "<|reserved_special_token_103|>",
869
+ "lstrip": false,
870
+ "normalized": false,
871
+ "rstrip": false,
872
+ "single_word": false,
873
+ "special": true
874
+ },
875
+ "128109": {
876
+ "content": "<|reserved_special_token_104|>",
877
+ "lstrip": false,
878
+ "normalized": false,
879
+ "rstrip": false,
880
+ "single_word": false,
881
+ "special": true
882
+ },
883
+ "128110": {
884
+ "content": "<|reserved_special_token_105|>",
885
+ "lstrip": false,
886
+ "normalized": false,
887
+ "rstrip": false,
888
+ "single_word": false,
889
+ "special": true
890
+ },
891
+ "128111": {
892
+ "content": "<|reserved_special_token_106|>",
893
+ "lstrip": false,
894
+ "normalized": false,
895
+ "rstrip": false,
896
+ "single_word": false,
897
+ "special": true
898
+ },
899
+ "128112": {
900
+ "content": "<|reserved_special_token_107|>",
901
+ "lstrip": false,
902
+ "normalized": false,
903
+ "rstrip": false,
904
+ "single_word": false,
905
+ "special": true
906
+ },
907
+ "128113": {
908
+ "content": "<|reserved_special_token_108|>",
909
+ "lstrip": false,
910
+ "normalized": false,
911
+ "rstrip": false,
912
+ "single_word": false,
913
+ "special": true
914
+ },
915
+ "128114": {
916
+ "content": "<|reserved_special_token_109|>",
917
+ "lstrip": false,
918
+ "normalized": false,
919
+ "rstrip": false,
920
+ "single_word": false,
921
+ "special": true
922
+ },
923
+ "128115": {
924
+ "content": "<|reserved_special_token_110|>",
925
+ "lstrip": false,
926
+ "normalized": false,
927
+ "rstrip": false,
928
+ "single_word": false,
929
+ "special": true
930
+ },
931
+ "128116": {
932
+ "content": "<|reserved_special_token_111|>",
933
+ "lstrip": false,
934
+ "normalized": false,
935
+ "rstrip": false,
936
+ "single_word": false,
937
+ "special": true
938
+ },
939
+ "128117": {
940
+ "content": "<|reserved_special_token_112|>",
941
+ "lstrip": false,
942
+ "normalized": false,
943
+ "rstrip": false,
944
+ "single_word": false,
945
+ "special": true
946
+ },
947
+ "128118": {
948
+ "content": "<|reserved_special_token_113|>",
949
+ "lstrip": false,
950
+ "normalized": false,
951
+ "rstrip": false,
952
+ "single_word": false,
953
+ "special": true
954
+ },
955
+ "128119": {
956
+ "content": "<|reserved_special_token_114|>",
957
+ "lstrip": false,
958
+ "normalized": false,
959
+ "rstrip": false,
960
+ "single_word": false,
961
+ "special": true
962
+ },
963
+ "128120": {
964
+ "content": "<|reserved_special_token_115|>",
965
+ "lstrip": false,
966
+ "normalized": false,
967
+ "rstrip": false,
968
+ "single_word": false,
969
+ "special": true
970
+ },
971
+ "128121": {
972
+ "content": "<|reserved_special_token_116|>",
973
+ "lstrip": false,
974
+ "normalized": false,
975
+ "rstrip": false,
976
+ "single_word": false,
977
+ "special": true
978
+ },
979
+ "128122": {
980
+ "content": "<|reserved_special_token_117|>",
981
+ "lstrip": false,
982
+ "normalized": false,
983
+ "rstrip": false,
984
+ "single_word": false,
985
+ "special": true
986
+ },
987
+ "128123": {
988
+ "content": "<|reserved_special_token_118|>",
989
+ "lstrip": false,
990
+ "normalized": false,
991
+ "rstrip": false,
992
+ "single_word": false,
993
+ "special": true
994
+ },
995
+ "128124": {
996
+ "content": "<|reserved_special_token_119|>",
997
+ "lstrip": false,
998
+ "normalized": false,
999
+ "rstrip": false,
1000
+ "single_word": false,
1001
+ "special": true
1002
+ },
1003
+ "128125": {
1004
+ "content": "<|reserved_special_token_120|>",
1005
+ "lstrip": false,
1006
+ "normalized": false,
1007
+ "rstrip": false,
1008
+ "single_word": false,
1009
+ "special": true
1010
+ },
1011
+ "128126": {
1012
+ "content": "<|reserved_special_token_121|>",
1013
+ "lstrip": false,
1014
+ "normalized": false,
1015
+ "rstrip": false,
1016
+ "single_word": false,
1017
+ "special": true
1018
+ },
1019
+ "128127": {
1020
+ "content": "<|reserved_special_token_122|>",
1021
+ "lstrip": false,
1022
+ "normalized": false,
1023
+ "rstrip": false,
1024
+ "single_word": false,
1025
+ "special": true
1026
+ },
1027
+ "128128": {
1028
+ "content": "<|reserved_special_token_123|>",
1029
+ "lstrip": false,
1030
+ "normalized": false,
1031
+ "rstrip": false,
1032
+ "single_word": false,
1033
+ "special": true
1034
+ },
1035
+ "128129": {
1036
+ "content": "<|reserved_special_token_124|>",
1037
+ "lstrip": false,
1038
+ "normalized": false,
1039
+ "rstrip": false,
1040
+ "single_word": false,
1041
+ "special": true
1042
+ },
1043
+ "128130": {
1044
+ "content": "<|reserved_special_token_125|>",
1045
+ "lstrip": false,
1046
+ "normalized": false,
1047
+ "rstrip": false,
1048
+ "single_word": false,
1049
+ "special": true
1050
+ },
1051
+ "128131": {
1052
+ "content": "<|reserved_special_token_126|>",
1053
+ "lstrip": false,
1054
+ "normalized": false,
1055
+ "rstrip": false,
1056
+ "single_word": false,
1057
+ "special": true
1058
+ },
1059
+ "128132": {
1060
+ "content": "<|reserved_special_token_127|>",
1061
+ "lstrip": false,
1062
+ "normalized": false,
1063
+ "rstrip": false,
1064
+ "single_word": false,
1065
+ "special": true
1066
+ },
1067
+ "128133": {
1068
+ "content": "<|reserved_special_token_128|>",
1069
+ "lstrip": false,
1070
+ "normalized": false,
1071
+ "rstrip": false,
1072
+ "single_word": false,
1073
+ "special": true
1074
+ },
1075
+ "128134": {
1076
+ "content": "<|reserved_special_token_129|>",
1077
+ "lstrip": false,
1078
+ "normalized": false,
1079
+ "rstrip": false,
1080
+ "single_word": false,
1081
+ "special": true
1082
+ },
1083
+ "128135": {
1084
+ "content": "<|reserved_special_token_130|>",
1085
+ "lstrip": false,
1086
+ "normalized": false,
1087
+ "rstrip": false,
1088
+ "single_word": false,
1089
+ "special": true
1090
+ },
1091
+ "128136": {
1092
+ "content": "<|reserved_special_token_131|>",
1093
+ "lstrip": false,
1094
+ "normalized": false,
1095
+ "rstrip": false,
1096
+ "single_word": false,
1097
+ "special": true
1098
+ },
1099
+ "128137": {
1100
+ "content": "<|reserved_special_token_132|>",
1101
+ "lstrip": false,
1102
+ "normalized": false,
1103
+ "rstrip": false,
1104
+ "single_word": false,
1105
+ "special": true
1106
+ },
1107
+ "128138": {
1108
+ "content": "<|reserved_special_token_133|>",
1109
+ "lstrip": false,
1110
+ "normalized": false,
1111
+ "rstrip": false,
1112
+ "single_word": false,
1113
+ "special": true
1114
+ },
1115
+ "128139": {
1116
+ "content": "<|reserved_special_token_134|>",
1117
+ "lstrip": false,
1118
+ "normalized": false,
1119
+ "rstrip": false,
1120
+ "single_word": false,
1121
+ "special": true
1122
+ },
1123
+ "128140": {
1124
+ "content": "<|reserved_special_token_135|>",
1125
+ "lstrip": false,
1126
+ "normalized": false,
1127
+ "rstrip": false,
1128
+ "single_word": false,
1129
+ "special": true
1130
+ },
1131
+ "128141": {
1132
+ "content": "<|reserved_special_token_136|>",
1133
+ "lstrip": false,
1134
+ "normalized": false,
1135
+ "rstrip": false,
1136
+ "single_word": false,
1137
+ "special": true
1138
+ },
1139
+ "128142": {
1140
+ "content": "<|reserved_special_token_137|>",
1141
+ "lstrip": false,
1142
+ "normalized": false,
1143
+ "rstrip": false,
1144
+ "single_word": false,
1145
+ "special": true
1146
+ },
1147
+ "128143": {
1148
+ "content": "<|reserved_special_token_138|>",
1149
+ "lstrip": false,
1150
+ "normalized": false,
1151
+ "rstrip": false,
1152
+ "single_word": false,
1153
+ "special": true
1154
+ },
1155
+ "128144": {
1156
+ "content": "<|reserved_special_token_139|>",
1157
+ "lstrip": false,
1158
+ "normalized": false,
1159
+ "rstrip": false,
1160
+ "single_word": false,
1161
+ "special": true
1162
+ },
1163
+ "128145": {
1164
+ "content": "<|reserved_special_token_140|>",
1165
+ "lstrip": false,
1166
+ "normalized": false,
1167
+ "rstrip": false,
1168
+ "single_word": false,
1169
+ "special": true
1170
+ },
1171
+ "128146": {
1172
+ "content": "<|reserved_special_token_141|>",
1173
+ "lstrip": false,
1174
+ "normalized": false,
1175
+ "rstrip": false,
1176
+ "single_word": false,
1177
+ "special": true
1178
+ },
1179
+ "128147": {
1180
+ "content": "<|reserved_special_token_142|>",
1181
+ "lstrip": false,
1182
+ "normalized": false,
1183
+ "rstrip": false,
1184
+ "single_word": false,
1185
+ "special": true
1186
+ },
1187
+ "128148": {
1188
+ "content": "<|reserved_special_token_143|>",
1189
+ "lstrip": false,
1190
+ "normalized": false,
1191
+ "rstrip": false,
1192
+ "single_word": false,
1193
+ "special": true
1194
+ },
1195
+ "128149": {
1196
+ "content": "<|reserved_special_token_144|>",
1197
+ "lstrip": false,
1198
+ "normalized": false,
1199
+ "rstrip": false,
1200
+ "single_word": false,
1201
+ "special": true
1202
+ },
1203
+ "128150": {
1204
+ "content": "<|reserved_special_token_145|>",
1205
+ "lstrip": false,
1206
+ "normalized": false,
1207
+ "rstrip": false,
1208
+ "single_word": false,
1209
+ "special": true
1210
+ },
1211
+ "128151": {
1212
+ "content": "<|reserved_special_token_146|>",
1213
+ "lstrip": false,
1214
+ "normalized": false,
1215
+ "rstrip": false,
1216
+ "single_word": false,
1217
+ "special": true
1218
+ },
1219
+ "128152": {
1220
+ "content": "<|reserved_special_token_147|>",
1221
+ "lstrip": false,
1222
+ "normalized": false,
1223
+ "rstrip": false,
1224
+ "single_word": false,
1225
+ "special": true
1226
+ },
1227
+ "128153": {
1228
+ "content": "<|reserved_special_token_148|>",
1229
+ "lstrip": false,
1230
+ "normalized": false,
1231
+ "rstrip": false,
1232
+ "single_word": false,
1233
+ "special": true
1234
+ },
1235
+ "128154": {
1236
+ "content": "<|reserved_special_token_149|>",
1237
+ "lstrip": false,
1238
+ "normalized": false,
1239
+ "rstrip": false,
1240
+ "single_word": false,
1241
+ "special": true
1242
+ },
1243
+ "128155": {
1244
+ "content": "<|reserved_special_token_150|>",
1245
+ "lstrip": false,
1246
+ "normalized": false,
1247
+ "rstrip": false,
1248
+ "single_word": false,
1249
+ "special": true
1250
+ },
1251
+ "128156": {
1252
+ "content": "<|reserved_special_token_151|>",
1253
+ "lstrip": false,
1254
+ "normalized": false,
1255
+ "rstrip": false,
1256
+ "single_word": false,
1257
+ "special": true
1258
+ },
1259
+ "128157": {
1260
+ "content": "<|reserved_special_token_152|>",
1261
+ "lstrip": false,
1262
+ "normalized": false,
1263
+ "rstrip": false,
1264
+ "single_word": false,
1265
+ "special": true
1266
+ },
1267
+ "128158": {
1268
+ "content": "<|reserved_special_token_153|>",
1269
+ "lstrip": false,
1270
+ "normalized": false,
1271
+ "rstrip": false,
1272
+ "single_word": false,
1273
+ "special": true
1274
+ },
1275
+ "128159": {
1276
+ "content": "<|reserved_special_token_154|>",
1277
+ "lstrip": false,
1278
+ "normalized": false,
1279
+ "rstrip": false,
1280
+ "single_word": false,
1281
+ "special": true
1282
+ },
1283
+ "128160": {
1284
+ "content": "<|reserved_special_token_155|>",
1285
+ "lstrip": false,
1286
+ "normalized": false,
1287
+ "rstrip": false,
1288
+ "single_word": false,
1289
+ "special": true
1290
+ },
1291
+ "128161": {
1292
+ "content": "<|reserved_special_token_156|>",
1293
+ "lstrip": false,
1294
+ "normalized": false,
1295
+ "rstrip": false,
1296
+ "single_word": false,
1297
+ "special": true
1298
+ },
1299
+ "128162": {
1300
+ "content": "<|reserved_special_token_157|>",
1301
+ "lstrip": false,
1302
+ "normalized": false,
1303
+ "rstrip": false,
1304
+ "single_word": false,
1305
+ "special": true
1306
+ },
1307
+ "128163": {
1308
+ "content": "<|reserved_special_token_158|>",
1309
+ "lstrip": false,
1310
+ "normalized": false,
1311
+ "rstrip": false,
1312
+ "single_word": false,
1313
+ "special": true
1314
+ },
1315
+ "128164": {
1316
+ "content": "<|reserved_special_token_159|>",
1317
+ "lstrip": false,
1318
+ "normalized": false,
1319
+ "rstrip": false,
1320
+ "single_word": false,
1321
+ "special": true
1322
+ },
1323
+ "128165": {
1324
+ "content": "<|reserved_special_token_160|>",
1325
+ "lstrip": false,
1326
+ "normalized": false,
1327
+ "rstrip": false,
1328
+ "single_word": false,
1329
+ "special": true
1330
+ },
1331
+ "128166": {
1332
+ "content": "<|reserved_special_token_161|>",
1333
+ "lstrip": false,
1334
+ "normalized": false,
1335
+ "rstrip": false,
1336
+ "single_word": false,
1337
+ "special": true
1338
+ },
1339
+ "128167": {
1340
+ "content": "<|reserved_special_token_162|>",
1341
+ "lstrip": false,
1342
+ "normalized": false,
1343
+ "rstrip": false,
1344
+ "single_word": false,
1345
+ "special": true
1346
+ },
1347
+ "128168": {
1348
+ "content": "<|reserved_special_token_163|>",
1349
+ "lstrip": false,
1350
+ "normalized": false,
1351
+ "rstrip": false,
1352
+ "single_word": false,
1353
+ "special": true
1354
+ },
1355
+ "128169": {
1356
+ "content": "<|reserved_special_token_164|>",
1357
+ "lstrip": false,
1358
+ "normalized": false,
1359
+ "rstrip": false,
1360
+ "single_word": false,
1361
+ "special": true
1362
+ },
1363
+ "128170": {
1364
+ "content": "<|reserved_special_token_165|>",
1365
+ "lstrip": false,
1366
+ "normalized": false,
1367
+ "rstrip": false,
1368
+ "single_word": false,
1369
+ "special": true
1370
+ },
1371
+ "128171": {
1372
+ "content": "<|reserved_special_token_166|>",
1373
+ "lstrip": false,
1374
+ "normalized": false,
1375
+ "rstrip": false,
1376
+ "single_word": false,
1377
+ "special": true
1378
+ },
1379
+ "128172": {
1380
+ "content": "<|reserved_special_token_167|>",
1381
+ "lstrip": false,
1382
+ "normalized": false,
1383
+ "rstrip": false,
1384
+ "single_word": false,
1385
+ "special": true
1386
+ },
1387
+ "128173": {
1388
+ "content": "<|reserved_special_token_168|>",
1389
+ "lstrip": false,
1390
+ "normalized": false,
1391
+ "rstrip": false,
1392
+ "single_word": false,
1393
+ "special": true
1394
+ },
1395
+ "128174": {
1396
+ "content": "<|reserved_special_token_169|>",
1397
+ "lstrip": false,
1398
+ "normalized": false,
1399
+ "rstrip": false,
1400
+ "single_word": false,
1401
+ "special": true
1402
+ },
1403
+ "128175": {
1404
+ "content": "<|reserved_special_token_170|>",
1405
+ "lstrip": false,
1406
+ "normalized": false,
1407
+ "rstrip": false,
1408
+ "single_word": false,
1409
+ "special": true
1410
+ },
1411
+ "128176": {
1412
+ "content": "<|reserved_special_token_171|>",
1413
+ "lstrip": false,
1414
+ "normalized": false,
1415
+ "rstrip": false,
1416
+ "single_word": false,
1417
+ "special": true
1418
+ },
1419
+ "128177": {
1420
+ "content": "<|reserved_special_token_172|>",
1421
+ "lstrip": false,
1422
+ "normalized": false,
1423
+ "rstrip": false,
1424
+ "single_word": false,
1425
+ "special": true
1426
+ },
1427
+ "128178": {
1428
+ "content": "<|reserved_special_token_173|>",
1429
+ "lstrip": false,
1430
+ "normalized": false,
1431
+ "rstrip": false,
1432
+ "single_word": false,
1433
+ "special": true
1434
+ },
1435
+ "128179": {
1436
+ "content": "<|reserved_special_token_174|>",
1437
+ "lstrip": false,
1438
+ "normalized": false,
1439
+ "rstrip": false,
1440
+ "single_word": false,
1441
+ "special": true
1442
+ },
1443
+ "128180": {
1444
+ "content": "<|reserved_special_token_175|>",
1445
+ "lstrip": false,
1446
+ "normalized": false,
1447
+ "rstrip": false,
1448
+ "single_word": false,
1449
+ "special": true
1450
+ },
1451
+ "128181": {
1452
+ "content": "<|reserved_special_token_176|>",
1453
+ "lstrip": false,
1454
+ "normalized": false,
1455
+ "rstrip": false,
1456
+ "single_word": false,
1457
+ "special": true
1458
+ },
1459
+ "128182": {
1460
+ "content": "<|reserved_special_token_177|>",
1461
+ "lstrip": false,
1462
+ "normalized": false,
1463
+ "rstrip": false,
1464
+ "single_word": false,
1465
+ "special": true
1466
+ },
1467
+ "128183": {
1468
+ "content": "<|reserved_special_token_178|>",
1469
+ "lstrip": false,
1470
+ "normalized": false,
1471
+ "rstrip": false,
1472
+ "single_word": false,
1473
+ "special": true
1474
+ },
1475
+ "128184": {
1476
+ "content": "<|reserved_special_token_179|>",
1477
+ "lstrip": false,
1478
+ "normalized": false,
1479
+ "rstrip": false,
1480
+ "single_word": false,
1481
+ "special": true
1482
+ },
1483
+ "128185": {
1484
+ "content": "<|reserved_special_token_180|>",
1485
+ "lstrip": false,
1486
+ "normalized": false,
1487
+ "rstrip": false,
1488
+ "single_word": false,
1489
+ "special": true
1490
+ },
1491
+ "128186": {
1492
+ "content": "<|reserved_special_token_181|>",
1493
+ "lstrip": false,
1494
+ "normalized": false,
1495
+ "rstrip": false,
1496
+ "single_word": false,
1497
+ "special": true
1498
+ },
1499
+ "128187": {
1500
+ "content": "<|reserved_special_token_182|>",
1501
+ "lstrip": false,
1502
+ "normalized": false,
1503
+ "rstrip": false,
1504
+ "single_word": false,
1505
+ "special": true
1506
+ },
1507
+ "128188": {
1508
+ "content": "<|reserved_special_token_183|>",
1509
+ "lstrip": false,
1510
+ "normalized": false,
1511
+ "rstrip": false,
1512
+ "single_word": false,
1513
+ "special": true
1514
+ },
1515
+ "128189": {
1516
+ "content": "<|reserved_special_token_184|>",
1517
+ "lstrip": false,
1518
+ "normalized": false,
1519
+ "rstrip": false,
1520
+ "single_word": false,
1521
+ "special": true
1522
+ },
1523
+ "128190": {
1524
+ "content": "<|reserved_special_token_185|>",
1525
+ "lstrip": false,
1526
+ "normalized": false,
1527
+ "rstrip": false,
1528
+ "single_word": false,
1529
+ "special": true
1530
+ },
1531
+ "128191": {
1532
+ "content": "<|reserved_special_token_186|>",
1533
+ "lstrip": false,
1534
+ "normalized": false,
1535
+ "rstrip": false,
1536
+ "single_word": false,
1537
+ "special": true
1538
+ },
1539
+ "128192": {
1540
+ "content": "<|reserved_special_token_187|>",
1541
+ "lstrip": false,
1542
+ "normalized": false,
1543
+ "rstrip": false,
1544
+ "single_word": false,
1545
+ "special": true
1546
+ },
1547
+ "128193": {
1548
+ "content": "<|reserved_special_token_188|>",
1549
+ "lstrip": false,
1550
+ "normalized": false,
1551
+ "rstrip": false,
1552
+ "single_word": false,
1553
+ "special": true
1554
+ },
1555
+ "128194": {
1556
+ "content": "<|reserved_special_token_189|>",
1557
+ "lstrip": false,
1558
+ "normalized": false,
1559
+ "rstrip": false,
1560
+ "single_word": false,
1561
+ "special": true
1562
+ },
1563
+ "128195": {
1564
+ "content": "<|reserved_special_token_190|>",
1565
+ "lstrip": false,
1566
+ "normalized": false,
1567
+ "rstrip": false,
1568
+ "single_word": false,
1569
+ "special": true
1570
+ },
1571
+ "128196": {
1572
+ "content": "<|reserved_special_token_191|>",
1573
+ "lstrip": false,
1574
+ "normalized": false,
1575
+ "rstrip": false,
1576
+ "single_word": false,
1577
+ "special": true
1578
+ },
1579
+ "128197": {
1580
+ "content": "<|reserved_special_token_192|>",
1581
+ "lstrip": false,
1582
+ "normalized": false,
1583
+ "rstrip": false,
1584
+ "single_word": false,
1585
+ "special": true
1586
+ },
1587
+ "128198": {
1588
+ "content": "<|reserved_special_token_193|>",
1589
+ "lstrip": false,
1590
+ "normalized": false,
1591
+ "rstrip": false,
1592
+ "single_word": false,
1593
+ "special": true
1594
+ },
1595
+ "128199": {
1596
+ "content": "<|reserved_special_token_194|>",
1597
+ "lstrip": false,
1598
+ "normalized": false,
1599
+ "rstrip": false,
1600
+ "single_word": false,
1601
+ "special": true
1602
+ },
1603
+ "128200": {
1604
+ "content": "<|reserved_special_token_195|>",
1605
+ "lstrip": false,
1606
+ "normalized": false,
1607
+ "rstrip": false,
1608
+ "single_word": false,
1609
+ "special": true
1610
+ },
1611
+ "128201": {
1612
+ "content": "<|reserved_special_token_196|>",
1613
+ "lstrip": false,
1614
+ "normalized": false,
1615
+ "rstrip": false,
1616
+ "single_word": false,
1617
+ "special": true
1618
+ },
1619
+ "128202": {
1620
+ "content": "<|reserved_special_token_197|>",
1621
+ "lstrip": false,
1622
+ "normalized": false,
1623
+ "rstrip": false,
1624
+ "single_word": false,
1625
+ "special": true
1626
+ },
1627
+ "128203": {
1628
+ "content": "<|reserved_special_token_198|>",
1629
+ "lstrip": false,
1630
+ "normalized": false,
1631
+ "rstrip": false,
1632
+ "single_word": false,
1633
+ "special": true
1634
+ },
1635
+ "128204": {
1636
+ "content": "<|reserved_special_token_199|>",
1637
+ "lstrip": false,
1638
+ "normalized": false,
1639
+ "rstrip": false,
1640
+ "single_word": false,
1641
+ "special": true
1642
+ },
1643
+ "128205": {
1644
+ "content": "<|reserved_special_token_200|>",
1645
+ "lstrip": false,
1646
+ "normalized": false,
1647
+ "rstrip": false,
1648
+ "single_word": false,
1649
+ "special": true
1650
+ },
1651
+ "128206": {
1652
+ "content": "<|reserved_special_token_201|>",
1653
+ "lstrip": false,
1654
+ "normalized": false,
1655
+ "rstrip": false,
1656
+ "single_word": false,
1657
+ "special": true
1658
+ },
1659
+ "128207": {
1660
+ "content": "<|reserved_special_token_202|>",
1661
+ "lstrip": false,
1662
+ "normalized": false,
1663
+ "rstrip": false,
1664
+ "single_word": false,
1665
+ "special": true
1666
+ },
1667
+ "128208": {
1668
+ "content": "<|reserved_special_token_203|>",
1669
+ "lstrip": false,
1670
+ "normalized": false,
1671
+ "rstrip": false,
1672
+ "single_word": false,
1673
+ "special": true
1674
+ },
1675
+ "128209": {
1676
+ "content": "<|reserved_special_token_204|>",
1677
+ "lstrip": false,
1678
+ "normalized": false,
1679
+ "rstrip": false,
1680
+ "single_word": false,
1681
+ "special": true
1682
+ },
1683
+ "128210": {
1684
+ "content": "<|reserved_special_token_205|>",
1685
+ "lstrip": false,
1686
+ "normalized": false,
1687
+ "rstrip": false,
1688
+ "single_word": false,
1689
+ "special": true
1690
+ },
1691
+ "128211": {
1692
+ "content": "<|reserved_special_token_206|>",
1693
+ "lstrip": false,
1694
+ "normalized": false,
1695
+ "rstrip": false,
1696
+ "single_word": false,
1697
+ "special": true
1698
+ },
1699
+ "128212": {
1700
+ "content": "<|reserved_special_token_207|>",
1701
+ "lstrip": false,
1702
+ "normalized": false,
1703
+ "rstrip": false,
1704
+ "single_word": false,
1705
+ "special": true
1706
+ },
1707
+ "128213": {
1708
+ "content": "<|reserved_special_token_208|>",
1709
+ "lstrip": false,
1710
+ "normalized": false,
1711
+ "rstrip": false,
1712
+ "single_word": false,
1713
+ "special": true
1714
+ },
1715
+ "128214": {
1716
+ "content": "<|reserved_special_token_209|>",
1717
+ "lstrip": false,
1718
+ "normalized": false,
1719
+ "rstrip": false,
1720
+ "single_word": false,
1721
+ "special": true
1722
+ },
1723
+ "128215": {
1724
+ "content": "<|reserved_special_token_210|>",
1725
+ "lstrip": false,
1726
+ "normalized": false,
1727
+ "rstrip": false,
1728
+ "single_word": false,
1729
+ "special": true
1730
+ },
1731
+ "128216": {
1732
+ "content": "<|reserved_special_token_211|>",
1733
+ "lstrip": false,
1734
+ "normalized": false,
1735
+ "rstrip": false,
1736
+ "single_word": false,
1737
+ "special": true
1738
+ },
1739
+ "128217": {
1740
+ "content": "<|reserved_special_token_212|>",
1741
+ "lstrip": false,
1742
+ "normalized": false,
1743
+ "rstrip": false,
1744
+ "single_word": false,
1745
+ "special": true
1746
+ },
1747
+ "128218": {
1748
+ "content": "<|reserved_special_token_213|>",
1749
+ "lstrip": false,
1750
+ "normalized": false,
1751
+ "rstrip": false,
1752
+ "single_word": false,
1753
+ "special": true
1754
+ },
1755
+ "128219": {
1756
+ "content": "<|reserved_special_token_214|>",
1757
+ "lstrip": false,
1758
+ "normalized": false,
1759
+ "rstrip": false,
1760
+ "single_word": false,
1761
+ "special": true
1762
+ },
1763
+ "128220": {
1764
+ "content": "<|reserved_special_token_215|>",
1765
+ "lstrip": false,
1766
+ "normalized": false,
1767
+ "rstrip": false,
1768
+ "single_word": false,
1769
+ "special": true
1770
+ },
1771
+ "128221": {
1772
+ "content": "<|reserved_special_token_216|>",
1773
+ "lstrip": false,
1774
+ "normalized": false,
1775
+ "rstrip": false,
1776
+ "single_word": false,
1777
+ "special": true
1778
+ },
1779
+ "128222": {
1780
+ "content": "<|reserved_special_token_217|>",
1781
+ "lstrip": false,
1782
+ "normalized": false,
1783
+ "rstrip": false,
1784
+ "single_word": false,
1785
+ "special": true
1786
+ },
1787
+ "128223": {
1788
+ "content": "<|reserved_special_token_218|>",
1789
+ "lstrip": false,
1790
+ "normalized": false,
1791
+ "rstrip": false,
1792
+ "single_word": false,
1793
+ "special": true
1794
+ },
1795
+ "128224": {
1796
+ "content": "<|reserved_special_token_219|>",
1797
+ "lstrip": false,
1798
+ "normalized": false,
1799
+ "rstrip": false,
1800
+ "single_word": false,
1801
+ "special": true
1802
+ },
1803
+ "128225": {
1804
+ "content": "<|reserved_special_token_220|>",
1805
+ "lstrip": false,
1806
+ "normalized": false,
1807
+ "rstrip": false,
1808
+ "single_word": false,
1809
+ "special": true
1810
+ },
1811
+ "128226": {
1812
+ "content": "<|reserved_special_token_221|>",
1813
+ "lstrip": false,
1814
+ "normalized": false,
1815
+ "rstrip": false,
1816
+ "single_word": false,
1817
+ "special": true
1818
+ },
1819
+ "128227": {
1820
+ "content": "<|reserved_special_token_222|>",
1821
+ "lstrip": false,
1822
+ "normalized": false,
1823
+ "rstrip": false,
1824
+ "single_word": false,
1825
+ "special": true
1826
+ },
1827
+ "128228": {
1828
+ "content": "<|reserved_special_token_223|>",
1829
+ "lstrip": false,
1830
+ "normalized": false,
1831
+ "rstrip": false,
1832
+ "single_word": false,
1833
+ "special": true
1834
+ },
1835
+ "128229": {
1836
+ "content": "<|reserved_special_token_224|>",
1837
+ "lstrip": false,
1838
+ "normalized": false,
1839
+ "rstrip": false,
1840
+ "single_word": false,
1841
+ "special": true
1842
+ },
1843
+ "128230": {
1844
+ "content": "<|reserved_special_token_225|>",
1845
+ "lstrip": false,
1846
+ "normalized": false,
1847
+ "rstrip": false,
1848
+ "single_word": false,
1849
+ "special": true
1850
+ },
1851
+ "128231": {
1852
+ "content": "<|reserved_special_token_226|>",
1853
+ "lstrip": false,
1854
+ "normalized": false,
1855
+ "rstrip": false,
1856
+ "single_word": false,
1857
+ "special": true
1858
+ },
1859
+ "128232": {
1860
+ "content": "<|reserved_special_token_227|>",
1861
+ "lstrip": false,
1862
+ "normalized": false,
1863
+ "rstrip": false,
1864
+ "single_word": false,
1865
+ "special": true
1866
+ },
1867
+ "128233": {
1868
+ "content": "<|reserved_special_token_228|>",
1869
+ "lstrip": false,
1870
+ "normalized": false,
1871
+ "rstrip": false,
1872
+ "single_word": false,
1873
+ "special": true
1874
+ },
1875
+ "128234": {
1876
+ "content": "<|reserved_special_token_229|>",
1877
+ "lstrip": false,
1878
+ "normalized": false,
1879
+ "rstrip": false,
1880
+ "single_word": false,
1881
+ "special": true
1882
+ },
1883
+ "128235": {
1884
+ "content": "<|reserved_special_token_230|>",
1885
+ "lstrip": false,
1886
+ "normalized": false,
1887
+ "rstrip": false,
1888
+ "single_word": false,
1889
+ "special": true
1890
+ },
1891
+ "128236": {
1892
+ "content": "<|reserved_special_token_231|>",
1893
+ "lstrip": false,
1894
+ "normalized": false,
1895
+ "rstrip": false,
1896
+ "single_word": false,
1897
+ "special": true
1898
+ },
1899
+ "128237": {
1900
+ "content": "<|reserved_special_token_232|>",
1901
+ "lstrip": false,
1902
+ "normalized": false,
1903
+ "rstrip": false,
1904
+ "single_word": false,
1905
+ "special": true
1906
+ },
1907
+ "128238": {
1908
+ "content": "<|reserved_special_token_233|>",
1909
+ "lstrip": false,
1910
+ "normalized": false,
1911
+ "rstrip": false,
1912
+ "single_word": false,
1913
+ "special": true
1914
+ },
1915
+ "128239": {
1916
+ "content": "<|reserved_special_token_234|>",
1917
+ "lstrip": false,
1918
+ "normalized": false,
1919
+ "rstrip": false,
1920
+ "single_word": false,
1921
+ "special": true
1922
+ },
1923
+ "128240": {
1924
+ "content": "<|reserved_special_token_235|>",
1925
+ "lstrip": false,
1926
+ "normalized": false,
1927
+ "rstrip": false,
1928
+ "single_word": false,
1929
+ "special": true
1930
+ },
1931
+ "128241": {
1932
+ "content": "<|reserved_special_token_236|>",
1933
+ "lstrip": false,
1934
+ "normalized": false,
1935
+ "rstrip": false,
1936
+ "single_word": false,
1937
+ "special": true
1938
+ },
1939
+ "128242": {
1940
+ "content": "<|reserved_special_token_237|>",
1941
+ "lstrip": false,
1942
+ "normalized": false,
1943
+ "rstrip": false,
1944
+ "single_word": false,
1945
+ "special": true
1946
+ },
1947
+ "128243": {
1948
+ "content": "<|reserved_special_token_238|>",
1949
+ "lstrip": false,
1950
+ "normalized": false,
1951
+ "rstrip": false,
1952
+ "single_word": false,
1953
+ "special": true
1954
+ },
1955
+ "128244": {
1956
+ "content": "<|reserved_special_token_239|>",
1957
+ "lstrip": false,
1958
+ "normalized": false,
1959
+ "rstrip": false,
1960
+ "single_word": false,
1961
+ "special": true
1962
+ },
1963
+ "128245": {
1964
+ "content": "<|reserved_special_token_240|>",
1965
+ "lstrip": false,
1966
+ "normalized": false,
1967
+ "rstrip": false,
1968
+ "single_word": false,
1969
+ "special": true
1970
+ },
1971
+ "128246": {
1972
+ "content": "<|reserved_special_token_241|>",
1973
+ "lstrip": false,
1974
+ "normalized": false,
1975
+ "rstrip": false,
1976
+ "single_word": false,
1977
+ "special": true
1978
+ },
1979
+ "128247": {
1980
+ "content": "<|reserved_special_token_242|>",
1981
+ "lstrip": false,
1982
+ "normalized": false,
1983
+ "rstrip": false,
1984
+ "single_word": false,
1985
+ "special": true
1986
+ },
1987
+ "128248": {
1988
+ "content": "<|reserved_special_token_243|>",
1989
+ "lstrip": false,
1990
+ "normalized": false,
1991
+ "rstrip": false,
1992
+ "single_word": false,
1993
+ "special": true
1994
+ },
1995
+ "128249": {
1996
+ "content": "<|reserved_special_token_244|>",
1997
+ "lstrip": false,
1998
+ "normalized": false,
1999
+ "rstrip": false,
2000
+ "single_word": false,
2001
+ "special": true
2002
+ },
2003
+ "128250": {
2004
+ "content": "<|reserved_special_token_245|>",
2005
+ "lstrip": false,
2006
+ "normalized": false,
2007
+ "rstrip": false,
2008
+ "single_word": false,
2009
+ "special": true
2010
+ },
2011
+ "128251": {
2012
+ "content": "<|reserved_special_token_246|>",
2013
+ "lstrip": false,
2014
+ "normalized": false,
2015
+ "rstrip": false,
2016
+ "single_word": false,
2017
+ "special": true
2018
+ },
2019
+ "128252": {
2020
+ "content": "<|reserved_special_token_247|>",
2021
+ "lstrip": false,
2022
+ "normalized": false,
2023
+ "rstrip": false,
2024
+ "single_word": false,
2025
+ "special": true
2026
+ },
2027
+ "128253": {
2028
+ "content": "<|reserved_special_token_248|>",
2029
+ "lstrip": false,
2030
+ "normalized": false,
2031
+ "rstrip": false,
2032
+ "single_word": false,
2033
+ "special": true
2034
+ },
2035
+ "128254": {
2036
+ "content": "<|reserved_special_token_249|>",
2037
+ "lstrip": false,
2038
+ "normalized": false,
2039
+ "rstrip": false,
2040
+ "single_word": false,
2041
+ "special": true
2042
+ },
2043
+ "128255": {
2044
+ "content": "<|reserved_special_token_250|>",
2045
+ "lstrip": false,
2046
+ "normalized": false,
2047
+ "rstrip": false,
2048
+ "single_word": false,
2049
+ "special": true
+ }
+ },
+ "auto_map": {
+ "AutoTokenizer": [
+ "tokenization_minicpmv_fast.MiniCPMVTokenizerFast",
+ null
+ ]
+ },
+ "bos_token": "<|begin_of_text|>",
+ "chat_template": "{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}",
+ "clean_up_tokenization_spaces": true,
+ "eos_token": "<|end_of_text|>",
+ "model_input_names": [
+ "input_ids",
+ "attention_mask"
+ ],
+ "model_max_length": 1000000000000000019884624838656,
+ "pad_token": "!",
+ "padding_side": "right",
+ "tokenizer_class": "MiniCPMVTokenizerFast",
+ "truncation_side": "right",
+ "unk_token": "<unk>"
+ }
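
The `chat_template` in this `tokenizer_config.json` is the Llama-3-style header/EOT format: each message is wrapped in `<|start_header_id|>role<|end_header_id|>` and `<|eot_id|>`, the BOS token is prepended to the first message, and an assistant header is always appended at the end. Below is a minimal usage sketch, assuming the `transformers` library and the `openbmb/MiniCPM-Llama3-V-2_5` repository this config ships with; it is illustrative only and not part of the uploaded file.

```python
# Minimal sketch: rendering the chat_template defined in tokenizer_config.json.
# Assumes `transformers` is installed; trust_remote_code is needed because
# auto_map points at the custom MiniCPMVTokenizerFast class.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "openbmb/MiniCPM-Llama3-V-2_5", trust_remote_code=True
)

messages = [{"role": "user", "content": "What is in the image?"}]

# tokenize=False returns the rendered prompt string instead of token IDs.
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
# <|begin_of_text|><|start_header_id|>user<|end_header_id|>
#
# What is in the image?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```

Because the template appends the assistant header unconditionally, the rendered prompt always ends ready for generation; the reserved special tokens (`<|reserved_special_token_*|>`) listed above are part of the Llama 3 vocabulary and are not produced by this template.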