Update README.md
README.md CHANGED
@@ -85,56 +85,7 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
To handle extensive inputs exceeding 32,768 tokens, we utilize [YARN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.

-For deployment, we recommend using vLLM.
-
-1. **Install vLLM**: You can install vLLM by running the following command.
-
-```bash
-pip install "vllm>=0.4.3"
-```
-
-Or you can install vLLM from [source](https://github.com/vllm-project/vllm/).
-
-2. **Configure Model Settings**: After downloading the model weights, modify the `config.json` file by including the snippet below:
-```json
-{
-    "architectures": [
-        "Qwen2ForCausalLM"
-    ],
-    // ...
-    "vocab_size": 152064,
-
-    // adding the following snippets
-    "rope_scaling": {
-        "factor": 4.0,
-        "original_max_position_embeddings": 32768,
-        "type": "yarn"
-    }
-}
-```
-This snippet enables YARN to support longer contexts.
-
-3. **Model Deployment**: Utilize vLLM to deploy your model. For instance, you can set up an OpenAI-like server using the command:
-
-```bash
-python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-32B-Instruct --model path/to/weights
-```
-
-Then you can access the Chat API by:
-
-```bash
-curl http://localhost:8000/v1/chat/completions \
-    -H "Content-Type: application/json" \
-    -d '{
-    "model": "Qwen2-32B-Instruct",
-    "messages": [
-        {"role": "system", "content": "You are a helpful assistant."},
-        {"role": "user", "content": "Your Long Input Here."}
-    ]
-    }'
-```
-
-For further usage instructions of vLLM, please refer to our [Github](https://github.com/QwenLM/Qwen2).
+For deployment, we recommend using vLLM. Please refer to our [Github](https://github.com/QwenLM/Qwen2.5) for usage if you are not familiar with vLLM.

**Note**: Presently, vLLM only supports static YARN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**. We advise adding the `rope_scaling` configuration only when processing long contexts is required.
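
Because JSON itself does not allow comments, the `// ...` lines in the removed `config.json` snippet are placeholders rather than literal content. Below is a minimal sketch of applying the same `rope_scaling` change programmatically; the checkpoint directory path is a placeholder for wherever the weights were downloaded.

```python
import json
from pathlib import Path

# Placeholder path to the downloaded checkpoint directory.
config_path = Path("path/to/weights") / "config.json"

config = json.loads(config_path.read_text())

# Add the YARN rope-scaling entry from the (removed) README snippet.
config["rope_scaling"] = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}

config_path.write_text(json.dumps(config, indent=2))
print(f"rope_scaling written to {config_path}")
```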
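
Besides `curl`, the OpenAI-compatible server shown in the removed deployment step can be queried with the `openai` Python client. A minimal sketch, assuming the server is running locally on port 8000 and serves the model under the name `Qwen2-32B-Instruct`:

```python
from openai import OpenAI

# vLLM's OpenAI-compatible server only enforces an API key if one was
# configured at startup, so a placeholder value is fine here.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen2-32B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Your Long Input Here."},
    ],
)
print(response.choices[0].message.content)
```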
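
If a standalone server is not needed, vLLM can also be driven directly from Python for offline generation. A rough sketch, assuming a local checkpoint directory; the path and sampling values below are illustrative and not taken from the README.

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_path = "path/to/weights"  # placeholder checkpoint directory

# Use the model's chat template to turn messages into a single prompt string.
tokenizer = AutoTokenizer.from_pretrained(model_path)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Your Long Input Here."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

llm = LLM(model=model_path)
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=512)
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```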