Update README.md
README.md
CHANGED
@@ -87,56 +87,7 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
To handle extensive inputs exceeding 32,768 tokens, we utilize [YARN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.

**Removed:**

For deployment, we recommend using vLLM.
1. **Install vLLM**: You can install vLLM by running the following command.

   ```bash
   pip install "vllm>=0.4.3"
   ```

   Or you can install vLLM from [source](https://github.com/vllm-project/vllm/).
2. **Configure Model Settings**: After downloading the model weights, modify the `config.json` file by including the snippet below:

   ```json
   {
     "architectures": [
       "Qwen2ForCausalLM"
     ],
     // ...
     "vocab_size": 152064,

     // adding the following snippets
     "rope_scaling": {
       "factor": 4.0,
       "original_max_position_embeddings": 32768,
       "type": "yarn"
     }
   }
   ```

   This snippet enables YARN to support longer contexts.
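The step-2 edit can also be applied programmatically rather than by hand. A minimal sketch, assuming the config has already been loaded as a plain dict (the helper name `add_yarn_scaling` is hypothetical; note that standard `json` parsing would reject the illustrative `// ...` comments shown above):

```python
import json

def add_yarn_scaling(config: dict, factor: float = 4.0,
                     original_max: int = 32768) -> dict:
    """Insert the YARN rope_scaling entry from step 2 into a config dict."""
    config["rope_scaling"] = {
        "factor": factor,
        "original_max_position_embeddings": original_max,
        "type": "yarn",
    }
    return config

# Typical usage: load config.json, patch it, then write it back out.
config = json.loads('{"architectures": ["Qwen2ForCausalLM"], "vocab_size": 152064}')
patched = add_yarn_scaling(config)
```

Writing `json.dumps(patched, indent=2)` back to `config.json` completes the step.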
118 |
-
|
119 |
-
3. **Model Deployment**: Utilize vLLM to deploy your model. For instance, you can set up an openAI-like server using the command:
|
120 |
-
|
121 |
-
```bash
|
122 |
-
python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-72B-Instruct --model path/to/weights
|
123 |
-
```
|
124 |
-
|
125 |
-
Then you can access the Chat API by:
|
126 |
-
|
127 |
-
```bash
|
128 |
-
curl http://localhost:8000/v1/chat/completions \
|
129 |
-
-H "Content-Type: application/json" \
|
130 |
-
-d '{
|
131 |
-
"model": "Qwen2-72B-Instruct",
|
132 |
-
"messages": [
|
133 |
-
{"role": "system", "content": "You are a helpful assistant."},
|
134 |
-
{"role": "user", "content": "Your Long Input Here."}
|
135 |
-
]
|
136 |
-
}'
|
137 |
-
```
|
138 |
-
|
139 |
-
For further usage instructions of vLLM, please refer to our [Github](https://github.com/QwenLM/Qwen2).
|
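The same request can be issued from Python instead of `curl`. A minimal sketch using only the standard library, which builds the identical JSON body for the server started in step 3 (the helper name `build_chat_request` is hypothetical; actually sending the request of course requires the server to be running):

```python
import json
import urllib.request

def build_chat_request(user_content: str,
                       url: str = "http://localhost:8000/v1/chat/completions"):
    """Build the same Chat Completions request as the curl example above."""
    payload = {
        "model": "Qwen2-72B-Instruct",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_content},
        ],
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("Your Long Input Here.")
# urllib.request.urlopen(req) sends it once the server is up.
```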
**Added:**

For deployment, we recommend using vLLM. Please refer to our [Github](https://github.com/QwenLM/Qwen2.5) for usage if you are not familiar with vLLM.
**Note**: Presently, vLLM only supports static YARN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**. We advise adding the `rope_scaling` configuration only when processing long contexts is required.
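Because the factor is static, it is typically chosen as the target context length divided by `original_max_position_embeddings`. A quick sanity check of the values used in step 2:

```python
# With the step-2 values, static YARN extends the usable context
# to roughly factor * original_max_position_embeddings tokens.
original_max_position_embeddings = 32768
factor = 4.0
extended_context = int(factor * original_max_position_embeddings)  # 131072
```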