jklj077 committed on
Commit aa1b431
1 Parent(s): 605f111

Update README.md

Files changed (1)
  1. README.md +2 -44
README.md CHANGED
@@ -79,50 +79,8 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 
  To handle extensive inputs exceeding 32,768 tokens, we utilize [YARN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
 
- For deployment, we recommend using vLLM. You can enable the long-context capabilities by following these steps:
-
- 1. **Install vLLM**: You can install vLLM by running the following command.
-
- ```bash
- pip install "vllm>=0.4.3"
- ```
-
- Or you can install vLLM from [source](https://github.com/vllm-project/vllm/).
- 2. **Configure Model Settings**: After downloading the model weights, modify the `config.json` file by including the below snippet:
- ```json
- {
-   "architectures": [
-     "Qwen2ForCausalLM"
-   ],
-   // ...
-   "vocab_size": 152064,
-   // adding the following snippets
-   "rope_scaling": {
-     "factor": 4.0,
-     "original_max_position_embeddings": 32768,
-     "type": "yarn"
-   }
- }
- ```
- This snippet enable YARN to support longer contexts.
- 3. **Model Deployment**: Utilize vLLM to deploy your model. For instance, you can set up an openAI-like server using the command:
-
- ```bash
- python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2.5-7B-Instruct --model path/to/weights
- ```
- Then you can access the Chat API by:
- ```bash
- curl http://localhost:8000/v1/chat/completions \
-   -H "Content-Type: application/json" \
-   -d '{
-     "model": "Qwen2.5-7B-Instruct",
-     "messages": [
-       {"role": "system", "content": "You are a helpful assistant."},
-       {"role": "user", "content": "Your Long Input Here."}
-     ]
-   }'
- ```
- For further usage instructions of vLLM, please refer to our [Github](https://github.com/QwenLM/Qwen2.5).
+ For deployment, we recommend using vLLM. Please refer to our [Github](https://github.com/QwenLM/Qwen2.5) for usage if you are not familiar with vLLM.
+
  **Note**: Presently, vLLM only supports static YARN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**. We advise adding the `rope_scaling` configuration only when processing long contexts is required.
 
  ## Evaluation & Performance
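For quick reference, the `rope_scaling` block that the note refers to is sketched below; it mirrors the `config.json` snippet from the removed section, and the linked [Github](https://github.com/QwenLM/Qwen2.5) instructions remain the authoritative source.

```json
{
  "architectures": [
    "Qwen2ForCausalLM"
  ],
  // ...
  "vocab_size": 152064,
  // add this block only when long-context processing is required
  "rope_scaling": {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn"
  }
}
```

As the note states, this is advisable only for inputs beyond 32,768 tokens, since static YARN can degrade performance on shorter texts.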