update readme
README.md (changed)
@@ -125,6 +125,8 @@ Or you can launch an OpenAI compatible server with the following command:
 lmdeploy serve api_server internlm/internlm2-chat-7b --model-name internlm2-chat-7b --server-port 23333
 ```
 
+Then you can send a chat request to the server:
+
 ```bash
 curl http://localhost:23333/v1/chat/completions \
   -H "Content-Type: application/json" \
@@ -151,6 +153,8 @@ pip install vllm
 python -m vllm.entrypoints.openai.api_server --model internlm/internlm2-chat-7b --served-model-name internlm2-chat-7b --trust-remote-code
 ```
 
+Then you can send a chat request to the server:
+
 ```bash
 curl http://localhost:8000/v1/chat/completions \
   -H "Content-Type: application/json" \
@@ -272,6 +276,20 @@ print(response)
 lmdeploy serve api_server internlm/internlm2-chat-7b --server-port 23333
 ```
 
+Then you can send a chat request to the server:
+
+```bash
+curl http://localhost:23333/v1/chat/completions \
+  -H "Content-Type: application/json" \
+  -d '{
+    "model": "internlm2-chat-7b",
+    "messages": [
+      {"role": "system", "content": "You are a helpful assistant."},
+      {"role": "user", "content": "Introduce deep learning to me."}
+    ]
+  }'
+```
+
 For more information, see the [LMDeploy documentation](https://lmdeploy.readthedocs.io/en/latest/)
 
 ### vLLM
@@ -286,6 +304,20 @@ pip install vllm
 python -m vllm.entrypoints.openai.api_server --model internlm/internlm2-chat-7b --trust-remote-code
 ```
 
+Then you can send a chat request to the server:
+
+```bash
+curl http://localhost:8000/v1/chat/completions \
+  -H "Content-Type: application/json" \
+  -d '{
+    "model": "internlm2-chat-7b",
+    "messages": [
+      {"role": "system", "content": "You are a helpful assistant."},
+      {"role": "user", "content": "Introduce deep learning to me."}
+    ]
+  }'
+```
+
 For more information, see the [vLLM documentation](https://docs.vllm.ai/en/latest/index.html)
 
 ## Open Source License
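The curl commands in the diff are plain HTTP POSTs against an OpenAI-compatible `/v1/chat/completions` endpoint, so any HTTP client works. As a minimal sketch using only the Python standard library (assuming the lmdeploy server from the README is listening on `localhost:23333`), the same request can be built as follows; only the final `urlopen` call needs a live server, so it is left commented out:

```python
import json
import urllib.request

# Same payload as the curl example in the README.
payload = {
    "model": "internlm2-chat-7b",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Introduce deep learning to me."},
    ],
}

# Build the POST request equivalent to:
#   curl http://localhost:23333/v1/chat/completions \
#     -H "Content-Type: application/json" -d '{...}'
req = urllib.request.Request(
    "http://localhost:23333/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Requires the api_server from the README to be running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

For the vLLM server, only the URL changes (port 8000 instead of 23333).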