Commit 13e2bc6 · Parent: 943f394 · committed by Husnain

📝 [Doc] Readme: Fix typo of gpt-3.5-turbo, and update docker tag
README.md CHANGED
````diff
@@ -13,12 +13,12 @@ app_port: 23333
 
 Huggingface LLM Inference API in OpenAI message format.
 
-
+Project link: https://github.com/Niansuh/HF-LLM-API
 
 ## Features
 
-- Available Models (2024/04/07):
-  - `mistral-7b`, `mixtral-8x7b`, `nous-mixtral-8x7b`, `gemma-1.1-7b`, `
+- Available Models (2024/04/07):
+  - `mistral-7b`, `mixtral-8x7b`, `nous-mixtral-8x7b`, `gemma-1.1-7b`, `gpt-3.5-turbo`
 - Adaptive prompt templates for different models
 - Support OpenAI API format
 - Enable api endpoint via official `openai-python` package
@@ -48,17 +48,17 @@ python -m apis.chat_api
 **Docker build:**
 
 ```bash
-sudo docker build -t hf-llm-api:1.
+sudo docker build -t hf-llm-api:1.1.3 . --build-arg http_proxy=$http_proxy --build-arg https_proxy=$https_proxy
 ```
 
 **Docker run:**
 
 ```bash
 # no proxy
-sudo docker run -p 23333:23333 hf-llm-api:1.
+sudo docker run -p 23333:23333 hf-llm-api:1.1.3
 
 # with proxy
-sudo docker run -p 23333:23333 --env http_proxy="http://<server>:<port>" hf-llm-api:1.
+sudo docker run -p 23333:23333 --env http_proxy="http://<server>:<port>" hf-llm-api:1.1.3
 ```
 
 ## API Usage
````
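The README's feature list advertises "Support OpenAI API format" and "Enable api endpoint via official `openai-python` package", served on the published port 23333 from the `docker run` commands above. Below is a minimal, hypothetical sketch of calling that endpoint with `openai-python`; the `/v1` route prefix, the placeholder API key, and the choice of model are assumptions, so check the repo's "API Usage" section for the exact values.

```python
from openai import OpenAI

# Sketch only: assumes the container from `docker run -p 23333:23333 ...`
# is running locally and serves OpenAI-style routes under /v1
# (see the repo's "API Usage" section for the actual base URL).
client = OpenAI(
    base_url="http://127.0.0.1:23333/v1",
    api_key="sk-placeholder",  # the service may not validate this key
)

# `nous-mixtral-8x7b` is one of the models listed in the README's feature list.
response = client.chat.completions.create(
    model="nous-mixtral-8x7b",
    messages=[{"role": "user", "content": "Hello, who are you?"}],
)
print(response.choices[0].message.content)
```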