Update README.md
README.md
CHANGED
@@ -36,6 +36,13 @@ pipeline_tag: text-generation

**🔥 **

+| Model                       | HumanEval+ |
+|-----------------------------|------------|
+| WizardCoder-Python-34B-V1.0 | 64.6       |
+| GPT-3.5 (December 2023)     | 64.6       |
+| **OpenChat 3.5 1210**       | **63.4**   |
+| OpenHermes 2.5              | 41.5       |
+
<div align="center" style="justify-content: center; align-items: center;">
  <img src="https://github.com/alpayariyak/openchat/blob/master/assets/Untitled%20design-17.png?raw=true" style="width: 100%; border-radius: 0.5em">
</div>
@@ -55,6 +62,8 @@ If you want to deploy the server as an online service, you can use `--api-keys s
<details>
<summary>Example request (click to expand)</summary>

+Default Mode (Chat & Coding)
+
```bash
curl http://localhost:18888/v1/chat/completions \
  -H "Content-Type: application/json" \
@@ -64,49 +73,39 @@ curl http://localhost:18888/v1/chat/completions \
  }'
```

-
+Mathematical Reasoning Mode

```bash
curl http://localhost:18888/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openchat_3.5",
-    "condition": "
-    "messages": [{"role": "user", "content": "
+    "condition": "Math Correct",
+    "messages": [{"role": "user", "content": "10.3 − 7988.8133 = "}]
  }'
```

</details>

-| Model
-
-| OpenChat 3.5 | 7B | 8192 | [Huggingface](https://huggingface.co/openchat/openchat_3.
+| Model             | Size | Context | Weights                                                           | Serving                                                                                                          |
+|-------------------|------|---------|-------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------|
+| OpenChat 3.5 1210 | 7B   | 8192    | [Huggingface](https://huggingface.co/openchat/openchat_3.5_1210)  | `python -m ochat.serving.openai_api_server --model openchat/openchat_3.5_1210 --engine-use-ray --worker-use-ray` |

-
+### Conversation templates

-
-<summary>Conversation templates (click to expand)</summary>
+Default Mode (GPT4 Correct)

-```
-
-
+```
+GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:
+```

-
-tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
-assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
+Mathematical Reasoning Mode

-
-
-assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
-
-# Coding Mode
-tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids
-assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747]
+```
+Math Correct User: 10.3 − 7988.8133=<|end_of_turn|>Math Correct Assistant:
```

-
-
-The GPT4 template is also available as the integrated `tokenizer.chat_template`,
+The default (GPT4 Correct) template is also available as the integrated `tokenizer.chat_template`,
which can be used instead of manually specifying the template:

```python
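The Python snippet that this line introduces falls outside the hunk shown above, so it is not reproduced here. As a rough sketch of how the integrated template is typically applied, assuming the `openchat/openchat_3.5_1210` checkpoint and a `transformers` release with chat-template support (the README's own example may differ):

```python
from transformers import AutoTokenizer

# Sketch only; not the README's original (truncated) snippet.
# Assumes the openchat/openchat_3.5_1210 checkpoint and a transformers
# version that supports tokenizer.chat_template.
tokenizer = AutoTokenizer.from_pretrained("openchat/openchat_3.5_1210")

messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi"},
    {"role": "user", "content": "How are you today?"},
]

# apply_chat_template renders the integrated GPT4 Correct template and, with
# add_generation_prompt=True, appends the trailing assistant prefix for generation.
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(prompt)
```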
@@ -121,8 +120,6 @@ assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 42

## Comparison with [X.AI Grok models](https://x.ai/)

-
-
|                   | License    | # Param | Average  | MMLU | HumanEval | MATH     | GSM8k    |
|-------------------|------------|---------|----------|------|-----------|----------|----------|
| OpenChat 3.5 1210 | Apache-2.0 | **7B**  | **60.1** | 65.3 | **68.9**  | **28.9** | **77.3** |
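
For reference, the example requests shown above can also be sent from Python. This is a minimal sketch using `requests`, assuming the OpenAI-compatible server from the serving table is running locally on port 18888; the extra `condition` field selects the Mathematical Reasoning Mode prompt:

```python
import requests

# Minimal sketch: same Mathematical Reasoning Mode request as the curl example above.
# Assumes ochat.serving.openai_api_server is listening on localhost:18888.
payload = {
    "model": "openchat_3.5",
    "condition": "Math Correct",
    "messages": [{"role": "user", "content": "10.3 − 7988.8133 = "}],
}

response = requests.post(
    "http://localhost:18888/v1/chat/completions",
    headers={"Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
response.raise_for_status()

# The server exposes the OpenAI chat-completions schema, so the reply text
# is in choices[0].message.content.
print(response.json()["choices"][0]["message"]["content"])
```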