apepkuss79 committed on
Commit 24b2243
1 Parent(s): 0787dd8

Update README.md

Files changed (1)
  1. README.md +13 -19
README.md CHANGED
@@ -25,36 +25,30 @@ language:
 
  ## Run with LlamaEdge
 
- - LlamaEdge version: coming soon
-
- <!-- - LlamaEdge version: [v0.12.4](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.12.4) and above
+ - LlamaEdge version: [v0.12.5](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.12.5) and above
 
  - Prompt template
 
- - Prompt type: `llama-3-chat`
+ - Prompt type: `chatml`
 
  - Prompt string
 
  ```text
- <|begin_of_text|><|start_header_id|>system<|end_header_id|>
-
- {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
-
- {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
-
- {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
-
- {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
- ``` -->
+ <|im_start|>system
+ {system_message}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+ ```
 
  - Context size: `2048`
 
- <!-- - Run as LlamaEdge service
+ - Run as LlamaEdge service
 
  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:SmolLM-135M-Instruct-Q5_K_M.gguf \
    llama-api-server.wasm \
- --prompt-template llama-3-chat \
+ --prompt-template chatml \
    --ctx-size 2048 \
    --model-name SmolLM-135M-Instruct
  ```
@@ -64,9 +58,9 @@ language:
  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:SmolLM-135M-Instruct-Q5_K_M.gguf \
    llama-chat.wasm \
- --prompt-template llama-3-chat \
- --ctx-size 2048 \
- ``` -->
+ --prompt-template chatml \
+ --ctx-size 2048
+ ```
 
  ## Quantized GGUF Models
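For reference, the `chatml` prompt string this change documents can be assembled programmatically before being sent to the model. A minimal sketch in Python; the `build_chatml_prompt` helper and its argument names are illustrative and not part of LlamaEdge itself:

```python
def build_chatml_prompt(system_message: str, prompt: str) -> str:
    """Assemble a ChatML-style prompt matching the template in the README:
    a system turn, a user turn, then an open assistant turn for the model
    to complete."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )


if __name__ == "__main__":
    text = build_chatml_prompt("You are a helpful assistant.", "Hello!")
    print(text)
```

When running through `llama-api-server.wasm` or `llama-chat.wasm` with `--prompt-template chatml`, this assembly is handled for you; building the string by hand is only needed when driving the model through a raw completion interface.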