apepkuss79 committed on
Commit
f58ded6
1 Parent(s): 55c1e54

Update README.md

Files changed (1)
  1. README.md +12 -16
README.md CHANGED

````diff
@@ -36,34 +36,30 @@ tags:
 
 - LlamaEdge version: coming soon
 
-<!-- - LlamaEdge version: [v0.12.4](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.12.4) and above
+<!-- - LlamaEdge version: [v0.12.4](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.12.4) and above -->
 
 - Prompt template
 
-  - Prompt type: `llama-3-chat`
+  - Prompt type: `exaone-chat`
 
   - Prompt string
 
     ```text
-    <|begin_of_text|><|start_header_id|>system<|end_header_id|>
-
-    {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
-
-    {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
-
-    {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
-
-    {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
-    ``` -->
+    [|system|]system_prompt_text[|endofturn|]
+    [|user|]user_1st_turn_text
+    [|assistant|]assistant_1st_turn_text[|endofturn|]
+    [|user|]user_2nd_turn_text
+    [|assistant|]assistant_2nd_turn_text[|endofturn|]
+    ```
 
 - Context size: `4096`
 
-<!-- - Run as LlamaEdge service
+- Run as LlamaEdge service
 
   ```bash
   wasmedge --dir .:. --nn-preload default:GGML:AUTO:EXAONE-3.0-7.8B-Instruct-Q5_K_M.gguf \
     llama-api-server.wasm \
-    --prompt-template llama-3-chat \
+    --prompt-template exaone-chat \
     --ctx-size 4096 \
     --model-name EXAONE-3.0-7.8B-Instruct
   ```
@@ -73,9 +69,9 @@ tags:
   ```bash
   wasmedge --dir .:. --nn-preload default:GGML:AUTO:EXAONE-3.0-7.8B-Instruct-Q5_K_M.gguf \
     llama-chat.wasm \
-    --prompt-template llama-3-chat \
+    --prompt-template exaone-chat \
     --ctx-size 4096
-  ``` -->
+  ```
 
 ## Quantized GGUF Models
 
````
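The `exaone-chat` prompt string introduced in this commit can be sketched as a small formatter. This is a hypothetical helper (`format_exaone_prompt` is not part of LlamaEdge); it only illustrates how the `[|system|]`, `[|user|]`, `[|assistant|]`, and `[|endofturn|]` markers from the prompt string above compose over multiple turns:

```python
def format_exaone_prompt(system_prompt, turns):
    """Render a conversation in the EXAONE chat format shown above.

    `turns` is a list of (user_text, assistant_text) pairs; pass None as
    the assistant text of the last pair to leave an open `[|assistant|]`
    turn for the model to complete.
    """
    parts = [f"[|system|]{system_prompt}[|endofturn|]"]
    for user_text, assistant_text in turns:
        parts.append(f"[|user|]{user_text}")
        if assistant_text is None:
            # Open assistant turn: the model generates from here.
            parts.append("[|assistant|]")
        else:
            parts.append(f"[|assistant|]{assistant_text}[|endofturn|]")
    return "\n".join(parts)

prompt = format_exaone_prompt(
    "You are a helpful assistant.",
    [("Hello!", "Hi, how can I help?"), ("What is the capital of France?", None)],
)
print(prompt)
```

Note that, unlike the old `llama-3-chat` template, only system and assistant turns are closed with an end-of-turn token; user turns end at the line break.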
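With the `llama-api-server.wasm` service from the diff running, it serves an OpenAI-compatible chat API. A minimal client sketch, assuming LlamaEdge's default `/v1/chat/completions` route and port 8080 (adjust `base_url` for your deployment); the request is built separately so it can be inspected before sending:

```python
import json
import urllib.request

def build_chat_request(model, messages, base_url="http://localhost:8080"):
    """Build an OpenAI-style chat-completion request for the API server.

    Route and port are assumptions based on LlamaEdge's OpenAI-compatible
    API; `model` should match the --model-name passed to the server.
    """
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(
    "EXAONE-3.0-7.8B-Instruct",
    [{"role": "user", "content": "Hello!"}],
)
# To actually send it (requires the wasmedge service to be running):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The server applies the `exaone-chat` template itself, so the client sends plain role/content messages rather than pre-formatted prompt strings.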