dranger003 committed
Commit cbb911c
1 Parent(s): 3234107

Update README.md

Files changed (1): README.md (+1 -1)
@@ -12,7 +12,7 @@ The importance matrix was trained for 100K tokens (200 batches of 512 tokens) us
 * Using Q8 K-cache (instead of F16) you can fit up to 43-44K context but inference speed goes down a little bit.
 * Also for some reason I need to use 1.0 penalty to avoid the response being cut-off.
 
-Prompt template (ends with a space after `ASSISTANT:`)
+Prompt template:
 ```
 You are a helpful assistant. USER: {context} {question} Don't give information outside the document or repeat your findings. Keep your response short and direct. ASSISTANT:
 ```
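The prompt template from the README can be filled in before it is passed to the model. A minimal sketch, assuming a plain string-substitution workflow; the `build_prompt` helper and the sample `context`/`question` values are illustrative, not part of the repository:

```python
# Template copied from the README; {context} and {question} are the
# placeholders the README expects the caller to substitute.
TEMPLATE = (
    "You are a helpful assistant. USER: {context} {question} "
    "Don't give information outside the document or repeat your findings. "
    "Keep your response short and direct. ASSISTANT:"
)


def build_prompt(context: str, question: str) -> str:
    # Substitute the caller's document and question into the template.
    return TEMPLATE.format(context=context, question=question)


# Illustrative values only.
prompt = build_prompt("The sky is blue.", "What color is the sky?")
print(prompt)
```

Note that the resulting string ends with `ASSISTANT:`, so the model's generation continues directly from that tag.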