dranger003 committed
Commit f021a94
Parent: 3ec3baf

Update README.md

Files changed (1): README.md (+1 −1)
README.md:

```diff
@@ -12,6 +12,6 @@ The importance matrix was trained for 100K tokens (200 batches of 512 tokens) us
 * Using Q8 K-cache (instead of F16) you can fit up to 43-44K context but inference speed goes down a little bit.
 * Also for some reason I need to use 1.0 penalty to avoid the response being cut-off.

-| Layers | Context | Template |
+| Layers | Context | [Template](https://github.com/LargeWorldModel/LWM/blob/9aaaa1e864bfcf31b66028e782395a22f4817535/scripts/eval_needle.py#L48) |
 | --- | --- | --- |
 | <pre>32</pre> | <pre>131072</pre> | <pre>You are a helpful assistant.<br>USER:<br>{context}<br>{question}<br>Don't give information outside the document or repeat your findings. Keep your response short and direct.<br>ASSISTANT:<br>{response}</pre> |
```
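The Q8 K-cache and penalty settings described in the README notes can be applied on the llama.cpp command line roughly as follows. This is a sketch, not a command from this repo: the binary name (`llama-cli`), the model filename, and the chosen context size are assumptions; the `-ctk`/`--cache-type-k` and `--repeat-penalty` flags are standard llama.cpp options.

```shell
# Sketch based on the README notes, assuming llama.cpp's llama-cli.
# The model filename is a placeholder, not a file published in this repo.
#   -c 43008             : ~43K-token context, the ceiling the README reports with Q8 K-cache
#   -ctk q8_0            : store the K-cache quantized to Q8_0 instead of F16
#   --repeat-penalty 1.0 : the penalty the README says avoids cut-off responses
./llama-cli -m lwm-text-chat.Q5_K_M.gguf \
  -c 43008 \
  -ctk q8_0 \
  --repeat-penalty 1.0 \
  -p 'You are a helpful assistant.
USER:
...
ASSISTANT:'
```

The prompt passed with `-p` should follow the template shown in the table above, with the document and question substituted for `{context}` and `{question}`.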