digitous xzuyn committed on
Commit f43f616
1 Parent(s): 49d0297

Make example memory a code block (#6)


- Update README.md (623655e9e6327095f12b1b5802faab01acf91e78)


Co-authored-by: xzuyn <[email protected]>

Files changed (1)
  1. README.md +6 -2
README.md CHANGED
@@ -22,11 +22,15 @@ In Text-Generation-WebUI or KoboldAI enable chat mode, name the user Player and
 context/memory field-
 
 
-\#\#\# Instruction:(carriage return)
+```
+### Instruction:
 Make Narrator function as a text based adventure game that responds with verbose, detailed, and creative descriptions of what happens next after Player's response.
 Make Player function as the player input for Narrator's text based adventure game, controlling a character named (insert character name here, their short bio, and
 whatever quest or other information to keep consistent in the interaction).
-\#\#\# Response:(carriage return)
+
+### Response:
+{an empty new line here}
+```
 
 Testing subjectively suggests ideal presets for both TGUI and KAI are "Storywriter" (temp raised to 1.1) or "Godlike" with context tokens
 at 2048 and max generation tokens at ~680 or greater. This model will determine when to stop writing and will rarely use half as many tokens.
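
For reference, a minimal Python sketch (not part of this commit) of how the context/memory block shown in the diff above could be assembled programmatically. The function name `build_memory` and the `player_bio` placeholder are hypothetical; `player_bio` stands in for the character name, short bio, and quest details the README asks you to insert.

```python
# Minimal sketch (not part of this commit): build the context/memory string
# in the "### Instruction:" / "### Response:" format from the diff above.
# `player_bio` is a hypothetical placeholder for the character name, short
# bio, and quest/consistency details the README says to insert.

def build_memory(player_bio: str) -> str:
    return "\n".join([
        "### Instruction:",
        "Make Narrator function as a text based adventure game that responds "
        "with verbose, detailed, and creative descriptions of what happens "
        "next after Player's response.",
        "Make Player function as the player input for Narrator's text based "
        "adventure game, controlling a character named " + player_bio + ".",
        "",
        "### Response:",
        "",  # trailing empty line, matching {an empty new line here}
    ])


if __name__ == "__main__":
    # Example: paste the printed block into the context/memory field.
    print(build_memory("Alice, a wandering cartographer mapping a haunted coast"))
```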