---
license: other
---

Alpac(ino) stands for Alpaca Integrated Narrative Optimization.

This model is a triple model merge of (Alpaca+(CoT+Storytelling)), resulting in a comprehensive boost in Alpaca's reasoning and story-writing capabilities.
Alpaca was chosen as the backbone of this merge to ensure Alpaca's instruct format remains dominant.
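
The card does not spell out the exact merge procedure, so the sketch below is only an illustration, assuming a sequential LoRA merge with the peft library; the base-model path, the output directory, and the specific merge order are placeholders and assumptions rather than the author's recipe.

```python
# Hypothetical sketch of a sequential LoRA merge; paths and order are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_path = "path/to/llama-30b"  # original Llama 30B weights obtained via Meta AI's form

model = AutoModelForCausalLM.from_pretrained(
    base_path, torch_dtype=torch.float16, device_map="auto"
)

# Fold each credited LoRA into the weights in turn (Alpaca first so its
# instruct format remains dominant, then CoT, then Storytelling).
for lora_id in [
    "chansung/alpaca-lora-30b",
    "magicgh/llama30b-lora-cot",
    "GamerUntouch/Storytelling-LLaMa-LoRAs",  # repo hosts several LoRAs; the 30B one is assumed here
]:
    model = PeftModel.from_pretrained(model, lora_id)
    model = model.merge_and_unload()  # bake the LoRA deltas into the base weights

model.save_pretrained("alpacino-30b-merged")
AutoTokenizer.from_pretrained(base_path).save_pretrained("alpacino-30b-merged")
```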

Legalese:

This model is under a non-commercial license. This release contains modified weights of Llama30b and is shared in good faith that those who download and/or utilize this model have been granted explicit access to the original Llama weights by Meta AI after filling out the following form:
https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform

Use Case Example of an Infinite Text-Based Adventure Game With Alpacino30b:

In Text-Generation-WebUI or KoboldAI, enable chat mode, name the user Player and name the AI Narrator, then tailor the instructions below as desired and paste them into the context/memory field:

\#\#\# Instruction:(carriage return)
Make Narrator function as a text-based adventure game that responds with verbose, detailed, and creative descriptions of what happens next after Player's response.
Make Player function as the player input for Narrator's text-based adventure game, controlling a character named (insert character name here, their short bio, and whatever quest or other information to keep consistent in the interaction).
\#\#\# Response:(carriage return)
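
To make the template concrete, the sketch below assembles a prompt roughly the way chat mode does: the memory/context block first, then alternating named turns. The character details are invented for illustration, and the exact turn formatting differs slightly between Text-Generation-WebUI and KoboldAI.

```python
# Rough illustration of what the model sees once the game is underway;
# exact turn formatting varies between front-ends.

# The context/memory block, with the placeholder filled in (character details are invented).
memory = (
    "### Instruction:\n"
    "Make Narrator function as a text-based adventure game that responds with verbose, "
    "detailed, and creative descriptions of what happens next after Player's response.\n"
    "Make Player function as the player input for Narrator's text-based adventure game, "
    "controlling a character named Aria, a wandering cartographer searching for a lost city.\n"
    "### Response:\n"
)

# Chat history accumulates as alternating named turns appended after the memory block.
history = [
    ("Player", "I step off the ferry and look for the old map shop."),
]

prompt = memory + "".join(f"{name}: {text}\n" for name, text in history) + "Narrator:"
print(prompt)
```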

Subjective testing suggests the ideal presets for both TGUI and KAI are "Storywriter" (with temperature raised to 1.1) or "Godlike", with context tokens at 2048 and max generation tokens at ~680 or greater. The model decides on its own when to stop writing and will rarely use even half that many tokens.
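
For anyone driving the model from a script instead of one of the UIs, a minimal transformers sketch approximating those presets might look like the following; the model path is a placeholder and the sampling values simply mirror the suggestions above, with a couple of unstated knobs (top_p, repetition penalty) filled in as assumptions.

```python
# Minimal generation sketch approximating the suggested presets; paths and values are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/alpacino-30b"  # placeholder for the merged weights
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

prompt = "### Instruction:\nWrite the opening scene of a text-based adventure game.\n### Response:\n"

inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=2048).to(model.device)
output = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.1,         # "Storywriter" with temperature raised, per the note above
    top_p=0.95,              # assumed; the card does not state the preset's exact top_p
    max_new_tokens=680,      # the card suggests ~680 or greater
    repetition_penalty=1.1,  # assumed; not specified on the card
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```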

Obligatory:

This model may output offensive text and/or fabricated information; do not use this model for advice in any domain, especially medical or mental health advice. Meta AI and I are not liable for improper use or any damages, perceived or otherwise.

Sourced LoRA Credits:

ChanSung's excellently made Alpaca LoRA

https://huggingface.co/chansung/alpaca-lora-30b

https://huggingface.co/datasets/yahma/alpaca-cleaned

https://github.com/gururise/AlpacaDataCleaned

magicgh's valuable CoT LoRA

https://huggingface.co/magicgh/llama30b-lora-cot

https://huggingface.co/datasets/QingyiSi/Alpaca-CoT

https://github.com/PhoebusSi/alpaca-CoT

GamerUntouch's unique Storytelling LoRA

https://huggingface.co/GamerUntouch/Storytelling-LLaMa-LoRAs