SicariusSicariiStuff committed
Commit 6aa0083 · Parent(s): 4befe2b
Update README.md

README.md CHANGED
@@ -32,7 +32,7 @@ language:
 32 | "K."
 33 |
 34 |
 35 | - This model was trained on ~25M tokens, in **3 phases**, the first and longest phase was an FFT to teach the model new stuff, and to confuse the shit out of it, so it would be **a little bit less inclined to use GPTisms**.
 35 | + This model was trained on ~25M tokens, in **3 phases**, the first and longest phase was an FFT to teach the model new stuff, and to confuse the shit out of it too, so it would be **a little bit less inclined to use GPTisms**.
 36 |
 37 | It worked pretty well. In fact, the model was so damn confused, that the little imp didn't even make sense, but the knowledge was there.
 38 |