ericpolewski committed · Commit c1bcf95 · Parent: a63a16d

Update README.md

README.md CHANGED
@@ -5,6 +5,7 @@ datasets:
 - ise-uiuc/Magicoder-Evol-Instruct-110K
 - tatsu-lab/alpaca
 - garage-bAInd/Open-Platypus
+template: Instruct
 ---
 
 This is Mistral-v0.1 fine-tuned on the AIRIC dataset sprinkled into the other datasets listed. Trained for 3 epochs on the q, v, k, and o projection layers at rank 128 until the loss hit about 1.37. I noticed some "it's important to remember"s in there that I may try to scrub out, but otherwise the model wasn't intentionally censored.
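The rank-128 training on the q/v/k/o projections reads like a LoRA fine-tune. Here is a minimal sketch, assuming the Hugging Face peft and transformers libraries and the mistralai/Mistral-7B-v0.1 base checkpoint; lora_alpha, dropout, and everything else not stated in the card are assumptions, not the author's actual settings:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Base model named in the card ("Mistral-v0.1"); assumed to be the 7B checkpoint.
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

config = LoraConfig(
    r=128,                                                    # rank 128, per the card
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj"],  # the q-v-k-o layers
    lora_alpha=256,     # assumption: 2*r is a common choice, not stated in the card
    lora_dropout=0.05,  # assumption
    task_type="CAUSAL_LM",
)

# Wrap the base model so only the adapter weights are trainable.
model = get_peft_model(model, config)
model.print_trainable_parameters()
```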
@@ -14,11 +15,3 @@ This was the original post: https://www.reddit.com/r/LocalLLaMA/comments/154to1w
 This is how I did the data extraction: https://www.linkedin.com/pulse/how-i-trained-ai-my-text-messages-make-robot-talks-like-eric-polewski-9nu1c/
 
 This is an instruct model trained in the Alpaca format:
-
----
-### Instruction:
-(the question)
-
-### Response:
-(the response)
----
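For inference, the Alpaca template shown in the removed block still applies (the commit keeps the "trained in the Alpaca format" line and moves the template into the `template: Instruct` frontmatter). A minimal sketch of assembling such a prompt; the helper name and sample question are hypothetical:

```python
def build_prompt(question: str) -> str:
    # Alpaca-style instruct format, as shown in the block removed above.
    return f"### Instruction:\n{question}\n\n### Response:\n"

prompt = build_prompt("What did you do this weekend?")
# Pass `prompt` to the model and stop generation at the next "### Instruction:".
```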