athirdpath committed
Commit 575be33
Parent(s): 8eed6d5

Update README.md
README.md CHANGED

@@ -7,12 +7,12 @@ model-index:
 - name: lora
   results: []
 ---
+This was mostly a test to see what the loss/eval looked like when training on top of Harmonia, and in that sense it was a sterling success, without the "jitter" I experienced training on top of Nethena 20b.
+Quick testing shows a bit of derpiness, but a nice conversational flow. Overall, this will be helpful in developing additional 20b merges.
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
 
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
-#
+# NOTES
 
 This model is a fine-tuned version of [athirdpath/Harmonia-20B](https://huggingface.co/athirdpath/Harmonia-20B) on the HF No Robots dataset.
 It achieves the following results on the evaluation set:
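For readers of the updated card, a minimal usage sketch follows. It assumes this repository hosts a standard PEFT LoRA adapter trained on top of the Harmonia-20B base model; the adapter repository id and the prompt shown are hypothetical placeholders, not confirmed by the card.

```python
# Minimal sketch, not an official usage example. Assumptions:
#  - this repo ships a PEFT (LoRA) adapter trained on top of athirdpath/Harmonia-20B
#  - "athirdpath/<this-lora-repo>" is a hypothetical placeholder for the actual adapter repo id
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "athirdpath/Harmonia-20B"         # base model named in the card
adapter_id = "athirdpath/<this-lora-repo>"  # placeholder: replace with the real adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the LoRA adapter on top of the frozen base weights
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Summarize what a LoRA adapter is in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```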