Upload ./README.md with huggingface_hub
---
library_name: transformers
license: apache-2.0
base_model:
- grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B
tags:
- generated_from_trainer
datasets:
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/stheno-filtered-v1.1
- PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- anthracite-org/nopm_claude_writing_fixed
- anthracite-org/kalo_opus_misc_240827
model-index:
- name: Epiculous/NovaSpark
  results: []
---
### exl2 quant (measurement.json in main branch)
---
### check revisions for quants
---


![image/png](https://cdn-uploads.huggingface.co/production/uploads/64adfd277b5ff762771e4571/pnFt8anKzuycrmIuB-tew.png)

Switching things up a bit, since the last slew of models were all 12B, we now have NovaSpark! NovaSpark is an 8B model trained on GrimJim's [abliterated](https://huggingface.co/grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B) version of Arcee's [SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite).
The hope is that abliteration removes some of the inherent refusals and censorship of the original model. However, I noticed that finetuning on GrimJim's model undid some of the abliteration, so it will more than likely have to be reapplied to the resulting model to reinforce it.

# Quants!
<strong>full</strong> / [exl2]() / [gguf]()

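Not from the original card: a minimal sketch of pulling a quantized revision with `huggingface_hub`, assuming the quants are published as revisions of this repo (per the note above). The `revision` value below is a placeholder; check the repo's branch list for the real names.

```python
from huggingface_hub import snapshot_download

# Assumption: quantized weights live on separate revisions/branches of this repo.
# "8bpw" is a placeholder revision name; list the repo's branches to find the real one.
local_dir = snapshot_download(
    repo_id="Epiculous/NovaSpark",  # swap in the quant repo you are downloading from
    revision="8bpw",
)
print("Downloaded to", local_dir)
```
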
## Prompting
This model is trained on the Llama instruct template; the prompting structure goes a little something like this:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```

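Not part of the original card: if you load the model with `transformers`, the template above is normally what the tokenizer's chat template produces. A minimal sketch, assuming the full-weight model is hosted at `Epiculous/NovaSpark` (the name from the model-index above):

```python
from transformers import AutoTokenizer

# Assumption: Epiculous/NovaSpark ships the Llama-3.1 chat template shown above.
tokenizer = AutoTokenizer.from_pretrained("Epiculous/NovaSpark")

messages = [
    {"role": "system", "content": "You are a helpful roleplay assistant."},
    {"role": "user", "content": "Describe the tavern the party just walked into."},
]

# Renders the <|start_header_id|>...<|eot_id|> structure and leaves an open
# assistant header so generation continues from there.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```
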
### Context and Instruct
This model is trained on Llama instruct; please use that Context and Instruct template.

### Current Top Sampler Settings
[Smooth Creativity](https://files.catbox.moe/0ihfir.json): Credit to Juelsman for researching this one!<br/>
[Variant Chimera](https://files.catbox.moe/h7vd45.json): Credit to Numbra!<br/>
[Spicy_Temp](https://files.catbox.moe/9npj0z.json)<br/>
[Violet_Twilight-Nitral-Special](https://files.catbox.moe/ot54u3.json)<br/>
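
Not part of the original card: if you want to look over one of these sampler presets before importing it, a quick sketch using only the standard library is below. It only fetches and pretty-prints the JSON; the exact fields (temperature, top_p, and so on, if present) are whatever the preset file actually contains.

```python
import json
import urllib.request

# One of the preset links above; swap in whichever setting you want to inspect.
url = "https://files.catbox.moe/0ihfir.json"

with urllib.request.urlopen(url) as resp:
    preset = json.load(resp)

# Pretty-print the sampler preset so its fields are readable before importing it.
print(json.dumps(preset, indent=2))
```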