---
library_name: transformers
license: apache-2.0
base_model:
  - grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B
tags:
  - generated_from_trainer
datasets:
  - Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
  - anthracite-org/stheno-filtered-v1.1
  - PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT
  - Gryphe/Sonnet3.5-Charcard-Roleplay
  - Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
  - anthracite-org/kalo-opus-instruct-22k-no-refusal
  - anthracite-org/nopm_claude_writing_fixed
  - anthracite-org/kalo_opus_misc_240827
model-index:
  - name: Epiculous/NovaSpark
    results: []
---

# NovaSpark-exl2

exl2 quant (measurement.json in main branch)


Check the repository's revisions (branches) for the individual quants.
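
A specific quant revision can be fetched with `huggingface_hub`, as in the minimal sketch below. The repo id and the branch name `8.0bpw` are placeholders, not confirmed by this card; substitute whichever revision appears in the list.

```python
# Minimal sketch: download one exl2 quant by revision (branch) name.
# Assumptions: "Epiculous/NovaSpark-exl2" and "8.0bpw" are placeholders;
# check the actual repo id and revision names on the model page.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="Epiculous/NovaSpark-exl2",  # assumed repo id
    revision="8.0bpw",                   # hypothetical quant branch
)
print(local_path)
```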



Switching things up a bit since the last slew of models were all 12B, we now have NovaSpark! NovaSpark is an 8B model trained on GrimJim's abliterated version of Arcee's SuperNova-Lite. The hope is that abliteration will remove some of the inherent refusals and censorship of the original model; however, I noticed that finetuning on GrimJim's model undid some of the abliteration, so abliteration will more than likely have to be reapplied to the resulting model to reinforce it.

## Quants!

full / exl2 / gguf

## Prompting

This model is trained on the Llama Instruct template; the prompting structure goes a little something like this:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
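
Rather than assembling this by hand, `transformers` can render the prompt from the tokenizer's built-in chat template. A minimal sketch, assuming the full-weight repo is `Epiculous/NovaSpark` (per the metadata above) and that it ships a Llama-3 chat template:

```python
# Minimal sketch: render the Llama-3 instruct prompt via the tokenizer's
# chat template instead of concatenating special tokens manually.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Epiculous/NovaSpark")  # assumed repo id
messages = [
    {"role": "system", "content": "You are a helpful roleplay assistant."},
    {"role": "user", "content": "Introduce yourself."},
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # appends the assistant header for generation
)
print(prompt)
```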

## Context and Instruct

This model is trained on llama-instruct, so please use the matching Context and Instruct templates.

## Current Top Sampler Settings

- Smooth Creativity: Credit to Juelsman for researching this one!
- Variant Chimera: Credit to Numbra!
- Spicy_Temp
- Violet_Twilight-Nitral-Special
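
The presets above live as files on the model page rather than being reproduced here. As a rough illustration of how sampler settings like these are applied at generation time, here is a minimal `transformers` sketch; the temperature, min_p, and repetition penalty values below are placeholders, not the contents of any listed preset:

```python
# Minimal sketch: applying sampler settings during generation.
# The sampling values are placeholders, NOT taken from the presets above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Epiculous/NovaSpark")  # assumed repo id
model = AutoModelForCausalLM.from_pretrained(
    "Epiculous/NovaSpark", torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.0,          # placeholder value
    min_p=0.05,               # placeholder value
    repetition_penalty=1.05,  # placeholder value
    max_new_tokens=256,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```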