---
license: apache-2.0
datasets:
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Nopm/Opus_WritingStruct
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Gryphe/ChatGPT-4o-Writing-Prompts
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- nothingiisreal/Reddit-Dirty-And-WritingPrompts
- allura-org/Celeste-1.x-data-mixture
- allura-org/shortstories_synthlabels
base_model:
- Qwen/Qwen2.5-14B
---
I have no idea what I’m doing… if this causes the apocalypse, someone please let me know.
# EVA-Qwen2.5-14B-v0.0 4.0bpw h8 EXL2
Includes the measurement.json file for further quantization.
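To requantize at a different bitrate, the bundled measurement.json can be passed to exllamav2's convert script so the measurement pass is skipped. A rough sketch of the invocation, with placeholder paths and an assumed 6.0 bpw target (check the exllamav2 docs for the exact flags of your version):

```sh
python convert.py -i /path/to/EVA-Qwen2.5-14B-v0.0 -o /path/to/workdir \
    -m measurement.json -b 6.0
```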
Salesforce/xLAM-8x22b-r is on hold for now, probably until early next year; I need to save some money…
Original Model: https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-14B-v0.0
## Original Model Card
### EVA Qwen2.5 14B
An RP/storywriting specialist model: a full-parameter finetune of Qwen2.5-14B on a mixture of synthetic and natural data.
It uses the Celeste 70B 0.1 data mixture, greatly expanding it to improve the versatility, creativity, and "flavor" of the resulting model.
Prompt format is ChatML.
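For reference, ChatML wraps every turn in `<|im_start|>` / `<|im_end|>` tokens; a prompt for this model follows the template below (role contents are placeholders):

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```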
Recommended sampler values:
- Temperature: 0.7
- Top-P: 0.8
- Repetition Penalty: 1.03
The model appears to prefer lower temperatures (0.8 or lower) and absolutely hates the Min-P sampler.
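As a concrete starting point, here is a minimal, unofficial sketch of loading this quant with the exllamav2 Python API and applying the values above; the local path, prompt, and token budget are placeholders:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point at a local download of this EXL2 quant (placeholder path).
config = ExLlamaV2Config()
config.model_dir = "./EVA-Qwen2.5-14B-v0.0-4.0bpw-h8-exl2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # lazy cache enables autosplit loading
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

# Sampler values recommended above; Min-P left at 0 (disabled),
# since the model reportedly dislikes it.
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.8
settings.token_repetition_penalty = 1.03
settings.min_p = 0.0

# ChatML-formatted prompt (see the template above).
prompt = (
    "<|im_start|>system\nYou are a creative storytelling assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite the opening of a short story set on a night train.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(generator.generate_simple(prompt, settings, 200))
```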
Recommended SillyTavern presets are available via CalamitousFelicitousness.
Training data:
- Celeste 70B 0.1 data mixture minus Opus Instruct subset. See that model's card for details.
- Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.
- A subset (1k rows) of ChatGPT-4o-Writing-Prompts by Gryphe
- A subset (2k rows) of Sonnet3.5-Charcard-Roleplay by Gryphe
- A cleaned subset (~3k rows) of shortstories_synthlabels by Auri
- Synthstruct and SynthRP datasets by Epiculous
Hardware used:
- 4xA6000 for 14 hours.
Model was trained by Kearm and Auri.
Special thanks:
- to Gryphe, Lemmy, Kalomaze, Nopm and Epiculous for the data
- to Alpindale for helping with FFT config for Qwen2.5
- and to InfermaticAI's community for their continued support for our endeavors