
EVA Qwen2.5 14B 0.1

An RP/storywriting specialist model: a full-parameter finetune of Qwen2.5-14B on a mixture of synthetic and natural data.
It uses the Celeste 70B 0.1 data mixture, greatly expanded to improve the versatility, creativity, and "flavor" of the resulting model.

Version 0.1 notes:
The dataset was deduplicated and cleaned relative to version 0.0, and the training sequence length was increased. The resulting model appears more stable, and the 0.0 issues with handling short inputs and min_p sampling seem to be gone.
This version seems to be more or less optimal for the current data and available compute.

Note: using a quantized KV cache with Qwen2.5 is not recommended, as it can lead to degraded output quality. On the other hand, Qwen's KV cache is already light enough that keeping it at f16 shouldn't be problematic.

Prompt format is ChatML.
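
For reference, a ChatML prompt is structured as follows (the role contents are placeholders):

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```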


Recommended sampler values:

  • Temperature: 1
  • Typical-P: 0.9
  • Min-P: 0.05
  • Top-A: 0.2
  • Repetition Penalty: 1.03
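
As an illustration, these values map onto a Hugging Face transformers generation call roughly as follows. This is a minimal sketch: the example messages are placeholders, and Top-A is not a built-in transformers sampler, so it is omitted here and left to frontends such as SillyTavern that support it.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EVA-UNIT-01/EVA-Qwen2.5-14B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a ChatML prompt via the tokenizer's chat template.
messages = [
    {"role": "system", "content": "You are a creative roleplay assistant."},  # placeholder
    {"role": "user", "content": "Describe a rainy night in a coastal town."},  # placeholder
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Recommended sampler values (Top-A omitted: not supported by transformers).
output = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.0,
    typical_p=0.9,
    min_p=0.05,
    repetition_penalty=1.03,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```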

Recommended SillyTavern presets (via CalamitousFelicitousness):


Training data:

  • Celeste 70B 0.1 data mixture minus Opus Instruct subset. See that model's card for details.
  • Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.
  • A subset (1k rows) of ChatGPT-4o-WritingPrompts by Gryphe
  • A subset (2k rows) of Sonnet3.5-Charcards-Roleplay by Gryphe
  • A cleaned subset (~3k rows) of shortstories_synthlabels by Auri
  • Synthstruct and SynthRP datasets by Epiculous

Training time and hardware:

  • 3 days on 4xA6000

The model was trained by Kearm and Auri.

Special thanks:

  • to Gryphe, Lemmy, Kalomaze, Nopm and Epiculous for the data
  • to Alpindale for helping with FFT config for Qwen2.5
  • and to Allura-org for support and feedback on EVA models.