
Llama-3 Spellbound Instruct Tuning-Free

Updated Aspects

  • Trained for 1.5M additional tokens
  • Improved mix of subject matter in the training data
  • Additional training on a DPO dataset

Model Rationale

Llama 3 is a strong base model with broad world understanding and creativity. Additional instruct finetuning trades away some of that world understanding and creativity for instruction following, which Llama does not need in order to adhere to most forms of roleplay.

This model was trained on unstructured text only; no instruct-related fine-tuning was performed.

Made by tryspellbound.com.

(tryspellbound.com does not currently use this model; it uses Claude 3 Sonnet.)

Features of this fine-tune for Llama 3 (a usage sketch follows the list):

  • Roleplaying in multi-turn stories where the full history is presented in a single message
  • Dynamic switching of writing styles for different scenarios
  • Interpretation of the formatting marks 'quote' and 'action'
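
As a minimal illustration of the first point, the sketch below loads the model with transformers and feeds the entire story history as one unstructured block. The prompt layout, the inline 'quote'/'action' usage, and the generation settings are illustrative assumptions rather than a documented format.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hf-100/Llama-3-Spellbound-Instruct-8B-0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The whole multi-turn history packed into one block of unstructured text,
# with the 'quote'/'action' markers used inline (layout is an assumption).
history = (
    "The tavern falls quiet as the stranger steps inside.\n"
    'quote "Barkeep, a room for the night."\n'
    "action She slides a silver coin across the counter.\n"
)

inputs = tokenizer(history, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```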

Warning: The underlying model, Llama 3, was trained on data that included adult content. This fine-tune does not add additional guardrails and is not suitable for all environments.

Purpose of the Model

The main goal is to explore how presenting LLMs with history and instructions separately (sketched after the list below) affects their performance, demonstrating:

  • Improved coherence in long conversations
  • Enhanced quality of character interactions
  • Decreased instruction adherence, which could be improved with additional training
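
To make the separation concrete, here is a hypothetical side-by-side of the two prompt layouts; the wording and markers are illustrative assumptions, not a format documented by this card.

```python
# Conventional instruct layout: steering instructions interleaved with the turns.
interleaved_prompt = (
    "System: Write in a noir style.\n"
    "User: The detective enters the office.\n"
    "Assistant: Rain ticked against the glass as I walked in.\n"
    "User: Continue the scene.\n"
)

# Layout this fine-tune targets: the full story history as one unstructured block,
# with any steering instruction kept apart from that history.
separated_prompt = (
    "Story so far:\n"
    "The detective enters the office. Rain ticks against the glass.\n"
    "\n"
    "Style note: noir, first person.\n"
)
```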

Advanced prompting of the model

For advanced prompting, see this document.
