
Description:

This model was created to test some of my data projects. It is a multipurpose chat / chat-instruct hybrid model and can be seen as a prototype of Dan's PersonalityEngine Mk. 2. It has been trained on a wide array of one-shot instructions, multi-round instructions, role-playing scenarios, and text adventure games. It is based on Holodeck-1.

Prompt format:

Pygmalion / Metharme

The prompt should end with "<|model|>", and generation should begin on the same line directly after it with no space. The following are all valid formats and can be extended to as many rounds as desired; a rough prompt-building sketch follows the examples.

<|system|>system message here<|user|>user message here<|model|>
<|system|>system message here<|user|>user message here<|model|>model message<|user|>user message here<|model|>
<|system|>system message here<|model|>
<|system|>system message here<|model|>model message<|user|>user message here<|model|>
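For illustration, here is a minimal Python sketch of one way to assemble a prompt in this format. The helper name and structure are my own assumptions and are not part of the model's tooling.

```python
# Minimal sketch (not from the original card) for building Metharme-style prompts.
def build_prompt(system, turns):
    """Build a Pygmalion/Metharme prompt.

    `turns` is a list of (role, text) pairs, where role is "user" or "model".
    The returned string ends with "<|model|>" so generation starts directly
    after it with no space.
    """
    prompt = f"<|system|>{system}"
    for role, text in turns:
        prompt += f"<|{role}|>{text}"
    return prompt + "<|model|>"

# Example: a single-round chat prompt.
prompt = build_prompt(
    "The following is a transcript between a helpful assistant and a user.",
    [("user", "Why is the sky blue?")],
)
print(prompt)
```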

Some example prompts and their resulting outputs:

Prompt:
<|system|>The following is a transcript between a helpful assistant and a user.<|user|>Why is the sky blue?<|model|>
Output:
The color of the sky appears blue because it's an optical effect caused by the scattering of sunlight in Earth's atmosphere. When light from the Sun reaches our planet, some wavelengths are scattered more than others due to the presence of gases like nitrogen and oxygen. Blue light has a shorter wavelength, which means it scatters more easily through these molecules compared to other colors. This process makes the sky appear blue when we look up at it.
Prompt:
<|system|>You are a Virtual Story Generator. You take the user's input and create an excellent and captivating story that goes in that direction. Use an abundance of sensory descriptions and eloquent prose.<|user|>Alpha Centauri has fallen, to the bears. This is a point of view tale about a soldier on the ground.<|model|>
Output:
The air was cold and damp, heavy with the stench of blood and cordite. The sky above, a vivid blue-green, seemed like it had been ripped open by cannon fire. Chunks of metal and concrete littered the battlefield, some still smoking from their impact. A few soldiers lay motionless, twisted and torn. Others writhed in agony or crawled away, trailing gore behind them.

The soldier crouched down next to his wounded comrade, cursing softly under his breath as he examined the man's injuries. His face was pale, eyes glazed over, lips moving but no words coming out. He reached for the radio at his waist, fingers shaking slightly as he tried to find the right frequency.

"This is Bravo Charlie 3, do you copy?" There was nothing but static in response. "Bravo Charlie 3, this is Charlie Victor 10, do you read me? Over." More silence.

He sighed heavily and stood up, scanning the horizon for any sign of rescue. But there were only more fallen men and women, scattered across the field like discarded toys. He knew they couldn't wait here forever; if Alpha Centauri fell, then the rest would soon follow. It was time to get the hell off this rock and hope that someone picked them up before the Bears took another bite out of Earth's armor.

Gathering what little supplies he could find, he slung his rifle over one shoulder and set off into the unknown, hoping that somewhere out there was a way back home.
Prompt:
<|system|>You are a professional editor with decades of experience, help the user with any task they have for you.<|user|>Can you rewrite this to flow better? "I knew I probably shouldnt have done that but oh well"<|model|>
Output:
I realized that what I had just done might not have been the best choice, but I went ahead anyway.

Some quick and dirty training details (a rough code equivalent is sketched after the list):

  • Built with Axolotl
  • Sequence length: 4096
  • Training time: 16 hours
  • Hardware: 1x RTX 3090
  • Training type: QLoRA
  • PEFT R/A: 32/32
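These details are not a full recipe, but the sketch below shows a roughly equivalent QLoRA setup using transformers and peft. The base-model repo name, quantization settings, and dropout are assumptions on my part, not values taken from the actual Axolotl config.

```python
# Hedged sketch of a QLoRA setup roughly matching the details above.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # QLoRA: load the base model in 4-bit
    bnb_4bit_quant_type="nf4",              # assumption
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption
)

model = AutoModelForCausalLM.from_pretrained(
    "KoboldAI/LLaMA2-13B-Holodeck-1",       # base model repo name is an assumption
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=32,               # PEFT R from the card
    lora_alpha=32,      # PEFT A from the card
    lora_dropout=0.05,  # assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
# Training would then run with a sequence length of 4096, as noted above.
```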

Credits:

Holodeck-1:

Thank you to Mr. Seeker and the Kobold AI team for the wonderful model Holodeck-1.

Holodeck-1 Huggingface page

Skein Text Adventure Data:

Thank you to the Kobold AI community for curating the Skein dataset, which is pivotal to this model's capabilities.

