
This model is recommended for RP, but you can use it as an assistant as well.

New model! Version 2 brought fewer GPT-isms, but it was more of the same, so I made this one. This is probably the best version yet. Please give it a try.


Image from Lewdiculous/EndlessRP-v3-7B-GGUF-Imatrix.

Prompt Format:

  • Extended Alpaca Format, as used by lemonilia/LimaRP-Mistral-7B-v0.1, for example. Use `### Response: (length = huge)` to increase response length. You can use Metharme or ChatML as well, but Alpaca is recommended; a sketch of the format is shown below.
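
A minimal sketch of building an Extended Alpaca prompt in Python, assuming the `### Instruction:` / `### Input:` / `### Response:` layout used by LimaRP; the persona, user message, and length modifier below are placeholders, not values from this card.

```python
# Hypothetical Extended Alpaca prompt, modeled on LimaRP's layout.
# The persona, user message, and "(length = huge)" modifier are illustrative only.
persona = "You are Aria, a wandering bard. Stay in character and write detailed, novel-style replies."
user_message = "The tavern falls silent as you enter. What do you do?"

prompt = (
    "### Instruction:\n"
    f"{persona}\n\n"
    "### Input:\n"
    f"User: {user_message}\n\n"
    "### Response: (length = huge)\n"
    "Aria:"
)
print(prompt)
```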

Configuration

Source:

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
  - model: Elizezen/Hameln-japanese-mistral-7B
    # This model brings very good creative output...
    parameters:
      density: 0.6
      weight: 0.25
  - model: fblgit/una-cybertron-7b-v3-OMA+.\toxic-dpo-v0.1-NoWarning-lora
    # Please refer to the model page for more information. Added a Toxic DPO fine-tune to remove some boring warnings.
    parameters:
      density: 0.6
      weight: 0.25
  - model: cgato/Thespis-CurtainCall-7b-v0.1.2+Doctor-Shotgun/mistral-v0.1-7b-pippa-metharme-lora
    # A good model compatible with ST. I added a PIPPA + Metharme LoRA to make it more 'balanced'.
    parameters:
      density: 0.6
      weight: 0.25
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```
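
To try the finished merge, here is a minimal sketch using transformers; the local path, prompt, and sampling settings below are assumptions, not values from this card.

```python
# Minimal sketch: load the merged model and generate from an Extended Alpaca prompt.
# "./EndlessRP-v3-7B" is a placeholder path; point it at the actual merged weights or repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./EndlessRP-v3-7B"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = (
    "### Instruction:\nYou are Aria, a wandering bard.\n\n"
    "### Input:\nUser: Hello!\n\n"
    "### Response: (length = huge)\nAria:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```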

As this model mostly focuses on RP and story writing, please don't expect it to be smart with riddles or logic tests.
