Tags: GGUF, Not-For-All-Audiences, nsfw


Merge of Amethyst 13B and Emerald 13B.

In addition, LimaRP v3 was used; it is recommended to read its documentation.

Description

This repo contains quantized files of Emerhyst-20B.

Models and LoRAs used

  • PygmalionAI/pygmalion-2-13b
  • Xwin-LM/Xwin-LM-13B-V0.1
  • The-Face-Of-Goonery/Huginn-13b-FP16
  • zattio770/120-Days-of-LORA-v2-13B
  • lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT

Prompt template: Alpaca

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
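
As a quick, hedged example, here is a minimal sketch of prompting a local GGUF quant of this model with llama-cpp-python using the template above. The file name and the example instruction are placeholders, not values taken from this repo:

from llama_cpp import Llama

# Load a local GGUF quant (the file name is a placeholder -- use whichever quant you downloaded).
llm = Llama(model_path="emerhyst-20b.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "Describe a quiet harbor town at dusk.\n\n"
    "### Response:\n"
)

# Stop when the model tries to start a new instruction block.
output = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(output["choices"][0]["text"])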

LimaRP v3 usage and suggested settings


You can follow these instruction format settings in SillyTavern. Replace "tiny" with your desired response length:

(Screenshot: suggested SillyTavern instruct settings.)
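
For reference, a rough sketch of where the length modifier sits in a LimaRP-style prompt, assuming the format described in the LimaRP v3 documentation (see the linked lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT page for the exact template and the list of supported lengths):

### Input:
User: {your message}

### Response: (length = tiny)
Character: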

Special thanks to Sushi.

If you want to support me, you can do so here.

Format: GGUF
Model size: 20B params
Architecture: llama

Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
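
As a hedged sketch, one way to fetch a single quant file is with huggingface_hub; the repo id and file name below are assumptions, so substitute the actual entries from this repo's file list:

from huggingface_hub import hf_hub_download

# repo_id and filename are assumptions -- check the repo's file listing for exact names.
local_path = hf_hub_download(
    repo_id="Undi95/Emerhyst-20B-GGUF",
    filename="Emerhyst-20B.q5_K_M.gguf",
)
print(local_path)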
