---
base_model:
- TeeZee/Kyllene-34B-v1.1
- Doctor-Shotgun/Nous-Capybara-limarpv3-34B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- en
---
# Kyllima 34B v1
![image/png](Kyllima.png)
## Model Details
This is a simple 50/50 merge of two of my favorite Yi 34B-based models for roleplay and creative writing, created using [mergekit](https://github.com/cg123/mergekit) on [Arcee.ai](https://app.arcee.ai/).
There's a good amount of Nous Capybara 34B in here, some Bagel DPO, LimaRP v3, and other goodness. Less sloppy thanks to Kyllene. 200k context. Uncensored.
Use the prompt format from the model's metadata (chat template), Alpaca-LimaRP, or Vicuna.
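As a rough illustration of the two named formats (the exact templates vary by frontend; these strings are my assumptions, not part of this card):

```python
# Hedged sketches of the Alpaca-LimaRP and Vicuna prompt formats.
# Placeholder field names ({system_prompt}, {user_message}) are illustrative only.
ALPACA_LIMARP = (
    "### Instruction:\n"
    "{system_prompt}\n\n"
    "### Input:\n"
    "{user_message}\n\n"
    "### Response:\n"
)

VICUNA = "{system_prompt}\n\nUSER: {user_message}\nASSISTANT: "
```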
Recommended sampling settings: temperature 0.8-1, repetition penalty 1.1-1.2, top-p 1, min-p 0.05, top-k 40.
Add `</s>` to your stop strings, and `\n{{user}}` or `[INST]` if necessary.
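For llama.cpp-based backends (e.g. the GGUF quantizations linked below), the recommended settings map onto sampling kwargs roughly as follows — a sketch assuming `llama-cpp-python`; the model path is a placeholder, and `{{user}}` would need to be substituted with the actual user name before use:

```python
# Sampling settings from this card, expressed as llama-cpp-python kwargs.
SAMPLING = {
    "temperature": 0.9,       # card recommends 0.8-1
    "repeat_penalty": 1.1,    # card recommends 1.1-1.2
    "top_p": 1.0,
    "min_p": 0.05,
    "top_k": 40,
    "stop": ["</s>", "\n{{user}}"],  # stop strings suggested in the card
}

# Hypothetical usage (requires `pip install llama-cpp-python` and a local GGUF file):
# from llama_cpp import Llama
# llm = Llama(model_path="Kyllima-34B-v1.Q4_K_M.gguf", n_ctx=8192)
# out = llm.create_completion("USER: Hello!\nASSISTANT: ", **SAMPLING)
```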
I use a slightly modified version of RisuAI's default system prompt with good results. I suggest adding a couple of lines to the system prompt telling the model to write in complete sentences and NOT to write prompts to itself.
It's sensitive to small changes in settings and to the style/format of your own writing.
The original upload had a broken tokenizer. If you downloaded before 10/2/24, please re-download.
Static GGUF available [here](https://huggingface.co/sirmyrrh/Kyllima-34B-v1-GGUF) or [here](https://huggingface.co/mradermacher/Kyllima-34B-v1-GGUF).
Imatrix GGUF available [here](https://huggingface.co/mradermacher/Kyllima-34B-v1-i1-GGUF). With thanks to [mradermacher](https://huggingface.co/mradermacher/).
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [TeeZee/Kyllene-34B-v1.1](https://huggingface.co/TeeZee/Kyllene-34B-v1.1) as a base.
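In outline, TIES builds a "task vector" (delta from the base) for each model, trims each to its densest entries, elects a per-parameter majority sign, and averages only the deltas that agree with it. A rough numpy sketch of that procedure — simplified from the paper, not mergekit's actual implementation:

```python
import numpy as np

def ties_merge(base, tuned, density=0.5, weights=None):
    """Simplified TIES (arXiv:2306.01708): trim, elect sign, disjoint merge.

    base: 1-D array of base-model weights.
    tuned: list of fine-tuned weight arrays of the same shape.
    """
    if weights is None:
        weights = [1.0 / len(tuned)] * len(tuned)
    deltas = []
    for t, w in zip(tuned, weights):
        d = (t - base) * w                     # task vector, scaled by merge weight
        k = int(np.ceil(density * d.size))     # keep the top-`density` fraction
        thresh = np.sort(np.abs(d))[-k]        # magnitude cutoff for trimming
        deltas.append(np.where(np.abs(d) >= thresh, d, 0.0))
    deltas = np.stack(deltas)
    elected = np.sign(deltas.sum(axis=0))      # majority sign per parameter
    agree = np.sign(deltas) == elected         # deltas matching the elected sign
    kept = np.where(agree, deltas, 0.0)
    counts = agree.sum(axis=0)
    merged_delta = kept.sum(axis=0) / np.maximum(counts, 1)  # disjoint mean
    return base + merged_delta
```

With `density: 0.5` and `weight: 0.5` for both models (as in the config below), each task vector keeps its largest half and contributes equally where signs agree.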
### Models Merged
The following models were included in the merge:
* [TeeZee/Kyllene-34B-v1.1](https://huggingface.co/TeeZee/Kyllene-34B-v1.1)
* [Doctor-Shotgun/Nous-Capybara-limarpv3-34B](https://huggingface.co/Doctor-Shotgun/Nous-Capybara-limarpv3-34B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: TeeZee/Kyllene-34B-v1.1
chat_template: auto
dtype: float16
merge_method: ties
models:
  - model: TeeZee/Kyllene-34B-v1.1
    parameters:
      density: 0.5
      weight: 0.5
  - model: Doctor-Shotgun/Nous-Capybara-limarpv3-34B
    parameters:
      density: 0.5
      weight: 0.5
parameters:
  embed_slerp: true
  int8_mask: true
  normalize: false
tokenizer_source: base
```