---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# Oumuamua-7b-RP
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with Oumuamua-7b-RP\Oumuamua-RP-breadcrumbs as the base model.
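For intuition, Model Stock interpolates between the base model's weights and the average of the fine-tuned models' weights, with a ratio derived from the angle between the fine-tuned models' weight offsets. The sketch below is a simplified per-tensor illustration of that idea; the function name `model_stock_merge` and the angle estimate are illustrative assumptions, not the code mergekit actually runs.

```python
import torch
import torch.nn.functional as F

def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Simplified per-tensor Model Stock interpolation (illustration only).

    `base` is a tensor from the base model, `finetuned` holds the same tensor
    from each fine-tuned model (k >= 2). Details may differ from mergekit.
    """
    k = len(finetuned)
    assert k >= 2, "needs at least two fine-tuned models"
    # Offsets of each fine-tuned tensor from the base ("task vectors").
    deltas = [(w - base).flatten() for w in finetuned]
    # Estimate cos(theta) as the mean pairwise cosine similarity of the offsets.
    cos_sims = [
        F.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(k)
        for j in range(i + 1, k)
    ]
    cos_theta = torch.stack(cos_sims).mean().clamp(min=0.0)
    # Interpolation ratio from the paper: t = k*cos / ((k-1)*cos + 1).
    t = (k * cos_theta) / ((k - 1) * cos_theta + 1)
    # Move from the base toward the average of the fine-tuned models.
    avg = torch.stack(finetuned).mean(dim=0)
    return t * avg + (1 - t) * base
```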
### Models Merged
The following models were included in the merge:
* Oumuamua-7b-RP\Oumuamua-7b-instruct-v2-RP-preset-Kunoichi
* Oumuamua-7b-RP\Oumuamua-7b-instruct-v2-RP-preset-LemonadeRP
* Oumuamua-7b-RP\Oumuamua-7b-instruct-v2-RP-preset-LoyalMacaroniMaid
* Oumuamua-7b-RP\Oumuamua-7b-instruct-v2-RP-preset-Berghof
* Oumuamua-7b-RP\Oumuamua-7b-instruct-v2-RP-preset-WestLake
* Oumuamua-7b-RP\Oumuamua-7b-instruct-v2-RP-preset-InfinityRP
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: Oumuamua-7b-RP\Oumuamua-RP-breadcrumbs
dtype: bfloat16
merge_method: model_stock
slices:
- sources:
  - layer_range: [0, 32]
    model: Oumuamua-7b-RP\Oumuamua-7b-instruct-v2-RP-preset-Kunoichi
  - layer_range: [0, 32]
    model: Oumuamua-7b-RP\Oumuamua-7b-instruct-v2-RP-preset-WestLake
  - layer_range: [0, 32]
    model: Oumuamua-7b-RP\Oumuamua-7b-instruct-v2-RP-preset-LemonadeRP
  - layer_range: [0, 32]
    model: Oumuamua-7b-RP\Oumuamua-7b-instruct-v2-RP-preset-InfinityRP
  - layer_range: [0, 32]
    model: Oumuamua-7b-RP\Oumuamua-7b-instruct-v2-RP-preset-LoyalMacaroniMaid
  - layer_range: [0, 32]
    model: Oumuamua-7b-RP\Oumuamua-7b-instruct-v2-RP-preset-Berghof
  - layer_range: [0, 32]
    model: Oumuamua-7b-RP\Oumuamua-RP-breadcrumbs
tokenizer_source: base
```
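## Usage
Since the card lists `library_name: transformers`, the merged model can presumably be loaded with 🤗 Transformers. The snippet below is a minimal usage sketch; the repository id `Aratako/Oumuamua-7b-RP`, the prompt, and the generation settings are illustrative assumptions rather than details stated elsewhere in this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aratako/Oumuamua-7b-RP"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Hello! Please introduce yourself."}]
# Assumes the tokenizer ships a chat template.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```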