---
base_model:
- mlabonne/NeuralDaredevil-8B-abliterated
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- Hastagaras/Halu-OAS-8B-Llama3
library_name: transformers
tags:
- mergekit
- merge
license: llama3
license_link: LICENSE
pipeline_tag: text-generation
---

# Llama-3-Oasis-v1-OAS-8B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

Each merge component had already been subjected to Orthogonal Activation Steering (OAS) to mitigate refusals. The resulting text completion model should be versatile across both positive and negative roleplay scenarios as well as storytelling. Exercise care when using this model.

- mlabonne/NeuralDaredevil-8B-abliterated: high MMLU for reasoning
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS: focus on roleplay
- Hastagaras/Halu-OAS-8B-Llama3: focus on storytelling

Tested with the following sampler settings:
- temperature 1.0-1.45
- minP 0.01-0.02
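
The minP setting keeps only tokens whose probability is at least that fraction of the most likely token's probability, after temperature scaling. A minimal sketch of this filtering step on toy logits (the function, default values, and vocabulary here are illustrative, not this model's actual sampler code):

```python
import math

def min_p_filter(logits, temperature=1.2, min_p=0.02):
    """Keep tokens whose probability is >= min_p * (top token's probability)."""
    # Temperature-scale the logits, then softmax (numerically stable form).
    scaled = [x / temperature for x in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Filter against the min-p threshold and renormalize the survivors.
    threshold = min_p * max(probs)
    kept = {i: p for i, p in enumerate(probs) if p >= threshold}
    norm = sum(kept.values())
    return {i: p / norm for i, p in kept.items()}

# Toy 4-token vocabulary: the very unlikely last token falls below the
# threshold and is dropped; the rest are renormalized.
dist = min_p_filter([5.0, 4.0, 3.0, -5.0])
```

Lower min-p values prune less aggressively, which is why a small range (0.01-0.02) pairs well with the relatively high temperatures above.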

Quantized model files:
- [static GGUF quants c/o mradermacher](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-GGUF)
- [weighted/imatrix GGUF quants c/o mradermacher](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-i1-GGUF)
- [8bpw exl2 quant](https://huggingface.co/grimjim/Llama-3-Oasis-v1-OAS-8B-8bpw_h8_exl2)

Built with Meta Llama 3.

## Merge Details

### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) as the base.
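
Task arithmetic merges models by computing a task vector for each fine-tune (its weights minus the base model's weights), scaling it, and adding it back onto the base. A minimal per-parameter sketch with toy numbers standing in for real model tensors (the function name and values are illustrative):

```python
def task_arithmetic(base, finetunes, weights):
    """Merge element-wise: base + sum_i weights[i] * (finetunes[i] - base)."""
    merged = list(base)
    for model, w in zip(finetunes, weights):
        for j, (m, b) in enumerate(zip(model, base)):
            merged[j] += w * (m - b)
    return merged

# Two fine-tunes at weight 0.3 each, mirroring the configuration below.
base = [1.0, 2.0]
merged = task_arithmetic(base, [[1.5, 2.0], [1.0, 3.0]], [0.3, 0.3])
```

Parameters where a fine-tune matches the base contribute nothing to the sum, so each task vector only shifts the weights that fine-tune actually changed.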

### Models Merged

The following models were also included in the merge:

* [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS)
* [Hastagaras/Halu-OAS-8B-Llama3](https://huggingface.co/Hastagaras/Halu-OAS-8B-Llama3)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: mlabonne/NeuralDaredevil-8B-abliterated
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 32]
    model: mlabonne/NeuralDaredevil-8B-abliterated
  - layer_range: [0, 32]
    model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      weight: 0.3
  - layer_range: [0, 32]
    model: Hastagaras/Halu-OAS-8B-Llama3
    parameters:
      weight: 0.3
```
|