
🪽 Hermes-3-Llama-3.1-8B-lorablated


70B version: mlabonne/Hermes-3-Llama-3.1-70B-lorablated

This is an uncensored version of NousResearch/Hermes-3-Llama-3.1-8B, created with lorablation (a LoRA-based variant of abliteration).

The following example shows how Hermes 3 refuses to answer a legitimate question while the lorablated model complies:

[Screenshot: Hermes 3 refusing a legitimate question vs. the lorablated model answering it]

The recipe is based on @grimjim's grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter (special thanks):

  1. Extraction: We extract a LoRA adapter by comparing two models: a censored Llama 3.1 (meta-llama/Meta-Llama-3.1-8B-Instruct) and an abliterated Llama 3.1 (mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated).
  2. Merge: We merge this new LoRA adapter into the censored NousResearch/Hermes-3-Llama-3.1-8B using task arithmetic to abliterate it.

[Diagram: the lorablation recipe — LoRA extraction followed by the task arithmetic merge]
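The extraction step above can be sketched in a few lines. This is a toy illustration with made-up shapes and random matrices, not mergekit's actual implementation: the low-rank adapter is recovered by taking the SVD of the weight difference between the abliterated and censored models and truncating it to the target rank (the real `mergekit-extract-lora` run uses rank 64 and operates on every weight matrix of the 8B model).

```python
import numpy as np

# Toy sketch of LoRA extraction from a weight difference (hypothetical
# sizes; the real extraction runs per tensor over Llama 3.1's weights).
rng = np.random.default_rng(0)
d = 64      # toy hidden size
rank = 8    # toy target rank (the card's command uses --rank=64)

W_censored = rng.standard_normal((d, d))
# Assume the abliterated weights differ by a low-rank update.
low_rank_delta = 0.01 * rng.standard_normal((d, rank)) @ rng.standard_normal((rank, d))
W_abliterated = W_censored + low_rank_delta

# Extraction: SVD of the weight difference, truncated to the target rank.
delta = W_abliterated - W_censored
U, S, Vt = np.linalg.svd(delta, full_matrices=False)
B = U[:, :rank] * S[:rank]   # LoRA "B" factor, shape (d, rank)
A = Vt[:rank, :]             # LoRA "A" factor, shape (rank, d)

# Since the toy delta is exactly rank-`rank`, B @ A reconstructs it.
assert np.allclose(B @ A, delta, atol=1e-8)
```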

See this article to learn more about abliteration.

⚡ Quantization

🧩 Configuration

This model was merged with the task arithmetic merge method, using NousResearch/Hermes-3-Llama-3.1-8B + Llama-3.1-8B-Instruct-abliterated-LORA as the base.

The following YAML configuration was used to produce this model:

base_model: NousResearch/Hermes-3-Llama-3.1-8B+Llama-3.1-8B-Instruct-abliterated-LORA
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: false
slices:
- sources:
  - layer_range: [0, 32]
    model: NousResearch/Hermes-3-Llama-3.1-8B+Llama-3.1-8B-Instruct-abliterated-LORA
    parameters:
      weight: 1.0
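With `weight: 1.0`, task arithmetic reduces to adding the (LoRA-induced) task vector directly onto the base weights. A toy sketch with made-up 2x2 matrices, not mergekit's code (the real merge applies this per tensor across layers 0-31):

```python
import numpy as np

# Toy stand-ins: "base" weights and a task vector (e.g. a LoRA delta).
W_base = np.array([[1.0, 2.0], [3.0, 4.0]])   # stands in for Hermes-3 weights
delta  = np.array([[0.1, 0.0], [0.0, -0.1]])  # stands in for the adapter delta

weight = 1.0  # "weight: 1.0" from the YAML config
# task_arithmetic: merged = base + sum of scaled task vectors
W_merged = W_base + weight * delta

assert np.allclose(W_merged, [[1.1, 2.0], [3.0, 3.9]])
```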

You can reproduce this model using the following commands:

# Setup
git clone https://github.com/arcee-ai/mergekit.git
cd mergekit && pip install -e .
pip install bitsandbytes

# Extraction
mergekit-extract-lora mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated meta-llama/Meta-Llama-3.1-8B-Instruct Llama-3.1-8B-Instruct-abliterated-LORA --rank=64

# Merge using previous config
mergekit-yaml config.yaml Hermes-3-Llama-3.1-8B-lorablated --allow-crimes --lora-merge-cache=./cache
