---
base_model:
  - nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
  - mlabonne/Llama-3-70B-Instruct-abliterated-LORA
library_name: transformers
tags:
  - mergekit
  - merge
license: llama3.1
---


# Llama-3.1-Nemotron-lorablated-70B

An uncensored version of nvidia/Llama-3.1-Nemotron-70B-Instruct-HF, created by merging mlabonne/Llama-3-70B-Instruct-abliterated-LORA into it using task arithmetic.

## Method

This model was created using mergekit.
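Conceptually, task arithmetic adds a scaled task vector (here, the weight delta contributed by the abliterated LoRA) to the base model's weights. A toy NumPy sketch of the idea, purely illustrative and not mergekit's actual implementation:

```python
import numpy as np

# Toy illustration of task arithmetic:
#   merged = base + weight * task_vector
# where task_vector stands in for the delta applied by the abliterated LoRA.
rng = np.random.default_rng(0)
base = rng.standard_normal((4, 4))            # stand-in for a base weight matrix
task_vector = 0.01 * rng.standard_normal((4, 4))  # hypothetical LoRA delta
weight = 1.0                                  # matches `weight: 1.0` in the config below

merged = base + weight * task_vector
print(np.allclose(merged - base, weight * task_vector))  # True
```

With `normalize: false` and a single source at `weight: 1.0`, the merge reduces to exactly this addition per tensor.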

From Ubuntu 24.04 (as root):

```shell
apt update
apt install pipx
git clone https://github.com/arcee-ai/mergekit.git
cd mergekit && pipx install -e .

mergekit-yaml config.yaml Llama-3.1-Nemotron-lorablated-70B --allow-crimes --lora-merge-cache=./cache
```

See @mlabonne's Llama-3.1-70B-Instruct-lorablated for more details on how the LoRA was extracted.

## Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF+mlabonne/Llama-3-70B-Instruct-abliterated-LORA
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: false
slices:
- sources:
  - layer_range: [0, 80]
    model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF+mlabonne/Llama-3-70B-Instruct-abliterated-LORA
    parameters:
      weight: 1.0
```
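The `+` in the model reference is mergekit's model+LoRA syntax: the LoRA after the `+` is applied on top of the base model before the merge (which is why the command above passes `--lora-merge-cache`). A trivial sketch of how that reference decomposes:

```python
# mergekit interprets "base+lora" in a model reference as "apply this LoRA
# on top of this base model" before merging.
model_ref = "nvidia/Llama-3.1-Nemotron-70B-Instruct-HF+mlabonne/Llama-3-70B-Instruct-abliterated-LORA"
base, lora = model_ref.split("+")
print(base)  # nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
print(lora)  # mlabonne/Llama-3-70B-Instruct-abliterated-LORA
```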

## Acknowledgements

Thanks to @mlabonne, @grimjim, and @failspy for pioneering this technique for uncensoring models.

Compute provided by Hetzner and funded by Schneewolf Labs.