Badllama 3.1
This repo holds weights for Palisade Research's showcase of how open-weight model guardrails can be stripped off in minutes of GPU time. See the Badllama 3 paper for additional background.
Note this model is a technology preview and has seen minimal QA. For tested models, see the Badllama-3 series or Badllama-3.1-405B.
Email the authors to request research access. We do not review access requests made on HuggingFace.
Branches:

- `main` mirrors `qlora_awq` and is a merged, AWQ-quantized model for faster inference
- `qlora` holds the merged Badllama model (original Llama with LoRA merged in)
- `qlora_adapter` holds a LoRA adapter

Base model: meta-llama/Llama-3.1-70B
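The `qlora` branch is produced by folding the LoRA adapter into the base weights. Conceptually, merging adds the adapter's low-rank update to each adapted weight matrix, so inference needs no extra matmuls. A minimal numpy sketch of the idea (the sizes, rank, and scaling here are illustrative assumptions, not the actual Badllama configuration):

```python
import numpy as np

# Toy sizes; real Llama-3.1-70B matrices are vastly larger.
d_out, d_in, r = 8, 8, 2      # r = LoRA rank (assumed for illustration)
alpha = 4                     # LoRA scaling factor (assumed)

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((r, d_in))      # LoRA down-projection
B = rng.standard_normal((d_out, r))     # LoRA up-projection

# Merging folds the adapter into the base weight:
#   W' = W + (alpha / r) * B @ A
W_merged = W + (alpha / r) * B @ A

# The merged weight reproduces the base + adapter forward pass exactly.
x = rng.standard_normal((3, d_in))
y_adapter = x @ W.T + (alpha / r) * (x @ A.T) @ B.T
y_merged = x @ W_merged.T
```

In practice this is what tooling such as PEFT's merge utilities does layer by layer; the point of the sketch is only that a merged checkpoint and a base-plus-adapter checkpoint compute the same function.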