|
--- |
|
license: apache-2.0 |
|
datasets: |
|
- digitalpipelines/wizard_vicuna_70k_uncensored |
|
--- |
|
|
|
# Overview |
|
Fine-tuned [OpenLLaMA-7B](https://huggingface.co/openlm-research/open_llama_7b) with an uncensored/unfiltered Wizard-Vicuna conversation dataset [digitalpipelines/wizard_vicuna_70k_uncensored](https://huggingface.co/datasets/digitalpipelines/wizard_vicuna_70k_uncensored). |
|
Fine-tuning was done with QLoRA, following the process outlined at https://georgesung.github.io/ai/qlora-ift/
|
|
|
- GPTQ quantized model can be found at [digitalpipelines/llama2_7b_chat_uncensored-GPTQ](https://huggingface.co/digitalpipelines/llama2_7b_chat_uncensored-GPTQ) |
|
- GGML 2, 3, 4, 5, 6 and 8-bit quantized models for CPU+GPU inference can be found at [digitalpipelines/llama2_7b_chat_uncensored-GGML](https://huggingface.co/digitalpipelines/llama2_7b_chat_uncensored-GGML)
|
|
|
# Prompt style |
|
The model was trained with the following prompt style: |
|
``` |
|
### HUMAN: |
|
Hello |
|
|
|
### RESPONSE: |
|
Hi, how are you? |
|
|
|
### HUMAN: |
|
I'm fine. |
|
|
|
### RESPONSE: |
|
How can I help you? |
|
... |
|
``` |
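As a minimal sketch, the template above can be built programmatically before passing the string to your inference stack (`transformers`, llama.cpp, etc.). The `build_prompt` helper below is illustrative, not part of this repository; it assumes the conversation is a list of `{"role", "content"}` dicts and ends the prompt with an open `### RESPONSE:` header so the model continues as the assistant:

```python
def build_prompt(turns):
    """Format a conversation using the ### HUMAN: / ### RESPONSE: template
    the model was trained on. `turns` is a list of dicts with keys
    "role" ("human" or "response") and "content".
    """
    parts = []
    for msg in turns:
        role = "HUMAN" if msg["role"] == "human" else "RESPONSE"
        parts.append(f"### {role}:\n{msg['content']}\n")
    # Leave a trailing RESPONSE header open for the model to complete.
    parts.append("### RESPONSE:\n")
    return "\n".join(parts)


prompt = build_prompt([
    {"role": "human", "content": "Hello"},
    {"role": "response", "content": "Hi, how are you?"},
    {"role": "human", "content": "I'm fine."},
])
print(prompt)
```

When generating, it is common to stop decoding once the model emits a new `### HUMAN:` header, so the reply does not run on into a fabricated user turn.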
|
|