---
license: apache-2.0
datasets:
- digitalpipelines/wizard_vicuna_70k_uncensored
---

# Overview
This model is [OpenLLaMA-7B](https://huggingface.co/openlm-research/open_llama_7b) fine-tuned on the uncensored/unfiltered Wizard-Vicuna conversation dataset [digitalpipelines/wizard_vicuna_70k_uncensored](https://huggingface.co/datasets/digitalpipelines/wizard_vicuna_70k_uncensored).
Fine-tuning was done with QLoRA, following the process outlined in https://georgesung.github.io/ai/qlora-ift/.

- A GPTQ quantized model is available at [digitalpipelines/llama2_7b_chat_uncensored-GPTQ](https://huggingface.co/digitalpipelines/llama2_7b_chat_uncensored-GPTQ)
- GGML 2, 3, 4, 5, 6 and 8-bit quantized models for CPU+GPU inference are available at [digitalpipelines/llama2_7b_chat_uncensored-GGML](https://huggingface.co/digitalpipelines/llama2_7b_chat_uncensored-GGML)
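As a rough illustration of the QLoRA setup mentioned above, the sketch below loads the base model in 4-bit and attaches LoRA adapters with `peft`. The hyperparameters (rank, alpha, target modules) are placeholders, not the exact values used for this model; see the linked write-up for the actual training recipe.

```
# Illustrative QLoRA setup sketch; hyperparameters are assumptions, not this model's recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "openlm-research/open_llama_7b"

# Load the base model quantized to 4-bit NF4, as QLoRA requires
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters; only these small matrices are trained
lora_config = LoraConfig(
    r=16,                                  # assumed rank
    lora_alpha=32,                         # assumed scaling
    target_modules=["q_proj", "v_proj"],   # assumed target projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```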

# Prompt style
The model was trained with the following prompt style:
```
### HUMAN:
Hello

### RESPONSE:
Hi, how are you?

### HUMAN:
I'm fine.

### RESPONSE:
How can I help you?
...
```
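For example, here is a minimal inference sketch that builds a prompt in this format with `transformers`. The repo id below is assumed to be this model's, and the generation settings are illustrative only:

```
# Minimal sketch: format a prompt in the "### HUMAN: / ### RESPONSE:" style and generate.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "digitalpipelines/llama2_7b_chat_uncensored"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the prompt as in training: alternating HUMAN/RESPONSE turns,
# ending with an empty RESPONSE header for the model to complete.
prompt = (
    "### HUMAN:\n"
    "Hello\n\n"
    "### RESPONSE:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```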