---
datasets:
- Open-Orca/OpenOrca
- cerebras/SlimPajama-627B
- ehartford/dolphin
---
This model is a fine-tuned version of DeciLM-6b-instruct, trained on the Dolphin GPT-4 dataset.

Please set `naive_attention_prefill` to `true` when loading this model.

**Example:**
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "NewstaR/Porpoise-6b-instruct"

# 4-bit NF4 quantization config to reduce memory usage
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    trust_remote_code=True,
    naive_attention_prefill=True,  # required for this model
)
model.config.use_cache = False
```