---
license: other
library_name: peft
base_model: TheBloke/Llama-2-13B-fp16
---
## Training procedure


The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions


- PEFT 0.5.0

I'm NOT the author of this work.

Quoting anon:

```
Well, here it is. Storytelling Qlora. Trained on base llama2 13B but works flawlessly on other 13Bs. Idk about other sizes.
25MB of nsfw books, 60MB of sfwish ones.
No special formatting other than *** between chapters and ⁂ between books. Takes some text to get going but once you have some context filled, it feels way better for prose than raw llama or instruct models, imho.
Do whatever you want with it, I can't be bothered to maintain a HF page. WTFPL.
It's just shit from nai's archive
```

Credit to "anon49".