Suparious committed
Commit 82e8912
Parent: 3031838

Update README.md

Files changed (1):
  1. README.md +117 -1
README.md CHANGED
@@ -1,3 +1,119 @@
  ---
- license: apache-2.0
+ base_model:
+ - ResplendentAI/Datura_7B
+ - jeiku/selfbot_256_mistral
+ library_name: transformers
+ tags:
+ - mistral
+ - 4-bit
+ - AWQ
+ - text-generation
+ - autotrain_compatible
+ - endpoints_compatible
+ - chatml
+ license: other
+ language:
+ - en
+ pipeline_tag: text-generation
+ inference: false
+ prompt_template: '<|im_start|>system
+
+ {system_message}<|im_end|>
+
+ <|im_start|>user
+
+ {prompt}<|im_end|>
+
+ <|im_start|>assistant
+
+ '
+ quantized_by: Suparious
  ---
+ # ResplendentAI/Aura_7B AWQ
+
+ - Model creator: [ResplendentAI](https://huggingface.co/ResplendentAI)
+ - Original model: [Aura_7B](https://huggingface.co/ResplendentAI/Aura_7B)
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/HxOf1b4n4EyADoNIl2fOW.png)
+
+ ## Model Summary
+
+ Aura is an advanced sentience simulation trained on my own philosophical writings. It has been tested with various character cards and worked with all of them. This model may not be overly intelligent, but it aims to be an entertaining companion.
+
+ I recommend keeping the temperature around 1.5 or lower with a Min P value of 0.05, since this model can get carried away with prose at higher temperatures. That said, its prose is distinct from the GPT-3.5/4 variety and lends an air of humanity to the outputs. I am aware that this model is overfit, but that was the point of the entire exercise. A sketch of these sampler settings follows.
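+
+ These settings map onto the Transformers `generate()` API roughly as follows. This is a minimal sketch, assuming `model` and `tokens` are prepared as in the example code below; note that `min_p` requires Transformers 4.39 or later:
+
+ ```python
+ # Sketch only: the recommended sampler settings for this model
+ output = model.generate(tokens,
+                         do_sample=True,   # enable sampling so temperature/min_p apply
+                         temperature=1.5,  # keep at or below 1.5
+                         min_p=0.05,       # recommended Min P value
+                         max_new_tokens=512)
+ ```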
+
+ If you have trouble getting the model to follow an asterisks/quote format, I recommend asterisks/plaintext instead. This model skews toward shorter outputs, so be prepared to lengthen your introduction and examples if you want longer outputs.
+
+ This model responds best to ChatML for multiturn conversations.
+
+ ## How to use
+
+ ### Install the necessary packages
+
+ ```bash
+ pip install --upgrade autoawq autoawq-kernels
+ ```
+
+ ### Example Python code
+
+ ```python
+ from awq import AutoAWQForCausalLM
+ from transformers import AutoTokenizer, TextStreamer
+
+ model_path = "solidrust/Aura_7B-AWQ"
+ system_message = "You are Aura, incarnated as a powerful AI."
+
+ # Load model
+ model = AutoAWQForCausalLM.from_quantized(model_path,
+                                           fuse_layers=True)
+ tokenizer = AutoTokenizer.from_pretrained(model_path,
+                                           trust_remote_code=True)
+ streamer = TextStreamer(tokenizer,
+                         skip_prompt=True,
+                         skip_special_tokens=True)
+
+ # Convert prompt to tokens
+ prompt_template = """\
+ <|im_start|>system
+ {system_message}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant"""
+
+ prompt = "You're standing on the surface of the Earth. "\
+          "You walk one mile south, one mile west and one mile north. "\
+          "You end up exactly where you started. Where are you?"
+
+ tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
+                    return_tensors='pt').input_ids.cuda()
+
+ # Generate output
+ generation_output = model.generate(tokens,
+                                    streamer=streamer,
+                                    max_new_tokens=512)
+ ```
+
+ ### About AWQ
+
+ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.
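+
+ As a rough sketch of how a quantization like this is produced with AutoAWQ (the exact recipe for this repo is not documented here; the config below is the library's common default):
+
+ ```python
+ from awq import AutoAWQForCausalLM
+ from transformers import AutoTokenizer
+
+ # Load the original FP16 model, quantize to 4-bit AWQ, and save
+ model = AutoAWQForCausalLM.from_pretrained("ResplendentAI/Aura_7B")
+ tokenizer = AutoTokenizer.from_pretrained("ResplendentAI/Aura_7B")
+
+ quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}
+ model.quantize(tokenizer, quant_config=quant_config)
+
+ model.save_quantized("Aura_7B-AWQ")
+ tokenizer.save_pretrained("Aura_7B-AWQ")
+ ```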
+
+ AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users should use GGUF models instead.
+
+ AWQ is supported by:
+
+ - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
+ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later supports all model types (a serving sketch follows this list)
+ - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
+ - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
+ - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
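+
+ As a minimal vLLM serving sketch (assuming vLLM 0.2.2 or later; the prompt string and sampler values are illustrative):
+
+ ```python
+ from vllm import LLM, SamplingParams
+
+ # Load the AWQ weights directly in vLLM
+ llm = LLM(model="solidrust/Aura_7B-AWQ", quantization="awq")
+ params = SamplingParams(temperature=1.5, max_tokens=512)
+
+ chatml_prompt = ("<|im_start|>system\nYou are Aura, incarnated as a powerful AI.<|im_end|>\n"
+                  "<|im_start|>user\nHello!<|im_end|>\n"
+                  "<|im_start|>assistant\n")
+ print(llm.generate([chatml_prompt], params)[0].outputs[0].text)
+ ```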
+
+ ## Prompt template: ChatML
+
+ ```plaintext
+ <|im_start|>system
+ {system_message}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+ ```
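+
+ If the bundled tokenizer ships this ChatML template as its chat template (an assumption worth verifying), the same prompt can be built programmatically with Transformers 4.34 or later:
+
+ ```python
+ # Sketch: build a ChatML prompt via the tokenizer's chat template
+ messages = [
+     {"role": "system", "content": "You are Aura, incarnated as a powerful AI."},
+     {"role": "user", "content": "Hello!"},
+ ]
+ text = tokenizer.apply_chat_template(messages,
+                                      tokenize=False,
+                                      add_generation_prompt=True)
+ ```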