Suparious committed 26455b9 (1 parent: 51fba56)

Update README.md

Files changed (1): README.md (+119, -1)

---
tags:
- merge
- mergekit
- lazymergekit
- Locutusque/OpenCerebrum-1.0-7b-DPO
- Locutusque/Hyperion-3.0-Mistral-7B-DPO
- quantized
- 4-bit
- AWQ
- transformers
- pytorch
- mistral
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- chatml
base_model:
- Locutusque/OpenCerebrum-1.0-7b-DPO
- Locutusque/Hyperion-3.0-Mistral-7B-DPO
language:
- en
quantized_by: Suparious
pipeline_tag: text-generation
model_creator: hydra-project
model_name: CerebrumHyperion-7B-DPO
inference: false
prompt_template: '<|im_start|>system

  {system_message}<|im_end|>

  <|im_start|>user

  {prompt}<|im_end|>

  <|im_start|>assistant

  '
---

# hydra-project/CerebrumHyperion-7B-DPO AWQ

- Model creator: [hydra-project](https://huggingface.co/hydra-project)
- Original model: [CerebrumHyperion-7B-DPO](https://huggingface.co/hydra-project/CerebrumHyperion-7B-DPO)

## Model Summary

CerebrumHyperion-7B-DPO is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Locutusque/OpenCerebrum-1.0-7b-DPO](https://huggingface.co/Locutusque/OpenCerebrum-1.0-7b-DPO)
* [Locutusque/Hyperion-3.0-Mistral-7B-DPO](https://huggingface.co/Locutusque/Hyperion-3.0-Mistral-7B-DPO)

## How to use

### Install the necessary packages

```bash
pip install --upgrade autoawq autoawq-kernels
```
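
AWQ inference requires a CUDA-capable NVIDIA GPU (see "About AWQ" below). As an optional sanity check before loading the model, this short sketch confirms that the package is installed and a GPU is visible:

```python
# Optional sanity check: autoawq installed and a CUDA GPU visible
from importlib.metadata import version
import torch

print("autoawq:", version("autoawq"))
print("CUDA available:", torch.cuda.is_available())
```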

### Example Python code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/CerebrumHyperion-7B-DPO-AWQ"
system_message = "You are Cerebrum, incarnated as a powerful AI."

# Load the quantized model, its tokenizer, and a streamer for incremental output
model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# ChatML prompt template
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. " \
         "You walk one mile south, one mile west and one mile north. " \
         "You end up exactly where you started. Where are you?"

# Convert the formatted prompt to input token IDs on the GPU
tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output, streaming tokens to the console as they are produced
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```
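
The streamer prints tokens as they are generated. If you also want the completion as a single string, `generate` returns the full sequence (prompt plus completion), so you can slice off the prompt and decode the rest; a small follow-up sketch using the variables from the example above:

```python
# Decode only the newly generated tokens (everything after the prompt)
completion = tokenizer.decode(generation_output[0][tokens.shape[1]:],
                              skip_special_tokens=True)
print(completion)
```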

### About AWQ

AWQ is an efficient, accurate and fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users should use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later (needed for support of all model types)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) - version 4.35.0 and later, from any code or client that supports Transformers (see the sketch below)
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
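
Because Transformers 4.35.0+ reads the AWQ settings from the checkpoint's `quantization_config` (as AutoAWQ-produced checkpoints normally carry), the model can also be loaded without importing AutoAWQ directly. A minimal sketch, assuming `autoawq` is installed and an NVIDIA GPU is available:

```python
# Minimal sketch: load the AWQ checkpoint directly with Transformers (>= 4.35.0)
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/CerebrumHyperion-7B-DPO-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Plain smoke test; see the ChatML prompt template below for chat-style use
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```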

## Prompt template: ChatML

```plaintext
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
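
If the bundled tokenizer ships a ChatML chat template (an assumption; if it does not, format the string manually as shown above), the prompt can also be built with `apply_chat_template`. A minimal sketch:

```python
# Minimal sketch: build the ChatML prompt via the tokenizer's chat template
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("solidrust/CerebrumHyperion-7B-DPO-AWQ")

messages = [
    {"role": "system", "content": "You are Cerebrum, incarnated as a powerful AI."},
    {"role": "user", "content": "Briefly explain AWQ quantization."},
]

# add_generation_prompt=True appends the trailing <|im_start|>assistant turn
prompt = tokenizer.apply_chat_template(messages,
                                       tokenize=False,
                                       add_generation_prompt=True)
print(prompt)
```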