Commit 05fca4a by Suparious (1 parent: 607ca9c)

Add model avatar

Files changed (1): README.md (+87, -3)
base_model: alpindale/Mistral-7B-v0.2
---

# macadeliccc/Mistral-7B-v0.2-OpenHermes AWQ

- Model creator: [macadeliccc](https://huggingface.co/macadeliccc)
- Original model: [Mistral-7B-v0.2-OpenHermes](https://huggingface.co/macadeliccc/Mistral-7B-v0.2-OpenHermes)

![image/webp](https://cdn-uploads.huggingface.co/production/uploads/6455cc8d679315e4ef16fbec/AbagOgU056oIB7S31XESC.webp)

## Model Summary

SFT training parameters:

- Learning rate: 2e-4
- Batch size: 8
- Gradient accumulation steps: 4
- Dataset: teknium/OpenHermes-2.5 (the 200k split contains a slight bias towards roleplay and theory-of-life prompts)
- LoRA rank (r): 16
- LoRA alpha: 16

Training time: 13 hours on an A100.

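As a quick sanity check on the hyperparameters above, the effective optimization batch size is the per-device batch size times the gradient accumulation steps. A minimal sketch (the dictionary keys below are illustrative, not taken from the author's actual training script):

```python
# Hypothetical summary of the SFT hyperparameters listed above;
# key names are illustrative, not the author's actual training config.
sft_params = {
    "learning_rate": 2e-4,
    "per_device_train_batch_size": 8,
    "gradient_accumulation_steps": 4,
    "lora_r": 16,
    "lora_alpha": 16,
    "dataset": "teknium/OpenHermes-2.5",
}

# Effective batch size seen by the optimizer:
# 8 sequences per step x 4 accumulated steps = 32 sequences per update.
effective_batch_size = (sft_params["per_device_train_batch_size"]
                        * sft_params["gradient_accumulation_steps"])
print(effective_batch_size)  # 32
```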
## How to use

### Install the necessary packages

```bash
pip install --upgrade autoawq autoawq-kernels
```

### Example Python code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/Mistral-7B-v0.2-OpenHermes-AWQ"
system_message = "You are Hermes, incarnated as a powerful AI."

# Load the quantized model and tokenizer
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# ChatML prompt template
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. " \
         "You walk one mile south, one mile west and one mile north. " \
         "You end up exactly where you started. Where are you?"

# Convert the prompt to tokens
tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output, streaming tokens as they are produced
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with equivalent or better quality than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users should use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later (supports all model types)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) - version 4.35.0 or later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code

## Prompt template: ChatML

```plaintext
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
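
The ChatML layout above can be exercised with plain string formatting, independent of any model. A minimal sketch; the system message and user prompt are just example values:

```python
# Fill the ChatML template with example values; no model or GPU required.
chatml_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""

text = chatml_template.format(
    system_message="You are Hermes, incarnated as a powerful AI.",
    prompt="Where are you?",
)
print(text)
```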