Suparious committed
Commit 465bc08
1 Parent(s): 38636a1

Add Avatar and usage example

Files changed (1):
  1. README.md +83 -0
README.md CHANGED
@@ -24,7 +24,90 @@ inference: false
 - Model creator: [migtissera](https://huggingface.co/migtissera)
 - Original model: [Tess-7B-v2.0](https://huggingface.co/migtissera/Tess-7B-v2.0)

+ ![Tesoro](https://huggingface.co/migtissera/Tess-7B-v2.0/resolve/main/Tesoro.png)
+
 ## Model Summary

 Tess, short for Tesoro (Treasure in Italian), is a general-purpose Large Language Model series. Tess-7B-v2.0 was trained on the Mistral-7B-v0.2 base.

+ ## How to use
+
+ ### Install the necessary packages
+
+ ```bash
+ pip install --upgrade autoawq autoawq-kernels
+ ```
+
+ ### Example Python code
+
+ ```python
+ from awq import AutoAWQForCausalLM
+ from transformers import AutoTokenizer, TextStreamer
+
+ model_path = "solidrust/Tess-7B-v2.0-AWQ"
+ system_message = "You are Tess, incarnated as a powerful AI."
+
+ # Load model
+ model = AutoAWQForCausalLM.from_quantized(model_path,
+                                           fuse_layers=True)
+ tokenizer = AutoTokenizer.from_pretrained(model_path,
+                                           trust_remote_code=True)
+ streamer = TextStreamer(tokenizer,
+                         skip_prompt=True,
+                         skip_special_tokens=True)
+
+ # Convert prompt to tokens
+ prompt_template = """\
+ <|im_start|>system
+ {system_message}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant"""
+
+ prompt = "You're standing on the surface of the Earth. "\
+          "You walk one mile south, one mile west and one mile north. "\
+          "You end up exactly where you started. Where are you?"
+
+ tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
+                    return_tensors='pt').input_ids.cuda()
+
+ # Generate output
+ generation_output = model.generate(tokens,
+                                    streamer=streamer,
+                                    max_new_tokens=512)
+ ```
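+
+ The streamer above only prints tokens as they arrive; if you also want the completion as a string, a minimal follow-up, reusing the variables from the example above:
+
+ ```python
+ # generation_output holds the full token ids, prompt included;
+ # decode them to recover the text.
+ output_text = tokenizer.decode(generation_output[0], skip_special_tokens=True)
+ print(output_text)
+ ```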
+
+ ### About AWQ
+
+ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with equivalent or better quality than the most commonly used GPTQ settings.
+
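+ For reference, this is roughly how a model is quantized to AWQ in the first place; a minimal sketch using AutoAWQ's standard 4-bit GEMM settings (the paths are illustrative, not the exact command used to produce this repo):
+
+ ```python
+ from awq import AutoAWQForCausalLM
+ from transformers import AutoTokenizer
+
+ model_path = "migtissera/Tess-7B-v2.0"   # FP16 source weights
+ quant_path = "Tess-7B-v2.0-AWQ"          # local output directory (illustrative)
+ quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}
+
+ # Load the unquantized model and its tokenizer
+ model = AutoAWQForCausalLM.from_pretrained(model_path)
+ tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
+
+ # Quantize (AutoAWQ runs its own calibration pass), then save
+ model.quantize(tokenizer, quant_config=quant_config)
+ model.save_quantized(quant_path)
+ tokenizer.save_pretrained(quant_path)
+ ```
+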
+ AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
+
+ It is supported by:
+
+ - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
+ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all model types (see the sketch after this list)
+ - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
+ - [Transformers](https://huggingface.co/docs/transformers) - version 4.35.0 and later, from any code or client that supports Transformers
+ - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
+
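+ As one example of the above, serving this checkpoint through vLLM would look roughly like this; a minimal sketch, assuming vLLM 0.2.2+ is installed (the raw prompt is not ChatML-formatted here, purely for brevity):
+
+ ```python
+ from vllm import LLM, SamplingParams
+
+ # Load the AWQ checkpoint with vLLM's AWQ kernels
+ llm = LLM(model="solidrust/Tess-7B-v2.0-AWQ", quantization="awq")
+
+ sampling = SamplingParams(temperature=0.7, max_tokens=512)
+ outputs = llm.generate(["Where on Earth can you walk one mile south, "
+                         "one mile west and one mile north, and end up "
+                         "exactly where you started?"], sampling)
+ print(outputs[0].outputs[0].text)
+ ```
+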
+ ## Prompt template: ChatML
+
+ ```plaintext
+ <|im_start|>system
+ {system_message}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+ ```
+
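+ With Transformers 4.35+, the same ChatML prompt can also be built from the tokenizer's chat template rather than by hand; a minimal sketch, assuming this repo's tokenizer config ships a ChatML chat template (reusing `tokenizer`, `system_message` and `prompt` from the usage example):
+
+ ```python
+ messages = [
+     {"role": "system", "content": system_message},
+     {"role": "user", "content": prompt},
+ ]
+
+ # Tokenize, appending the generation prompt (<|im_start|>assistant)
+ tokens = tokenizer.apply_chat_template(messages,
+                                        add_generation_prompt=True,
+                                        return_tensors="pt").cuda()
+ ```
+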
+ ## How to cite
+
+ ```bibtex
+ @misc{Genstruct,
+   url={https://huggingface.co/NousResearch/Genstruct-7B},
+   title={Genstruct},
+   author={euclaise}
+ }
+ ```