Files changed (1)
  1. README.md +80 -33
README.md CHANGED
@@ -14,7 +14,6 @@ spaces: false
  language:
  - en
  ---
-
  # Quantized Octopus V2: On-device language model for super agent
 
  This repo includes two types of quantized models: **GGUF** and **AWQ**, for our Octopus V2 model at [NexaAIDev/Octopus-v2](https://huggingface.co/NexaAIDev/Octopus-v2)
@@ -25,63 +24,112 @@ This repo includes two types of quantized models: **GGUF** and **AWQ**, for our
 
 
  # GGUF Quantization
- Run with [Ollama](https://github.com/ollama/ollama)
  ```bash
- ollama run NexaAIDev/octopus-v2-Q4_K_M
  ```
 
  # AWQ Quantization
  Python example:
 
  ```python
  from awq import AutoAWQForCausalLM
- from transformers import AutoTokenizer, GemmaForCausalLM
  import torch
  import time
  import numpy as np
-
  def inference(input_text):
-
-     tokens = tokenizer(
-         input_text,
-         return_tensors='pt'
-     ).input_ids.cuda()
-
      start_time = time.time()
      generation_output = model.generate(
-         tokens,
-         do_sample=True,
-         temperature=0.7,
-         top_p=0.95,
-         top_k=40,
-         max_new_tokens=512
      )
      end_time = time.time()
-
-     res = tokenizer.decode(generation_output[0])
-     res = res.split(input_text)
      latency = end_time - start_time
-     output_tokens = tokenizer.encode(res)
-     num_output_tokens = len(output_tokens)
      throughput = num_output_tokens / latency
-
-     return {"output": res[-1], "latency": latency, "throughput": throughput}
-
-
- model_id = "path/to/Octopus-v2-AWQ"
  model = AutoAWQForCausalLM.from_quantized(model_id, fuse_layers=True,
                                            trust_remote_code=False, safetensors=True)
- tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=False)
-
  prompts = ["Below is the query from the users, please call the correct function and generate the parameters to call the function.\n\nQuery: Can you take a photo using the back camera and save it to the default location? \n\nResponse:"]
-
  avg_throughput = []
  for prompt in prompts:
      out = inference(prompt)
      avg_throughput.append(out["throughput"])
      print("nexa model result:\n", out["output"])
-
  print("avg throughput:", np.mean(avg_throughput))
  ```
 
@@ -114,5 +162,4 @@ _Quantized with llama.cpp_
 
 
  **Acknowledgement**:
- We sincerely thank our community members, [Mingyuan](https://huggingface.co/ThunderBeee), [Zoey](https://huggingface.co/ZY6), [Brian](https://huggingface.co/JoyboyBrian), [Perry](https://huggingface.co/PerryCheng614), [Qi](https://huggingface.co/qiqiWav), [David](https://huggingface.co/Davidqian123) for their extraordinary contributions to this quantization effort.
-
 
  language:
  - en
  ---
  # Quantized Octopus V2: On-device language model for super agent
 
  This repo includes two types of quantized models: **GGUF** and **AWQ**, for our Octopus V2 model at [NexaAIDev/Octopus-v2](https://huggingface.co/NexaAIDev/Octopus-v2)
 
 
 
  # GGUF Quantization
 
+ To run the models, please download them to your local machine using either `git clone` or the [Hugging Face Hub](https://huggingface.co/docs/huggingface_hub/en/guides/download):
+ ```bash
+ git clone https://huggingface.co/NexaAIDev/Octopus-v2-gguf-awq
+ ```
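+
+ If you prefer the Hugging Face Hub route, here is a minimal sketch using the `huggingface_hub` Python package (the local directory name is only an example):
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Download the repository contents into a local folder
+ snapshot_download(
+     repo_id="NexaAIDev/Octopus-v2-gguf-awq",
+     local_dir="./Octopus-v2-gguf-awq",
+ )
+ ```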
+
+ ## Run with [llama.cpp](https://github.com/ggerganov/llama.cpp) (Recommended)
+
+ 1. **Clone and compile:**
+
+ ```bash
+ git clone https://github.com/ggerganov/llama.cpp
+ cd llama.cpp
+ # Compile the source code:
+ make
+ ```
+
+ 2. **Execute the model:**
+
+ Run the following command in the terminal:
+
+ ```bash
+ ./main -m ./path/to/octopus-v2-Q4_K_M.gguf -n 256 -p "Below is the query from the users, please call the correct function and generate the parameters to call the function.\n\nQuery: Take a selfie for me with front camera\n\nResponse:"
+ ```
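+
+ Here `-m` points to the downloaded GGUF file, `-n` limits the number of tokens to generate, and `-p` supplies the prompt; adjust the model path to wherever you placed the cloned files.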
+
+ ## Run with [Ollama](https://github.com/ollama/ollama)
+
+ Since our models have not been uploaded to the Ollama server, please download the models and manually import them into Ollama by following these steps:
+
+ 1. Install Ollama on your local machine. You can also follow the import guide in the [Ollama GitHub repository](https://github.com/ollama/ollama/blob/main/docs/import.md):
+
+ ```bash
+ git clone https://github.com/ollama/ollama.git ollama
+ ```
+
+ 2. Locate the local Ollama directory:
+ ```bash
+ cd ollama
+ ```
+
+ 3. Create a `Modelfile` in your directory:
+ ```bash
+ touch Modelfile
+ ```
+
+ 4. In the Modelfile, include a `FROM` statement with the path to your local model, and the default parameters:
+
+ ```bash
+ FROM ./path/to/octopus-v2-Q4_K_M.gguf
+ PARAMETER temperature 0
+ PARAMETER num_ctx 1024
+ PARAMETER stop <nexa_end>
+ ```
+
+ 5. Use the following command to add the model to Ollama:
+ ```bash
+ ollama create octopus-v2-Q4_K_M -f Modelfile
+ ```
+
+ 6. Verify that the model has been successfully imported:
+ ```bash
+ ollama ls
+ ```
+
+ 7. Run the model:
  ```bash
+ ollama run octopus-v2-Q4_K_M "Below is the query from the users, please call the correct function and generate the parameters to call the function.\n\nQuery: Take a selfie for me with front camera\n\nResponse:"
  ```
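+
+ Once imported, the model can also be called from Python. The following is a minimal sketch assuming the official `ollama` Python client is installed (`pip install ollama`) and the Ollama server is running locally:
+
+ ```python
+ import ollama
+
+ # Send the same function-calling prompt to the locally imported model
+ result = ollama.generate(
+     model="octopus-v2-Q4_K_M",
+     prompt="Below is the query from the users, please call the correct function and generate the parameters to call the function.\n\nQuery: Take a selfie for me with front camera\n\nResponse:",
+ )
+ print(result["response"])
+ ```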
 
  # AWQ Quantization
  Python example:
 
  ```python
+ from transformers import AutoTokenizer
  from awq import AutoAWQForCausalLM
  import torch
  import time
  import numpy as np
  def inference(input_text):
      start_time = time.time()
+     input_ids = tokenizer(input_text, return_tensors="pt").to('cuda')
+     input_length = input_ids["input_ids"].shape[1]
      generation_output = model.generate(
+         input_ids["input_ids"],
+         do_sample=False,
+         max_length=1024
      )
      end_time = time.time()
+     # Decode only the generated part
+     generated_sequence = generation_output[:, input_length:].tolist()
+     res = tokenizer.decode(generated_sequence[0])
      latency = end_time - start_time
+     num_output_tokens = len(generated_sequence[0])
      throughput = num_output_tokens / latency
+     return {"output": res, "latency": latency, "throughput": throughput}
+ # Initialize tokenizer and model
+ model_id = "/path/to/Octopus-v2-AWQ-NexaAIDev"
+ tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=False)
  model = AutoAWQForCausalLM.from_quantized(model_id, fuse_layers=True,
                                            trust_remote_code=False, safetensors=True)
  prompts = ["Below is the query from the users, please call the correct function and generate the parameters to call the function.\n\nQuery: Can you take a photo using the back camera and save it to the default location? \n\nResponse:"]
  avg_throughput = []
  for prompt in prompts:
      out = inference(prompt)
      avg_throughput.append(out["throughput"])
      print("nexa model result:\n", out["output"])
  print("avg throughput:", np.mean(avg_throughput))
  ```
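+
+ Note that this example uses greedy decoding (`do_sample=False`), reports throughput as the number of newly generated tokens divided by wall-clock latency, and assumes a CUDA-capable GPU since the inputs are moved to `'cuda'`.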
 
 
  **Acknowledgement**:
+ We sincerely thank our community members, [Mingyuan](https://huggingface.co/ThunderBeee), [Zoey](https://huggingface.co/ZY6), [Brian](https://huggingface.co/JoyboyBrian), [Perry](https://huggingface.co/PerryCheng614), [Qi](https://huggingface.co/qiqiWav), [David](https://huggingface.co/Davidqian123) for their extraordinary contributions to this quantization effort.