qaihm-bot committed
Commit e978aba
1 Parent(s): 76e1fcc

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +12 -6
README.md CHANGED
@@ -34,10 +34,13 @@ More details on model performance across various devices, can be found
  - Model size: 4.73 MB


+
+
  | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
  | ---|---|---|---|---|---|---|---|
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.66 ms | 0 - 5 MB | FP16 | NPU | [SqueezeNet-1_1.tflite](https://huggingface.co/qualcomm/SqueezeNet-1_1/blob/main/SqueezeNet-1_1.tflite)
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.705 ms | 0 - 7 MB | FP16 | NPU | [SqueezeNet-1_1.so](https://huggingface.co/qualcomm/SqueezeNet-1_1/blob/main/SqueezeNet-1_1.so)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.646 ms | 0 - 2 MB | FP16 | NPU | [SqueezeNet-1_1.tflite](https://huggingface.co/qualcomm/SqueezeNet-1_1/blob/main/SqueezeNet-1_1.tflite)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.702 ms | 0 - 7 MB | FP16 | NPU | [SqueezeNet-1_1.so](https://huggingface.co/qualcomm/SqueezeNet-1_1/blob/main/SqueezeNet-1_1.so)
+


  ## Installation
@@ -98,15 +101,17 @@ python -m qai_hub_models.models.squeezenet1_1.export
  Profile Job summary of SqueezeNet-1_1
  --------------------------------------------------
  Device: Snapdragon X Elite CRD (11)
- Estimated Inference Time: 0.82 ms
- Estimated Peak Memory Range: 0.57-0.57 MB
+ Estimated Inference Time: 0.80 ms
+ Estimated Peak Memory Range: 0.58-0.58 MB
  Compute Units: NPU (70) | Total (70)


  ```
+
+
  ## How does this work?

- This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/SqueezeNet-1_1/export.py)
+ This [export script](https://aihub.qualcomm.com/models/squeezenet1_1/qai_hub_models/models/SqueezeNet-1_1/export.py)
  leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
  on-device. Lets go through each step below in detail:

@@ -183,6 +188,7 @@ spot check the output with expected output.
  AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).


+
  ## Run demo on a cloud-hosted device

  You can also run the demo on-device.
@@ -219,7 +225,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
  ## License
  - The license for the original implementation of SqueezeNet-1_1 can be found
  [here](https://github.com/pytorch/vision/blob/main/LICENSE).
- - The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})
+ - The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)

  ## References
  * [SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size](https://arxiv.org/abs/1602.07360)
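
The timing updates in this commit come from AI Hub profile jobs produced by the export module named in the second hunk header. A minimal sketch of reproducing such a profile run, assuming a standard qai_hub_models installation and an AI Hub API token; the exact package extras and setup steps live in the README's Installation section, which this diff does not show:

```bash
# Install the AI Hub Models package (name taken from the module path in the hunk header).
pip install qai_hub_models

# Configure the AI Hub client with your API token (issued after signing up for access).
qai-hub configure --api_token YOUR_API_TOKEN

# Run the export referenced above; it submits compile and profile jobs to Qualcomm AI Hub
# and prints a "Profile Job summary" like the one this commit updates.
python -m qai_hub_models.models.squeezenet1_1.export
```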
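The third hunk touches the "Run demo on a cloud-hosted device" section of the README. A hedged sketch of what that section describes, assuming the demo entry point mirrors the export module path above; the on-device flag is an assumption, not something shown in this diff:

```bash
# Run the demo locally (module path assumed by analogy with squeezenet1_1.export).
python -m qai_hub_models.models.squeezenet1_1.demo

# Per the README section, the same demo can also target a cloud-hosted Qualcomm device;
# the flag name here is assumed and should be checked against the full model card.
python -m qai_hub_models.models.squeezenet1_1.demo --on-device
```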