qaihm-bot committed
Commit 01460bf
1 Parent(s): 28ce127

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +7 -12

README.md CHANGED
@@ -31,10 +31,12 @@ More details on model performance across various devices, can be found
 - Model size: 45.1 MB
 
 
+
+
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 8.641 ms | 8 - 10 MB | FP16 | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 8.21 ms | 5 - 20 MB | FP16 | NPU | [FastSam-S.so](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.so)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 8.7 ms | 8 - 38 MB | FP16 | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite)
+
 
 
 ## Installation
@@ -92,19 +94,11 @@ device. This script does the following:
 python -m qai_hub_models.models.fastsam_s.export
 ```
 
-```
-Profile Job summary of FastSam-S
---------------------------------------------------
-Device: Snapdragon X Elite CRD (11)
-Estimated Inference Time: 9.29 ms
-Estimated Peak Memory Range: 4.71-4.71 MB
-Compute Units: NPU (286) | Total (286)
 
 
-```
 ## How does this work?
 
-This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/FastSam-S/export.py)
+This [export script](https://aihub.qualcomm.com/models/fastsam_s/qai_hub_models/models/FastSam-S/export.py)
 leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
 on-device. Lets go through each step below in detail:
 
@@ -181,6 +175,7 @@ spot check the output with expected output.
 AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
 
 
+
 ## Run demo on a cloud-hosted device
 
 You can also run the demo on-device.
@@ -217,7 +212,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 ## License
 - The license for the original implementation of FastSam-S can be found
 [here](https://github.com/CASIA-IVA-Lab/FastSAM/blob/main/LICENSE).
-- The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})
+- The license for the compiled assets for on-device deployment can be found [here](https://github.com/CASIA-IVA-Lab/FastSAM/blob/main/LICENSE)
 
 ## References
 * [Fast Segment Anything](https://arxiv.org/abs/2306.12156)