
ConvNext-Tiny: Optimized for Mobile Deployment

ImageNet classifier and general-purpose backbone

ConvNext-Tiny is a machine learning model that can classify images from the ImageNet dataset. It can also be used as a backbone in building more complex models for specific use cases.

This model is an implementation of ConvNext-Tiny found here.

This repository provides scripts to run ConvNext-Tiny on Qualcomm® devices. More details on model performance across various devices can be found here.

Model Details

  • Model Type: Image classification
  • Model Stats:
    • Model checkpoint: ImageNet
    • Input resolution: 224x224
    • Number of parameters: 28.6M
    • Model size: 109 MB
| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| ConvNext-Tiny | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 3.36 | 0 - 2 | FP16 | NPU | ConvNext-Tiny.tflite |
| ConvNext-Tiny | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 3.919 | 1 - 189 | FP16 | NPU | ConvNext-Tiny.so |
| ConvNext-Tiny | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 13.496 | 0 - 65 | FP16 | NPU | ConvNext-Tiny.onnx |
| ConvNext-Tiny | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 2.459 | 0 - 212 | FP16 | NPU | ConvNext-Tiny.tflite |
| ConvNext-Tiny | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 2.835 | 0 - 35 | FP16 | NPU | ConvNext-Tiny.so |
| ConvNext-Tiny | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 9.529 | 1 - 371 | FP16 | NPU | ConvNext-Tiny.onnx |
| ConvNext-Tiny | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 2.141 | 0 - 61 | FP16 | NPU | ConvNext-Tiny.tflite |
| ConvNext-Tiny | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 2.437 | 1 - 36 | FP16 | NPU | Use Export Script |
| ConvNext-Tiny | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 8.676 | 0 - 125 | FP16 | NPU | ConvNext-Tiny.onnx |
| ConvNext-Tiny | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 3.36 | 0 - 2 | FP16 | NPU | ConvNext-Tiny.tflite |
| ConvNext-Tiny | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 3.624 | 1 - 2 | FP16 | NPU | Use Export Script |
| ConvNext-Tiny | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 3.396 | 0 - 2 | FP16 | NPU | ConvNext-Tiny.tflite |
| ConvNext-Tiny | SA8255 (Proxy) | SA8255P Proxy | QNN | 3.703 | 1 - 2 | FP16 | NPU | Use Export Script |
| ConvNext-Tiny | SA8775 (Proxy) | SA8775P Proxy | TFLITE | 3.361 | 0 - 2 | FP16 | NPU | ConvNext-Tiny.tflite |
| ConvNext-Tiny | SA8775 (Proxy) | SA8775P Proxy | QNN | 3.673 | 1 - 2 | FP16 | NPU | Use Export Script |
| ConvNext-Tiny | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 3.352 | 0 - 19 | FP16 | NPU | ConvNext-Tiny.tflite |
| ConvNext-Tiny | SA8650 (Proxy) | SA8650P Proxy | QNN | 3.631 | 1 - 2 | FP16 | NPU | Use Export Script |
| ConvNext-Tiny | SA8295P ADP | SA8295P | TFLITE | 10.471 | 0 - 53 | FP16 | NPU | ConvNext-Tiny.tflite |
| ConvNext-Tiny | SA8295P ADP | SA8295P | QNN | 9.481 | 1 - 6 | FP16 | NPU | Use Export Script |
| ConvNext-Tiny | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 9.193 | 0 - 199 | FP16 | NPU | ConvNext-Tiny.tflite |
| ConvNext-Tiny | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 9.845 | 0 - 34 | FP16 | NPU | Use Export Script |
| ConvNext-Tiny | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 3.898 | 1 - 1 | FP16 | NPU | Use Export Script |
| ConvNext-Tiny | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 16.269 | 58 - 58 | FP16 | NPU | ConvNext-Tiny.onnx |

Installation

This model can be installed as a Python package via pip.

pip install qai-hub-models
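
Once installed, you can sanity-check the package by loading the pretrained model and reproducing the parameter count from Model Details above. This is a minimal sketch using the package's Model loader (the same loader used in the export walkthrough below):

from qai_hub_models.models.convnext_tiny import Model

# Load pretrained ImageNet weights and count parameters
torch_model = Model.from_pretrained()
num_params = sum(p.numel() for p in torch_model.parameters())
print(f"{num_params / 1e6:.1f}M parameters")  # expect roughly 28.6M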

Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

Sign in to Qualcomm® AI Hub with your Qualcomm® ID. Once signed in, navigate to Account -> Settings -> API Token.

With this API token, you can configure your client to run models on the cloud hosted devices.

qai-hub configure --api_token API_TOKEN

Navigate to docs for more information.
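
To confirm the client is configured correctly, you can list the cloud-hosted devices visible to your account. A minimal check with the qai_hub client:

import qai_hub as hub

# Prints the devices your API token can target
for device in hub.get_devices():
    print(device.name)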

Demo off target

The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input.

python -m qai_hub_models.models.convnext_tiny.demo

The above demo runs a reference implementation of pre-processing, model inference, and post-processing.

NOTE: If you are running in a Jupyter Notebook or Google Colab-like environment, add the following to your cell instead of the command above.

%run -m qai_hub_models.models.convnext_tiny.demo
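
For reference, the demo's pre-process, inference, and post-process flow can be approximated in a few lines. The sketch below is not the packaged demo: it uses the torchvision reference implementation directly, and "sample.jpg" is a placeholder for any local image.

import torch
from PIL import Image
from torchvision.models import ConvNeXt_Tiny_Weights, convnext_tiny

# Load the reference model and the preprocessing that matches its checkpoint
weights = ConvNeXt_Tiny_Weights.DEFAULT
model = convnext_tiny(weights=weights).eval()
preprocess = weights.transforms()  # resize, center-crop to 224x224, normalize

# "sample.jpg" is a placeholder path
batch = preprocess(Image.open("sample.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)
print(weights.meta["categories"][int(logits.argmax())])  # predicted label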

Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following:

  • Runs a performance check on-device on a cloud-hosted device.
  • Downloads compiled assets that can be deployed on-device for Android.
  • Runs an accuracy check between PyTorch and on-device outputs.

python -m qai_hub_models.models.convnext_tiny.export

A successful run prints a profiling summary like the following:
Profiling Results
------------------------------------------------------------
ConvNext-Tiny
Device                          : Samsung Galaxy S23 (13)
Runtime                         : TFLITE                 
Estimated inference time (ms)   : 3.4                    
Estimated peak memory usage (MB): [0, 2]                 
Total # Ops                     : 328                    
Compute Unit(s)                 : NPU (328 ops)          
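
The export script also accepts command-line options, for example to target a different device or runtime. Since the available flags can change between releases, the most reliable reference is the script's own help text:

python -m qai_hub_models.models.convnext_tiny.export --help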

How does this work?

This export script leverages Qualcomm® AI Hub to optimize, validate, and deploy this model on-device. Let's go through each step below in detail:

Step 1: Compile model for on-device deployment

To compile a PyTorch model for on-device deployment, we first trace the model in memory using torch.jit.trace and then call the submit_compile_job API.

import torch

import qai_hub as hub
from qai_hub_models.models.convnext_tiny import Model

# Load the pretrained model
torch_model = Model.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S23")

# Trace the model in memory, then compile it for the target device
pt_model = torch.jit.trace(torch_model, torch.rand(1, 3, 224, 224))
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=torch_model.get_input_spec(),
)

# Get target model to run on-device
target_model = compile_job.get_target_model()
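
If you want the compiled asset locally as well (for example, to deploy it later), the target model can be saved to disk. A short sketch, with an illustrative filename:

# Save the compiled asset locally (filename is illustrative)
target_model.download("ConvNext-Tiny.tflite")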

Step 2: Performance profiling on cloud-hosted device

After compiling the model in Step 1, it can be profiled on-device using the target_model from the compile job. Note that profiling runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to the provided job URL to view a variety of on-device performance metrics.

profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
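
Once submitted, the job can also be polled and its metrics fetched programmatically instead of via the job URL. A minimal sketch, assuming the job completes successfully:

# Block until the cloud job finishes, then fetch the metrics as a dict
profile_job.wait()
profile = profile_job.download_profile()
print(profile)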
        

Step 3: Verify on-device accuracy

To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud-hosted device.

input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()

With the output of the model, you can compute metrics like PSNR or relative error, or spot-check the output against the expected output.
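
As an illustration, a PSNR check between the two outputs could look like the sketch below. The psnr helper is written out for clarity and is not part of the package; the snippet reuses torch_model, input_data, and on_device_output from the steps above.

import numpy as np
import torch

def psnr(reference: np.ndarray, test: np.ndarray) -> float:
    # Peak signal-to-noise ratio in dB; higher means closer agreement
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")
    peak = float(np.max(np.abs(reference)))
    return 10.0 * np.log10(peak ** 2 / mse)

# PyTorch reference output for the same sample inputs
torch_out = torch_model(*[torch.tensor(v[0]) for v in input_data.values()]).detach().numpy()

# First output tensor from the on-device run
device_out = next(iter(on_device_output.values()))[0]

print(f"PSNR: {psnr(torch_out, device_out):.2f} dB")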

Note: On-device profiling and inference require access to Qualcomm® AI Hub. Sign up for access.

Run demo on a cloud-hosted device

You can also run the demo on-device.

python -m qai_hub_models.models.convnext_tiny.demo --on-device

NOTE: If you are running in a Jupyter Notebook or Google Colab-like environment, add the following to your cell instead of the command above.

%run -m qai_hub_models.models.convnext_tiny.demo -- --on-device

Deploying compiled model to Android

The models can be deployed using multiple runtimes:

  • TensorFlow Lite (.tflite export): This tutorial provides a guide to deploy the .tflite model in an Android application. A desktop sanity check for the .tflite asset is sketched after this list.

  • QNN (.so export): This sample app provides instructions on how to use the .so shared library in an Android application.
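
Before integrating the .tflite asset into an Android app, you can sanity-check it on a desktop with the TensorFlow Lite interpreter. A minimal sketch, assuming the exported file is named ConvNext-Tiny.tflite:

import numpy as np
import tensorflow as tf

# Load the exported model and allocate its tensors
interpreter = tf.lite.Interpreter(model_path="ConvNext-Tiny.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run a random input of the expected shape through the model
dummy = np.random.rand(*input_details[0]["shape"]).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

logits = interpreter.get_tensor(output_details[0]["index"])
print("Predicted class index:", int(np.argmax(logits)))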

View on Qualcomm® AI Hub

Get more details on ConvNext-Tiny's performance across various devices here. Explore all available models on Qualcomm® AI Hub.

License

  • The license for the original implementation of ConvNext-Tiny can be found here.
  • The license for the compiled assets for on-device deployment can be found here.
