Guidance Needed: Utilizing a TFLite Model in Python Environment and Extracting Keypoints from Output Tensor
I would like to run a TFLite model in a Python environment. Could someone guide me on how to use this model effectively? Specifically, the model produces a (1, 28, 28, 38) output tensor. How can I extract keypoints from this output tensor?
You can find instructions for running the demo on-device in our documentation. You can also look at demo.py and app.py in our OpenPose repo to see how the output tensor is processed. Thank you!
Please double-check that the input image you're using in Python has (224, 224) dimensions. If you run the demo, the PAF and heatmap shapes in Python should be
torch.Size([1, 38, 28, 28])
torch.Size([1, 19, 28, 28])
which is the same as what you get on Android, except transposed, since PyTorch and TFLite use different channel-ordering conventions (channels-first NCHW vs. channels-last NHWC).
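To make that concrete, here is a minimal NumPy sketch of the layout conversion. The zero-filled arrays below just stand in for whatever the TFLite interpreter returns from `get_tensor()`; the variable names are illustrative, not from the repo.

```python
import numpy as np

# Stand-ins for the TFLite outputs: channels-last (N, H, W, C) layout.
paf_tflite = np.zeros((1, 28, 28, 38), dtype=np.float32)
heatmap_tflite = np.zeros((1, 28, 28, 19), dtype=np.float32)

# Transpose to channels-first (N, C, H, W) so the tensors match
# the shapes the PyTorch demo code works with.
paf = np.transpose(paf_tflite, (0, 3, 1, 2))
heatmaps = np.transpose(heatmap_tflite, (0, 3, 1, 2))

print(paf.shape)       # (1, 38, 28, 28)
print(heatmaps.shape)  # (1, 19, 28, 28)
```

After this transpose you can feed the tensors through the same post-processing used in the Python demo.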
Thank you for your reply. How can I convert the output tensor from TFLite to keypoints on Android?
You can follow the logic in the Python code and write Android code that mimics its behavior.
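As a reference for what that logic looks like, here is a hedged Python sketch of the usual OpenPose heatmap post-processing: a simple local-maximum search over one heatmap channel. The exact demo.py implementation may differ (for example, it may smooth the heatmap first), and the function name and threshold here are illustrative.

```python
import numpy as np

def extract_keypoints(heatmap, threshold=0.1):
    """Find peak locations in one (H, W) heatmap channel.

    A pixel is a keypoint candidate if it exceeds `threshold`
    and is at least as large as its four direct neighbours.
    """
    h, w = heatmap.shape
    keypoints = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = heatmap[y, x]
            if (v > threshold
                    and v >= heatmap[y - 1, x] and v >= heatmap[y + 1, x]
                    and v >= heatmap[y, x - 1] and v >= heatmap[y, x + 1]):
                keypoints.append((x, y, float(v)))
    return keypoints

# Toy example: a single peak planted at (x=10, y=5).
hm = np.zeros((28, 28), dtype=np.float32)
hm[5, 10] = 1.0
print(extract_keypoints(hm))  # [(10, 5, 1.0)]
```

Note that the coordinates come out in the 28x28 heatmap grid, so they must be scaled back up to the original input resolution. Porting this loop to Kotlin or Java on Android is mechanical once the transposed layout is accounted for.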