TAPTRv3: Spatial and Temporal Context Foster Robust Tracking of Any Point in Long Video
Abstract
In this paper, we present TAPTRv3, which builds upon TAPTRv2 to improve its point-tracking robustness in long videos. TAPTRv2 is a simple DETR-like framework that can accurately track any point in real-world videos without requiring a cost volume. TAPTRv3 improves TAPTRv2 by addressing its shortcoming in querying high-quality features from long videos, in which the target tracking points typically undergo increasing variation over time. In TAPTRv3, we propose to utilize both spatial and temporal context to achieve better feature querying along the spatial and temporal dimensions, yielding more robust tracking in long videos. For better spatial feature querying, we present Context-aware Cross-Attention (CCA), which leverages the surrounding spatial context to enhance the quality of attention scores when querying image features. For better temporal feature querying, we introduce Visibility-aware Long-Temporal Attention (VLTA), which attends to all past frames while taking their corresponding visibilities into account, effectively addressing the feature-drifting problem in TAPTRv2 caused by its RNN-like long-temporal modeling. TAPTRv3 surpasses TAPTRv2 by a large margin on most of the challenging datasets and achieves state-of-the-art performance. Even when compared with methods trained on large-scale extra internal data, TAPTRv3 remains competitive.
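The abstract describes VLTA only at a high level: attend over all past frames while down-weighting frames where the point was occluded. Below is a minimal, hypothetical PyTorch sketch of that idea; the function name, tensor shapes, and the log-visibility bias are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def visibility_aware_temporal_attention(query, past_feats, past_vis, temperature=1.0):
    """Hypothetical sketch of visibility-aware long-temporal attention.

    query:      (B, C)    point query for the current frame
    past_feats: (B, T, C) point features from all past frames
    past_vis:   (B, T)    predicted visibility in [0, 1] for each past frame
    """
    # Attention logits between the current query and every past frame.
    logits = torch.einsum('bc,btc->bt', query, past_feats) / (query.shape[-1] ** 0.5)
    # Bias the logits by log-visibility so occluded frames contribute less,
    # which is one way to curb feature drift from unreliable past frames.
    logits = logits + torch.log(past_vis.clamp(min=1e-6))
    weights = F.softmax(logits / temperature, dim=-1)
    # Aggregate past features into an updated query representation.
    return torch.einsum('bt,btc->bc', weights, past_feats)

# Toy usage: 2 tracked points, 50 past frames, 256-dim features.
q = torch.randn(2, 256)
feats = torch.randn(2, 50, 256)
vis = torch.rand(2, 50)
out = visibility_aware_temporal_attention(q, feats, vis)  # (2, 256)
```

Unlike RNN-like temporal modeling, which compresses history into a single recurrent state, this kind of attention can revisit every past frame directly, with visibility deciding how much each frame is trusted.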
Community
The TAPTR series has reached its third version. TAPTRv3 focuses on robust tracking of any point in long videos. Benefiting from Visibility-aware Long-Temporal Attention (VLTA), Context-aware Cross-Attention (CCA), and auto-triggered global matching, TAPTRv3 surpasses TAPTRv2 by a large margin and achieves state-of-the-art performance. Even when compared with methods trained on extra internal real-world data, TAPTRv3 remains competitive.
For more information about the TAPTR series, please refer to our homepage: https://taptr.github.io