arxiv:2410.16268

SAM2Long: Enhancing SAM 2 for Long Video Segmentation with a Training-Free Memory Tree

Published on Oct 21
· Submitted by myownskyW7 on Oct 22
#2 Paper of the day
Authors:
Abstract

The Segment Anything Model 2 (SAM 2) has emerged as a powerful foundation model for object segmentation in both images and videos, paving the way for various downstream video applications. The crucial design of SAM 2 for video segmentation is its memory module, which prompts object-aware memories from previous frames for current-frame prediction. However, its greedy-selection memory design suffers from the "error accumulation" problem, where an erroneous or missed mask cascades and influences the segmentation of subsequent frames, limiting the performance of SAM 2 on complex long-term videos. To this end, we introduce SAM2Long, an improved training-free video object segmentation strategy that considers the segmentation uncertainty within each frame and chooses the video-level optimal results from multiple segmentation pathways in a constrained tree search manner. In practice, we maintain a fixed number of segmentation pathways throughout the video. For each frame, multiple masks are proposed based on the existing pathways, creating various candidate branches. We then select the same fixed number of branches with the highest cumulative scores as the new pathways for the next frame. After processing the final frame, the pathway with the highest cumulative score is chosen as the final segmentation result. Benefiting from its heuristic search design, SAM2Long is robust to occlusions and object reappearances, and can effectively segment and track objects in complex long-term videos. Notably, SAM2Long achieves an average improvement of 3.0 points across all 24 head-to-head comparisons, with gains of up to 5.3 points in J&F on long-term video object segmentation benchmarks such as SA-V and LVOS. The code is released at https://github.com/Mark12Ding/SAM2Long.
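To make the constrained tree search concrete, here is a minimal Python sketch of the pathway selection loop described above. This is not the released implementation: the `Pathway` class, `propose_masks`, and `NUM_PATHWAYS` are illustrative placeholders, and the real scoring and memory handling live in the official repository.

```python
# Hypothetical sketch of SAM2Long's constrained tree search
# (a beam search over segmentation pathways).

NUM_PATHWAYS = 3  # fixed number of pathways kept throughout the video


class Pathway:
    def __init__(self, masks=None, score=0.0):
        self.masks = masks or []  # per-frame masks chosen so far
        self.score = score        # cumulative confidence score


def propose_masks(pathway, frame):
    """Placeholder: run SAM 2 conditioned on this pathway's memory and
    return several (mask, confidence) candidates for the current frame."""
    raise NotImplementedError


def segment_video(frames):
    pathways = [Pathway()]
    for frame in frames:
        # Branch: every pathway proposes multiple candidate masks.
        candidates = []
        for p in pathways:
            for mask, conf in propose_masks(p, frame):
                candidates.append(Pathway(p.masks + [mask], p.score + conf))
        # Prune: keep only the top-scoring branches as the new pathways.
        candidates.sort(key=lambda c: c.score, reverse=True)
        pathways = candidates[:NUM_PATHWAYS]
    # After the final frame, return the highest-scoring pathway.
    return max(pathways, key=lambda p: p.score).masks
```

Because the beam width stays fixed, the cost grows linearly with video length while still letting a temporarily low-confidence pathway (e.g., during an occlusion) survive and win later.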

Community

Paper author · Paper submitter

We have released SAM2Long, a training-free enhancement to SAM 2:
🔥 Enhances SAM 2 for long-term video segmentation, with less error accumulation under occlusion and object reappearance.
⚡️ Maintains multiple segmentation pathways dynamically with a training-free memory tree, boosting robustness efficiently.
🤯 Achieves significant improvements over SAM 2 across 24 head-to-head comparisons on SA-V and LVOS.

Github: https://github.com/Mark12Ding/SAM2Long
Homepage: https://mark12ding.github.io/project/SAM2Long/
Technical Report: https://arxiv.org/abs/2410.16268

