Columns: title (string, 28-121 chars), abstract (string, 697-11.5k chars), introduction (string, 382-11.9k chars)
Dong_Fast_Monocular_Scene_Reconstruction_With_Global-Sparse_Local-Dense_Grids_CVPR_2023
Abstract Indoor scene reconstruction from monocular images has long been sought after by augmented reality and robotics developers. Recent advances in neural field representations and monocular priors have led to remarkable results in scene-level surface reconstructions. The reliance on Multilayer Perceptrons (MLP), however, significantly limits speed in training and rendering. In this work, we propose to directly use a signed distance function (SDF) in sparse voxel block grids for fast and accurate scene reconstruction without MLPs. Our globally sparse and locally dense data structure exploits surfaces' spatial sparsity, enables cache-friendly queries, and allows direct extensions to multi-modal data such as color and semantic labels. To apply this representation to monocular scene reconstruction, we develop a scale calibration algorithm for fast geometric initialization from monocular depth priors. We apply differentiable volume rendering from this initialization to refine details with fast convergence. We also introduce efficient high-dimensional continuous Conditional Random Fields (CRFs) to further exploit the semantic-geometry consistency between scene objects. Experiments show that our approach is 10× faster in training and 100× faster in rendering while achieving comparable accuracy to state-of-the-art neural implicit methods.
1. Introduction

Reconstructing indoor spaces into 3D representations is a key requirement for many real-world applications, including robot navigation, immersive virtual/augmented reality experiences, and architectural design. Particularly useful is reconstruction from monocular cameras, which are the most prevalent and accessible to casual users. While much research has been devoted to this task, several challenges remain.

*CMU RI. Work done during an internship at NVIDIA. †NVIDIA Research.

Figure 1. Color and semantic scene reconstruction from our system with monocular images and learned monocular priors (semantic classes: wall, floor, cabinet, chair, sofa, table, door, window, picture, curtain).

Conventional monocular reconstruction from multi-view RGB images uses patch matching [34], which takes hours to reconstruct even a relatively small scene. Several 3D reconstruction methods [38, 46] have demonstrated fast reconstruction by applying 3D convolutional neural networks to feature volumes, but they have limited resolution and struggle to generalize to larger scenes. Recently, unified neural radiance fields [22] and neural implicit representations were developed for the purpose of accurate surface reconstruction from images [29, 43, 47]. While this was successfully demonstrated on single objects, the weak photometric constraint leads to poor reconstruction and slow convergence for large-scale scenes. Guo et al. [14] and Yu et al. [49] improved the quality and convergence speed of neural field reconstruction on large-scale scenes by incorporating learned geometric cues such as depth and normal estimation [11, 31]; however, training and evaluation remain inefficient. This is primarily because these approaches rely on MLPs and feature grids [23] that encode the entire scene rather than concentrating around surfaces.

In contrast to MLPs, an explicit SDF voxel grid can be adaptively allocated around surfaces and allows fast query and sampling. However, an efficient implementation of differentiable SDF voxel grids without MLPs is missing. Fridovich-Keil and Yu et al. [12] used an explicit density and color grid, but it is limited to rendering small objects. Muller et al. [23] developed a feature grid with spatial hashing for fast neural rendering, but its backbone hash map is not collision-free, causing inevitably slow random access and inaccurate indexing at large scales. Dong et al. [10] proposed a collision-free spatially hashed grid following Niessner et al. [28], but it lacks support for differentiable rendering. Several practical challenges hinder the implementation of an efficient differentiable data structure: 1. a collision-free spatial hash map on GPU that supports one-to-one indexing from positions to voxels; 2. differentiable trilinear interpolation between spatially hashed voxels; 3. parallel ray marching and uniform sampling from a spatial hash map.

Figure 2. Qualitative reconstruction comparison on ScanNet [7] against (a) COLMAP [34], (b) NeRF [22], (c) VolSDF [47], (d) NeuS [43], (e) ManhattanSDF [14], (f) MonoSDF-MLP [49], and (g) MonoSDF-Grid [49]; (h) ours. While being 10× faster in training, we achieve similar reconstruction results to state-of-the-art MonoSDF [49], with fine details (see Fig. 9).
Our approach: we address these challenges using a differentiable, globally sparse and locally dense voxel grid. We transform a collision-free GPU hash map [35] into a differentiable tensor indexer [30]. This generates a one-to-one map between positions and globally sparse voxel blocks around approximate surfaces, and enables skipping empty space for efficient ray marching and uniform sampling. We further manage locally dense voxel arrays within sparse voxel blocks for GPU cache-friendly contiguous data queries via trilinear interpolation. As a result, using explicit SDF grids leads to fast SDF gradient computation in a single forward pass, which can further accelerate differentiable rendering.

This new data structure presents a new challenge: we can only optimize grid parameters if they are allocated around surfaces. To resolve this, we make use of off-the-shelf monocular depth priors [11, 31] and design a novel initialization scheme with global structure-from-motion (SfM) constraints to calibrate these unscaled predicted depths. This results in a consistent geometric initialization via volumetric fusion, ready to be refined through differentiable volume rendering.

We additionally incorporate semantic monocular priors [17] to provide cues for geometric refinement in 3D. For instance, we use colors and semantics to guide the sharpening of normals around object boundaries, which in turn improves the quality of colors and semantics. We enforce these intuitive notions through our novel continuous Conditional Random Field (CRF). We use Monte Carlo samples on the SDF zero-crossings to create continuous CRF nodes and define pairwise energy functions to enforce local consistency of colors, normals, and semantics. Importantly, we define similarity in a high-dimensional space that consists of coordinates, colors, normals, and semantics, to reject spatially close samples with contrasting properties. To make inference tractable, we follow Krahenbuhl et al. [16] and use variational inference, leading to a series of convolutions in a high-dimensional space. We implement an efficient permutohedral lattice convolution [1] using the collision-free GPU hash map to power the continuous CRF inference.

The final output of our system is a scene reconstruction with geometry, colors, and semantic labels, as shown in Fig. 1. Experiments show that our method is 10× faster in training, 100× faster in inference, and has comparable accuracy measured by F-scores against state-of-the-art implicit reconstruction systems [14, 49]. In summary, we propose a fast scene reconstruction system for monocular images. Our contributions include:
• A globally sparse, locally dense differentiable volumetric data structure that exploits surface spatial sparsity without an MLP (a minimal sketch follows below);
• A scale calibration algorithm that produces consistent geometric initialization from unscaled monocular depths;
• A fast monocular scene reconstruction system equipped with volume rendering and high-dimensional continuous CRF optimization.
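To make the sparse-dense layout concrete, here is a minimal CPU sketch of the idea: a hash map from block coordinates to dense 8³ voxel arrays, with a trilinear SDF query that treats unallocated blocks as empty space. The block size, voxel size, and the Python dict standing in for the collision-free GPU hash map are illustrative assumptions, not the paper's implementation, and differentiability is omitted.

```python
# Minimal sketch of a globally sparse, locally dense SDF grid.
# Illustration only; the paper's version is a collision-free GPU hash map
# with differentiable queries. Block/voxel sizes are assumptions.
import numpy as np

BLOCK = 8          # voxels per block side (assumed)
VOXEL = 0.02       # voxel size in meters (assumed)

class SparseDenseSDFGrid:
    def __init__(self):
        # Global sparse level: block coordinate -> dense local voxel array.
        self.blocks = {}

    def allocate(self, points):
        """Allocate dense BLOCK^3 SDF blocks around approximate surface points."""
        keys = np.unique(np.floor(points / (BLOCK * VOXEL)).astype(int), axis=0)
        for k in map(tuple, keys):
            if k not in self.blocks:
                # SDF initialized to +1 (truncation value, assumed).
                self.blocks[k] = np.ones((BLOCK, BLOCK, BLOCK), dtype=np.float32)

    def query(self, p):
        """Trilinearly interpolate the SDF at a single 3D point."""
        g = np.asarray(p) / VOXEL
        base = np.floor(g).astype(int)
        w = g - base                              # trilinear weights
        sdf = 0.0
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    v = base + np.array([dx, dy, dz])
                    key = tuple(v // BLOCK)       # which sparse block
                    local = tuple(v % BLOCK)      # index inside the dense block
                    if key not in self.blocks:
                        return None               # empty space: nothing allocated here
                    weight = ((w[0] if dx else 1 - w[0]) *
                              (w[1] if dy else 1 - w[1]) *
                              (w[2] if dz else 1 - w[2]))
                    sdf += weight * self.blocks[key][local]
        return sdf
```

Returning None for an unallocated neighbor is the same property that lets ray marching skip empty space; the GPU version performs this block lookup in parallel for all samples along all rays and keeps each block's voxels contiguous for cache-friendly access.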
Dashpute_Thermal_Spread_Functions_TSF_Physics-Guided_Material_Classification_CVPR_2023
Abstract Robust and non-destructive material classification is a challenging but crucial first step in numerous vision applications. We propose a physics-guided material classification framework that relies on thermal properties of the object. Our key observation is that the rate of heating and cooling of an object depends on the unique intrinsic properties of the material, namely the emissivity and diffusivity. We leverage this observation by gently heating the objects in the scene with a low-power laser for a fixed duration and then turning it off, while a thermal camera captures measurements during the heating and cooling process. We then take this spatial and temporal "thermal spread function" (TSF) to solve an inverse heat equation using the finite-differences approach, resulting in a spatially varying estimate of diffusivity and emissivity. These tuples are then used to train a classifier that produces a fine-grained material label at each spatial pixel. Our approach is extremely simple, requiring only a small light source (low-power laser) and a thermal camera, and produces robust classification results with 86% accuracy over 16 classes (code: https://github.com/aniketdashpute/TSF).
1. Introduction

Material classification is an important task pertinent to a diverse set of fields including, but not limited to, medicine and biology [1], chip manufacturing, recycling [2, 3], land and weather monitoring using satellites, and vision and robotics. Robust material classification is particularly critical in separating various parts of an object based on their constituent materials [2, 3]. Common tools for material classification span a large spectrum, from simple tools such as infrared spectroscopy and hyperspectral imaging to more exotic tools such as ultrasound and x-ray fluorescent imagers.

Figure 1. Material classification with thermal properties. Materials have unique thermodynamics that enables robust classification. We propose a simple setup composed of (a) a 60 mW laser as a heat source and a thermal camera to capture the heat profile (inset). This results in (b) a stack of images we call the Thermal Spread Function (TSF) that encodes the heating and cooling effect around the laser dot. The TSF data is used to (c) estimate the diffusivity map and (d) the external heat source term using an inverse Finite Difference Method, which is then used to (e) classify the material robustly.

Material classification primarily relies on various dimensions of light, including bidirectional reflectance distribution function (BRDF) slices [4], color and NIR images [5], frequency- and depth-dependent ToF distortion [6], spectral imaging methods [7, 8], multi-modal methods [9], and thermal imaging [10]. Methods based on RGB images are popular due to the availability of RGB cameras and large labeled datasets, but suffer from a lack of robustness. In contrast, spectrum-based imaging methods enable accurate classification but often require complex optical systems such as hyperspectral cameras, and are sensitive to external illumination conditions.

Human perception of materials is often multi-modal, such as relying on touch and vision to accurately classify a material. The act of touching further involves a thermodynamic exchange that depends on the material composition of the object. A metallic object results in rapid conduction of heat, whereas non-metallic objects, such as ones made of wood, result in a slower transfer rate. Thus, the intrinsic thermal properties provide insight into material properties that is often missed or confused by vision alone. Previous work on contact-based ways of measuring conductivity, namely haptic sensing [11, 12] and haptic displays [13, 14], leveraged this idea by developing "an artificial fingertip". The drawback of this approach is that it is invasive: it requires touching the scene and can thus interfere with it. Thermal characterization for recycling has also been done using a spectrometer and a fluxmeter [15].

Thermal imaging methods enable contact-free estimation of thermal properties, thus allowing us to classify materials rapidly and in a non-destructive manner. One of the most popular contact-less methods for determining thermal diffusivity is the laser flash method.
A laser is flashed on a thin slice (microns thick) of a material and the temperature change is observed from the other side, providing a quantitative estimate of the thermal diffusivity or conductivity [16, 17]. This is restrictive due to the constrained lab setup and the requirement of thin slices. Thermal imaging has also been used for non-destructive infrastructure inspection, where the difference in thermal behaviour between unaltered and defective zones allows defect detection [18].

We take inspiration from these contact-less methods and develop a non-invasive thermal imaging system for material classification. As opposed to previous methods, our method is robust enough to be used in uncontrolled environments and is not limited to constrained lab setups. We use a visible laser beam as an external heat source that shines on a material, which absorbs a fraction of this beam at the optical wavelength. The absorption of this energy leads to a rise in temperature that shows up in the long-wave infrared (LWIR) domain and is captured by a thermal camera. The thermal camera captures the heating process and, once the heat source is off, the cooling (refer to Fig. 1). We define the temperature transients obtained from this heating and cooling as the Thermal Spread Function (TSF) and use it to robustly classify materials.

A key challenge with using the TSF for classifying materials is that a thermal camera requires a known emissivity (ε, the ratio of the radiated energy of the object to that of a black body) for accurately estimating the temperature. To overcome this ambiguity, we leverage a physically accurate heat diffusion equation (see Sec. 2) that carefully models the thermodynamic interactions between the ambient scene and the object. The estimated TSF is then used for training a material classifier, which enables robust material classification.

Figure 2. Heating and capturing process. We use an external heat source to heat the object and a thermal camera to observe the heating and cooling effect. Refer to Sec. 2 for a detailed explanation.

Our approach and main contributions

When objects are heated through radiation on the surface and allowed to cool down, they display characteristic temperature changes. These changes are based on their initial temperature, surface absorption, and heat diffusivity. We inject heat through a small portion of the surface of a material, and it diffuses throughout the body over time. If we observe a small patch of material in the vicinity of the injection, we observe this diffusion, both during the injection phase and during the cooling phase when no external heat is supplied. We call this varying 2D temperature profile the Thermal Spread Function (TSF) of the material.

We measure the TSF of the material with a long-wave infrared (LWIR) thermal camera. We derive diffusivity and an absorption factor from the TSF to characterize the material, as these properties are independent of the initial temperature of the object. Our main contributions are the following.
• We first derive a physically accurate model that characterizes the Thermal Spread Functions (TSFs) as a function of the initial temperature of the object and the object's thermodynamic properties.
• We then use a Finite Differences (FD) method to solve the inverse heat problem, recovering parameters related to diffusion, absorption, and emission (a toy sketch of the forward model follows below).
• Finally, we design and demonstrate a simple optical setup for non-invasively recovering the thermodynamic properties and using them to classify materials.
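As a rough illustration of the forward model behind a TSF (referenced in the second contribution above), the sketch below advances the 2D heat equation with a source term by explicit finite differences and fits diffusivity by matching simulated to measured TSFs. The grid spacing, time step, forward-Euler scheme, and grid-search fit are assumptions for clarity; the paper's inverse FD formulation also recovers absorption and emission terms, which this simplified sketch ignores.

```python
# Hedged sketch of the forward TSF model: explicit finite differences for
# du/dt = alpha * laplacian(u) + source, followed by a naive diffusivity fit.
import numpy as np

def heat_step(u, alpha, source, dx=1e-3, dt=1e-3):
    """One forward-Euler step on a 2D grid.

    Stability of this explicit scheme requires alpha * dt / dx**2 <= 0.25.
    """
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] +
                       u[1:-1, 2:] + u[1:-1, :-2] - 4.0 * u[1:-1, 1:-1]) / dx**2
    return u + dt * (alpha * lap + source)

def simulate_tsf(alpha, source, u0, n_heat, n_cool):
    """Heating (source on) then cooling (source off); the frame stack is the TSF.

    Radiative and convective losses are ignored in this simplified sketch.
    """
    frames, u = [], u0.copy()
    for t in range(n_heat + n_cool):
        s = source if t < n_heat else np.zeros_like(source)
        u = heat_step(u, alpha, s)
        frames.append(u.copy())
    return np.stack(frames)

def fit_alpha(measured_tsf, source, u0, candidates, n_heat):
    """Grid-search the diffusivity whose simulated TSF best matches the measurement."""
    n_cool = measured_tsf.shape[0] - n_heat
    errs = [np.mean((simulate_tsf(a, source, u0, n_heat, n_cool) - measured_tsf) ** 2)
            for a in candidates]
    return candidates[int(np.argmin(errs))]
```

The point of the sketch is only to show why heating and cooling transients constrain diffusivity: a higher alpha spreads the injected heat faster and flattens the TSF, so matching simulated against measured frames pins the value down.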
Johari_ESLAM_Efficient_Dense_SLAM_System_Based_on_Hybrid_Representation_of_CVPR_2023
Abstract We present ESLAM, an efficient implicit neural representation method for Simultaneous Localization and Mapping (SLAM). ESLAM reads RGB-D frames with unknown camera poses in a sequential manner and incrementally reconstructs the scene representation while estimating the current camera position in the scene. We incorporate the latest advances in Neural Radiance Fields (NeRF) into a SLAM system, resulting in an efficient and accurate dense visual SLAM method. Our scene representation consists of multi-scale axis-aligned perpendicular feature planes and shallow decoders that, for each point in the continuous space, decode the interpolated features into Truncated Signed Distance Field (TSDF) and RGB values. Our extensive experiments on three standard datasets, Replica, ScanNet, and TUM RGB-D, show that ESLAM improves the accuracy of 3D reconstruction and camera localization of state-of-the-art dense visual SLAM methods by more than 50%, while it runs up to ×10 faster and does not require any pre-training. Project page: https://www.idiap.ch/paper/eslam
1. Introduction

Dense visual Simultaneous Localization and Mapping (SLAM) is a fundamental challenge in 3D computer vision with several applications such as autonomous driving, robotics, and virtual/augmented reality. It is defined as constructing a 3D map of an unknown environment while simultaneously approximating the camera pose.

While traditional SLAM systems [16, 41, 45, 55, 76, 77] mostly focus on localization accuracy, recent learning-based dense visual SLAM methods [2, 11, 25, 35, 60, 64, 65, 67, 81, 86] provide meaningful global 3D maps and show reasonable but limited reconstruction accuracy. Following the advent of Neural Radiance Fields (NeRF) [37] and the demonstration of their capacity to reason about the geometry of a large-scale scene [8, 13, 20, 22, 26, 75, 78] and reconstruct 3D surfaces [1, 29, 47, 48, 62, 71, 72, 82, 85], novel NeRF-based dense SLAM methods have been developed. In particular, iMAP [59] and NICE-SLAM [87] utilize neural implicit networks to achieve a consistent geometry representation.

iMAP [59] represents the geometry with a single huge MLP, similar to NeRF [37], and optimizes the camera poses during the rendering process. NICE-SLAM [87] improves iMAP by storing the representation locally on voxel grids to prevent the forgetting problem. Despite promising reconstruction quality, these methods are computationally demanding for real-time applications, and their ability to capture geometry details is limited. In addition, NICE-SLAM [87] uses frozen pre-trained MLPs, which limits its generalizability to novel scenes. We take NICE-SLAM [87] as a baseline and provide the following contributions:
• We leverage an implicit Truncated Signed Distance Field (TSDF) [1] to represent geometry, which converges noticeably faster than common rendering-based representations like volume density [59] or occupancy [87] and results in higher-quality reconstruction.
• Instead of storing features on voxel grids, we propose employing multi-scale axis-aligned feature planes [6], which reduces the growth rate of the memory footprint w.r.t. scene side-length from cubic to quadratic.
• We benchmark our method on three challenging datasets, Replica [57], ScanNet [12], and TUM RGB-D [58], to demonstrate the performance of our method in comparison to existing ones, and provide an extensive ablation study to validate our design choices.

Thanks to the inherent smoothness of representing the scene with feature planes, our method produces higher-quality smooth surfaces without employing explicit smoothness loss functions like [70]. Concurrent with our work, the following works also propose radiance-field-based SLAM systems: iDF-SLAM [38] also uses TSDF, but it is substantially slower and less accurate than NICE-SLAM [87]. Orbeez-SLAM [10] operates in real time at the cost of poor 3D reconstruction. Compromising accuracy and quality, MeSLAM [27] introduces a memory-efficient SLAM. MonoNeuralFusion [88] proposes an incremental 3D reconstruction model, assuming that ground-truth camera poses are available. Lastly, NeRF-SLAM [54] presents a monocular SLAM system with hierarchical volumetric Neural Radiance Fields optimized using an uncertainty-based depth loss.
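To illustrate the memory argument behind the feature-plane contribution above, here is a hedged sketch of querying a TSDF value from three axis-aligned feature planes. The plane resolution, feature combination by summation, the scene-bound normalization, and the single scale are assumptions for illustration; ESLAM uses multiple scales, separate appearance planes, and trained shallow decoders.

```python
# Hedged sketch: decode a TSDF value from three axis-aligned feature planes.
import numpy as np

def bilinear(plane, u, v):
    """Bilinearly sample an (R, R, C) feature plane at continuous coords (u, v)."""
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1, v1 = min(u0 + 1, plane.shape[0] - 1), min(v0 + 1, plane.shape[1] - 1)
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * plane[u0, v0] + du * (1 - dv) * plane[u1, v0] +
            (1 - du) * dv * plane[u0, v1] + du * dv * plane[u1, v1])

def query_tsdf(point, plane_xy, plane_xz, plane_yz, decoder, bound=8.0, res=128):
    """Interpolate plane features at a 3D point and decode them to a TSDF value."""
    # Map the point from the assumed scene box [-bound/2, bound/2]^3 to plane pixels.
    x, y, z = np.clip((np.asarray(point) / bound + 0.5) * (res - 1), 0, res - 1)
    feat = (bilinear(plane_xy, x, y) +
            bilinear(plane_xz, x, z) +
            bilinear(plane_yz, y, z))   # storage is O(res^2) per plane, not O(res^3)
    return decoder(feat)                # shallow decoder (e.g. a small MLP) -> scalar TSDF
```

Because each plane stores only res × res feature vectors, doubling the scene side-length quadruples rather than octuples the memory, which is the quadratic-versus-cubic growth mentioned in the contribution list.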
Gan_CNVid-3.5M_Build_Filter_and_Pre-Train_the_Large-Scale_Public_Chinese_Video-Text_CVPR_2023
Abstract Owing to well-designed large-scale video-text datasets, recent years have witnessed tremendous progress in video-text pre-training. However, existing large-scale video-text datasets are mostly English-only. Though certain methods study Chinese video-text pre-training, they pre-train their models on private datasets whose videos and text are unavailable. This lack of large-scale public datasets and benchmarks in Chinese hampers the research and downstream applications of Chinese video-text pre-training. Towards this end, we release and benchmark CNVid-3.5M, a large-scale public cross-modal dataset containing over 3.5M Chinese video-text pairs. We summarize our contributions by three verbs, i.e., "Build", "Filter", and "Pre-train": 1) To build a public Chinese video-text dataset, we collect over 4.5M videos from Chinese websites. 2) To improve the data quality, we propose a novel method to filter out 1M weakly-paired videos, resulting in the CNVid-3.5M dataset. And 3) we benchmark CNVid-3.5M with three mainstream pixel-level pre-training architectures. Finally, we propose the Hard Sample Curriculum Learning strategy to promote the pre-training performance. To the best of our knowledge, CNVid-3.5M is the largest public video-text dataset in Chinese, and we provide the first pixel-level benchmarks for Chinese video-text pre-training. The dataset, codebase, and pre-trained models are available at https://github.com/CNVid/CNVid-3.5M.
1. Introduction

Owing to well-designed large-scale datasets, video-text pre-training [15, 17, 19] has achieved superior performance in various downstream tasks, such as video-text retrieval [4, 10, 36], video question answering [27, 34, 42], and video captioning [1, 22, 30]. However, recent large-scale video-text datasets are mostly English-only (e.g., HowTo100M [25] and WebVid-2.5M [4]). Though some methods [14, 26, 45] turn to study Chinese video-text pre-training, they pre-train their models on private datasets whose videos and text are unavailable. Therefore, research on Chinese video-text pre-training is still in its infancy due to the lack of large-scale public datasets.

*Equal contribution. †Corresponding author.

Figure 1. Motivations of this paper, from which we summarize our contributions with three verbs: "Build", "Filter", and "Pre-train".

To address this problem, directly translating English text into Chinese is a simple solution. However, it may result in unacceptable performance degradation for two reasons: 1) Translation errors are inevitable. Moreover, since most large-scale video-text datasets employ an Automatic Speech Recognition (ASR) system to generate text, the language translator would amplify the errors from the incomplete and noisy ASR text. And 2) there remains an intrinsic linguistic gap between English and Chinese. Many widely-used English idioms and slang can hardly find their Chinese counterparts, leaving some translated text incomprehensible or even contrary to the original meaning.

In this paper, we aim to release and benchmark a large-scale public Chinese video-text dataset to facilitate future researchers and the community. As illustrated in Figure 1, three verbs summarize our contributions, i.e., "Build", "Filter", and "Pre-train".

To build a large-scale Chinese video-text dataset, we collect over 4.5M videos from Chinese websites. All videos are associated with user-uploaded titles and ASR text.

We filter out the weakly-paired data by a novel method to improve the data quality. As some works [25, 26] have pointed out, pre-training performance suffers from noisy ASR text that fails to accurately describe the video content. Unfortunately, this problem has been raised with few practical solutions. Therefore, we employ a well-trained image-text model to evaluate video-text consistency, for three reasons: 1) The text in existing image-text datasets [11] usually consists of manually written titles or captions, whose consistency is guaranteed. 2) Some video-text pre-training architectures [23, 35] are built upon image-text ones. And 3) it is cheap and efficient to "hire" a well-trained model to check millions of videos.
In this way, we filter out about 1M weakly-paired videos, balancing pre-training performance against efficiency, and derive the proposed CNVid-3.5M dataset.

We pre-train various models to benchmark our CNVid-3.5M dataset. Current video-text pre-training methods can be roughly divided into two categories: 1) feature-level pre-training methods [24, 33, 40] that employ offline video and textual feature extractors, and 2) pixel-level ones [4, 15, 36] that learn cross-modal representations end-to-end from raw videos and text. Since there remain domain gaps between pre-training datasets and frozen feature extractors, pixel-level pre-training methods usually achieve better performance and have been widely employed in recent years. However, existing Chinese video-text pre-training methods [14, 26, 45] are all feature-level ones pre-trained on private datasets, limiting their contribution to the development of Chinese video-text pre-training techniques. Hence, we adopt three mainstream pixel-level pre-training frameworks, which constitute the first pixel-level benchmarks for Chinese video-text pre-training.

Moreover, we propose the novel Hard Sample Curriculum Learning strategy to promote pre-training performance. Since contrastive learning is a significant component of video-text pre-training, some methods [16, 18, 43] employ the hard sample mining [12, 29] strategy to promote cross-modal alignment. However, hard sample mining can bring side effects to pre-training when the model is far from convergence. If a model is incapable of discriminating the ground-truth video-text pairs, recklessly introducing hard negatives leads to sub-optimal performance. Inspired by the curriculum learning [32, 37] strategy that "starts small" and gradually "learns hard", we combine these two strategies and propose the novel Hard Sample Curriculum Learning (HSCL). By gradually and smoothly emphasizing hard samples, HSCL effectively improves pre-training performance.

Our contributions are summarized in four folds:
• To fill the blank of large-scale public Chinese video-text datasets, we collect over 4.5M videos associated with titles and ASR text from the web.
• To improve the data quality, we propose a novel method to filter out 1M weakly-paired videos, resulting in the CNVid-3.5M dataset (a toy sketch of this filtering step follows below).
• To promote the pre-training performance, we propose the novel Hard Sample Curriculum Learning strategy for better cross-modal contrastive learning.
• To the best of our knowledge, the constructed CNVid-3.5M is the largest public Chinese video-text dataset. Moreover, we provide the first Chinese pixel-level benchmarks based on CNVid-3.5M. The dataset, codebase, and benchmarks are available at https://github.com/CNVid/CNVid-3.5M.
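As a hedged sketch of the "Filter" step referenced in the second contribution above, the snippet below scores each video-text pair with a pre-trained image-text model, averaged over a few sampled frames, and drops pairs below a threshold. The scoring model, the number of sampled frames, and the threshold value are assumptions; the paper chooses its cut-off by balancing pre-training performance against how much data is removed.

```python
# Hedged sketch of video-text consistency filtering with an image-text model.
import numpy as np

def video_text_consistency(frames, text, image_text_score, n_sample=8):
    """Average image-text similarity over uniformly sampled frames.

    frames:            list of decoded video frames
    text:              the paired title / ASR text
    image_text_score:  callable (frame, text) -> similarity in [0, 1] (assumed interface)
    """
    idx = np.linspace(0, len(frames) - 1, n_sample).astype(int)
    return float(np.mean([image_text_score(frames[i], text) for i in idx]))

def filter_pairs(dataset, image_text_score, threshold=0.25):
    """Keep only (frames, text) pairs whose averaged consistency exceeds the threshold."""
    return [(frames, text) for frames, text in dataset
            if video_text_consistency(frames, text, image_text_score) >= threshold]
```

Lowering the threshold keeps more data but admits noisier ASR pairs; raising it improves pair quality at the cost of scale, which is exactly the trade-off behind settling on roughly 3.5M of the original 4.5M videos.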
Chen_iQuery_Instruments_As_Queries_for_Audio-Visual_Sound_Separation_CVPR_2023
Abstract Current audio-visual separation methods share a standard architecture design where an audio encoder-decoder network is fused with visual encoding features at the encoder bottleneck. This design confounds the learning of multi-modal feature encoding with robust sound decoding for audio separation. To generalize to a new instrument, one must fine-tune the entire visual and audio network for all musical instruments. We re-formulate the visual-sound separation task and propose Instruments as Queries (iQuery) with a flexible query expansion mechanism. Our approach ensures cross-modal consistency and cross-instrument disentanglement. We utilize "visually named" queries to initiate the learning of audio queries and use cross-modal attention to remove potential sound source interference at the estimated waveforms. To generalize to a new instrument or event class, drawing inspiration from the text-prompt design, we insert additional queries as audio prompts while freezing the attention mechanism. Experimental results on three benchmarks demonstrate that our iQuery improves audio-visual sound source separation performance. Code is available at https://github.com/JiabenChen/iQuery.
1. Introduction

Humans use multi-modal perception to understand complex activities. To mimic this skill, researchers have studied audio-visual learning [3, 17, 33] by exploiting the synchronization and correlation between auditory and visual information. In this paper, we focus on the sound source separation task, where we aim to identify and separate different sound components within a given sound mixture [60, 74]. Following the "Mix-and-Separate" framework [32, 34, 81], we learn to separate sounds by mixing multiple audio signals to generate an artificially complex auditory representation and then use it as a self-supervised task to separate individual sounds from the mixture. The works [26, 53, 89] showed that visually-guided sound separation is achievable by leveraging visual information of the sound source.

Prevalent architectures take the paradigm of a visually-conditioned encoder-decoder architecture [23, 26, 58, 88], where encoded features from the audio and visual modalities are fused at the bottleneck for decoding to yield separated spectrogram masks. However, this design often creates a "muddy" sound and "cross-talk" that leaks from one instrument to another. To create a clean sound separation, one would like the audio-visual encoders to be (1) self-consistent within the musical instrument and (2) contrasting across instruments. One approach [27] added critic functions explicitly to enforce these properties. Another method [99] used a two-step process with a second motion-conditioned generation process to filter out unwanted cross-talk. We call these approaches decoder-centric.

Most recent works focus on addressing the "muddy" and "cross-talk" issues by improving the fine details of audio-visual feature extraction: for example, adding human motion encoding as in [23, 88, 99], or cross-modality representations [58] via self-supervised learning. Once the feature representations are learned, the standard encoder-decoder FCN-style segmentation is used as an afterthought. We consider these methods feature-centric. The standard designs have two limitations. First, it is hard to balance decoder-centric and feature-centric approaches that enforce a common goal of cross-modality consistency and cross-instrument contrast. Second, to learn a new musical instrument, one has to retrain the entire network via self-supervision.

To tackle these limitations, we propose a query-based sound separation framework, iQuery. We recast this problem from a query-based transformer segmentation view, where each query learns to segment one instrument, similar to visual segmentation [15, 16, 65, 78]. We treat each audio query as a learnable prototype that parametrically models one sound class. We fuse the visual modality with audio by "visually naming" the audio query: using object detection to assign visual features to the corresponding audio query. Within the transformer decoder, the visually initialized queries interact with the audio features through cross-attention, thus ensuring cross-modality consistency.
Self-attention across the audio queries for different instruments implements a soft version of the cross-instrument contrast objective. With this design, we unify the feature-centric with the decoder-centric approach.

Figure 1. Pipeline of iQuery. Our system takes as input an audio mixture and its corresponding video frames, and disentangles separated sound sources for each video. The pipeline consists of two main modules: an Audio-Visual Feature Extraction module, which extracts audio, object, and motion features through three corresponding encoders, and an Audio-Visual Transformer module for sound separation. The query-based sound separation transformer has three key components: 1) "visually named" audio queries initialized by extracted object features, 2) cross-attention between the audio queries and static image features, dynamic motion features, and audio features, and 3) self-attention between the learned audio queries to ensure cross-instrument contrast.

How do we achieve generalizability? Motivated by recent success in fine-tuning domain transfer with text-prompt [28] and visual-prompt designs [7, 35, 41, 86], we adaptively insert additional queries as audio prompts to accommodate new instruments. With the audio-prompt design, we freeze most of the transformer network parameters and only fine-tune the newly added query embedding layer. We conjecture that the learned prototype queries are instrument-dependent, while the cross/self-attention mechanism in the transformer is instrument-independent.

Our main contributions are:
• To the best of our knowledge, we are the first to study the audio-visual sound separation problem from a tunable query view, disentangling different sound sources explicitly through learnable audio prototypes in a mask transformer architecture.
• To generalize to a new sound class, we design an audio prompt for fine-tuning with most of the transformer architecture frozen.
• Extensive experiments and ablations verify the effectiveness of our core designs for disentanglement, demonstrating performance gains for audio-visual sound source separation on three benchmarks.

2. Related work

Audio-Visual Sound Source Separation. Recent years have witnessed promising results of audio-visual multi-modality joint learning [49, 62, 67, 75, 83] in domains like audio-visual sound source localization [4, 5, 14, 36, 55, 61, 63, 93], audio-visual event localization [68, 76, 77, 95], and sound synthesis from videos [25, 52, 54, 80, 97]. Sound source separation, a challenging classical problem, has been researched extensively in the audio signal processing area [11, 22, 37, 40]. A well-known example is the cocktail party problem [31, 48] in the speech domain [1, 21]. Works have been proposed recently for tasks like speech separation [2, 27, 39, 51, 70], active sound separation [45, 46], and on-screen sound separation [25, 53, 71, 72]. Our work focuses on audio-visual sound separation. Recent audio-visual sound separation methods can generally be classified into two categories, feature-centric and decoder-centric, as discussed in Sec. 1. Feature-centric methods exploit various ways of visual feature extraction and selection to aid this multi-modality task.
Some works consider frame-based appearance features (static frame features [24, 79, 89] or detected object regions [26, 66]) for extracting visual semantic cues (e.g., instrument categories) to guide sound separation. [12, 13] add embeddings from an audio-visual scene graph at the U-Net bottleneck to model the visual context of sound sources. Based on the assessment that motion signals could more tightly couple the moving sounding object with the corresponding variations of sounds, recent approaches focus on including motion information in the pipeline (e.g., optical flow [88] and human pose [23, 58]). Based on this, [94] proposes a framework to search for the optimal fusion strategy for multi-modal features. Decoder-centric methods explore the prevention of "cross-talk" between the audio sources in the decoder stage. [99] designs a two-stage pipeline, where the second stage conducts a counterfactual synthesis through motion features to remove potentially leaked sound. The approach of [27] added critic functions explicitly to enforce cross-modal consistency and cross-instrument contrast.

Figure 2. Qualitative results on the MUSIC test set. The first column shows the mixed video frames, the second to fourth columns compare our predicted spectrogram masks against masks yielded by the state-of-the-art algorithm [66] and the ground-truth masks, and the fifth to seventh columns visualize separated spectrograms. [66] produces blurry masks containing unseparated components from the other sound source, while our system generates accurate masks and clean spectrograms matching the ground truth.

Vision Transformers. Motivated by the transformer's success in natural language processing [73], transformers were first introduced to computer vision for image classification as ViT [20]. Given their superior long-range modeling capacity, many follow-up works [47, 69, 82] have upgraded ViT to achieve higher performance and widely surpassed convolutional neural networks. Further, transformer-based models have been adopted for various downstream tasks, such as 2D object detection [9, 91, 100], semantic/instance segmentation [65, 78, 92], 3D object detection [50, 85], shape recognition [84, 90], and video understanding [6, 42]. In particular, following the pipeline of DETR [9], MaskFormer [16] and Mask2Former [15] represent each mask candidate as a learnable query and conduct parallel decoding for instance-level segmentation. However, only a few approaches [39, 58, 71, 72, 99] have extended transformers to audio-visual sound separation. [58] adopts a BERT [18] architecture to learn visual, pose, and audio feature representations. [99] designs an audio-motion transformer to refine sound separation results through audio-motion feature fusion. These methods focus mainly on learning better contextualized multi-modality representations through an encoder transformer. In contrast, our mask-transformer-based network covers the entire visual-audio separation task. We disentangle different sound sources through independent learnable query prototypes and segment each time-frequency region of the spectrogram via mask prediction in an end-to-end fashion.

3. Method

We first describe the formulation of the audio-visual sound separation task and briefly introduce our pipeline, iQuery, in Sec. 3.1.
Then we introduce networks for learning representations from the visual and audio modalities in Sec. 3.2 and our proposed cross-modality cross-attention transformer architecture for visual sound separation in Sec. 3.3. Finally, we introduce our adaptive query fine-tuning strategy through designs of flexible tunable queries in Sec. 3.4.

3.1. Overview

As mentioned before, our goal is to disentangle the audio mixture with respect to its corresponding sound sources by using so-called queries. Following previous works [21, 89], we adopt the commonly used "Mix-and-Separate" self-supervised source separation procedure. Given $K$ video clips with accompanying audio signals $\{(V_k, s_k(t))\}_{k \in [1, K]}$, we create a sound mixture $s_{\text{mix}}(t) = \sum_{k=1}^{K} s_k(t)$ as training data. Our disentanglement goal is to separate sounds
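A minimal sketch of the "Mix-and-Separate" setup just described: mix the K solo tracks, build time-frequency mask targets from the solo spectrograms, and recover a source by masking the mixture spectrogram. The STFT parameters and the ratio-mask target are common choices assumed here for illustration; in iQuery the masks themselves are predicted by the query-based transformer rather than taken from this ideal target.

```python
# Hedged sketch of Mix-and-Separate training data and mask-based separation.
import numpy as np
from scipy.signal import stft, istft

def make_mixture(solo_tracks):
    """s_mix(t) = sum_k s_k(t); the solo tracks provide free supervision."""
    return np.sum(np.stack(solo_tracks), axis=0)

def mask_targets(solo_tracks, fs=11025, nperseg=1022):
    """Ratio-mask targets: each source's share of the summed magnitudes (assumed target)."""
    specs = [np.abs(stft(s, fs=fs, nperseg=nperseg)[2]) for s in solo_tracks]
    total = np.sum(specs, axis=0) + 1e-8
    return [spec / total for spec in specs]

def separate(mixture, predicted_mask, fs=11025, nperseg=1022):
    """Apply a predicted time-frequency mask to the mixture spectrogram and invert."""
    _, _, Z = stft(mixture, fs=fs, nperseg=nperseg)
    _, s_hat = istft(predicted_mask * Z, fs=fs, nperseg=nperseg)
    return s_hat
```

During training the network sees only the mixture and the video frames, predicts one mask per (visually named) query, and is supervised against targets like the ones above; at test time a single real mixture is separated with the predicted masks.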
Cho_Look_Around_for_Anomalies_Weakly-Supervised_Anomaly_Detection_via_Context-Motion_Relational_CVPR_2023
Abstract Weakly-supervised Video Anomaly Detection is the task of detecting frame-level anomalies using video-level labeled training data. It is difficult to explore class-representative features using the minimal supervision of weak labels with a single backbone branch. Furthermore, in real-world scenarios, the boundary between normal and abnormal is ambiguous and varies depending on the situation. For example, even for the same motion of a running person, the abnormality varies depending on whether the surroundings are a playground or a roadway. Therefore, our aim is to extract discriminative features by widening the relative gap between classes' features from a single branch. In the proposed Class-Activate Feature Learning (CLAV), features are extracted with weights that are implicitly activated depending on the class, and the gap is then enlarged through relative distance learning. Furthermore, as the relationship between context and motion is important for identifying anomalies in complex and diverse scenes, we propose a Context-Motion Interrelation Module (CoMo), which models the relationship between the appearance of the surroundings and motion, rather than utilizing only temporal dependencies or motion information. The proposed method shows SOTA performance on four benchmarks including large-scale real-world datasets, and we demonstrate the importance of relational information by analyzing the qualitative results and generalization ability.
1. Introduction

Video anomaly detection (VAD) in surveillance systems refers to the identification of undefined, unusual, or unseen abnormal events (e.g., traffic accidents, robberies, and other unforeseeable events) from amongst normal situations within temporal intervals. Currently, numerous CCTVs installed in public places such as banks, streets, and buildings record our daily life and play an important role in public safety. However, because it is time-consuming and laborious for humans to pinpoint anomalies in petabytes of surveillance video or to monitor constantly, the VAD task, which provides automatic and instantaneous responses, is a hot topic in the field of deep learning [5, 26].

This work was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2021-0-00172, The development of human Re-identification and masked face recognition based on CCTV camera).

Figure 1. Concept of the proposed method. We extract discriminative features that (a) are activated according to the normal or abnormal class, and (b) enlarge their gaps using relative distance learning. Furthermore, by projecting features into an interaction space, we (c) explore relationships between the context and motion information of the scene. For detecting anomalies, the proposed method considers not only motion but also its relationship with the context. For example, (d) shows a normal video with physical fighting in a basketball game, while (e) shows an abnormal fighting video. The red highlighted ranges are ground-truth abnormal frames, and ours (red line) accurately detects anomalies without false alarms.

Weakly-supervised VAD (WVAD) utilizes minimal knowledge about abnormal events through video-level labeled training data that only states whether an abnormal event exists in each video clip or not. WVAD faces several challenges. First, it is difficult for the network to learn to classify anomalies at the frame level from weakly labeled training data. Therefore, most WVAD methods [13, 20, 31, 35] learn through a Multiple Instance Learning (MIL)-based approach. When normal and abnormal video clips are divided into multiple snippets and each is contained in a negative or positive bag, respectively, there is at least one abnormal snippet in the positive bag. Therefore, the MIL approach assumes that the highest abnormality score in the positive bag derives from the abnormal snippet and forces it to be 1, while the highest score in the negative bag is set to 0. However, given that 1) the boundary between normal and abnormal is ambiguous in the real world, there is a limit to regression learning that forces the predicted scores of snippets to fixed values. Tian et al. [33] and Wu et al. [37] enforced the gap between classes through feature learning by enlarging the feature magnitude and adjusting the distance of the feature to a center feature, respectively. However, 2) it is difficult to extract the discrepancy of features from a single-branch model for enlarging the gap (shown in Fig. 7).
Another challenging issue neglected in previous studies is that, in real-world scenarios with complex and diverse scenes, the definition of 'abnormal event' can differ depending on the context-motion relationship. Zhu et al. [47] extracted appearance-invariant features by utilizing only optical flow data to focus on moving parts, while [24, 33, 42] focused on temporal dependencies to consider multi-scale temporal information. However, 3) focusing only on motion or temporal information, and even excluding appearance information, leads to an incomplete understanding of complex scenes.

In complex scenes, the boundary between normal and abnormal is ambiguous, and the distinction sometimes differs depending on the situation. That is, rather than having a fixed explicit prior for the abnormal class, it is necessary to implicitly learn class-representative features by relatively comparing the classes. Furthermore, abnormal events occurring in the real world vary depending on the relationship between context and motion. For example, in Fig. 1, (d) a physical skirmish during a basketball game is a normal and acceptable event, but (e) a physical fight on the street is an abnormal event. Thus, the same motion has a different class depending on the relationship between the motion and the surroundings or appearance. Therefore, our motivation is to extract class-activated features by considering the relative boundary between classes and to understand the reciprocal relationship between context and motion information.

To overcome the aforementioned challenges, we propose distance learning that adjusts the interval between normal and abnormal through 1) relative feature distance rather than individual values such as magnitude or score. This adjusts the relative distance between the hard-negative normal sample and the abnormal sample based on the intra-class variance of normal samples. In addition, 2) Class-Activate Feature Learning (CLAV) is proposed with an add-on Implicit Class-Activate (ICA) module to implicitly activate representative features from a single branch for each class, with a Class-Specific (CS) loss function as an auxiliary task to explore each normal or abnormal pattern. Furthermore, for the first time in WVAD, we address the importance of the relationship between static and dynamic information and propose 3) a Context-Motion Interrelation Module (CoMo) that has a dynamic path and a context path focusing on motion and appearance, respectively, for modeling the relationship between these two types of information. Each feature is then projected from the temporal space to the interaction space, and correlation propagation is performed by a graph convolution module. As shown in Fig. 1, (a) the CLAV feature enlarges the gap through (b) distance learning and explores relational information through (c) CoMo; it raises no false alarm in (d) the basketball game scene with physical fighting, and shows accurate temporal localization in (e) the abnormal scene with fighting. We evaluate and discuss the effectiveness of the proposed method on four weakly-labeled benchmarks, including the large-scale real-world datasets UCF-Crime [31] and XD-Violence [38], where it shows SOTA results.
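For reference, the generic MIL objective described above (top positive-bag score pushed toward 1, top negative-bag score toward 0) can be written in a few lines; this is the baseline the paper argues against, not the proposed CLAV or relative distance learning, and the batch layout is an assumption.

```python
# Hedged sketch of the standard MIL baseline for weakly-supervised VAD.
import torch
import torch.nn.functional as F

def mil_loss(pos_scores, neg_scores):
    """pos_scores, neg_scores: (B, T) per-snippet anomaly scores in [0, 1].

    Positive bags come from videos labeled abnormal, negative bags from normal videos.
    """
    top_pos = pos_scores.max(dim=1).values   # presumed abnormal snippet
    top_neg = neg_scores.max(dim=1).values   # hardest normal snippet
    return (F.binary_cross_entropy(top_pos, torch.ones_like(top_pos)) +
            F.binary_cross_entropy(top_neg, torch.zeros_like(top_neg)))
```

The paper's criticism is visible in this form: the loss regresses scores to the fixed values 0 and 1 regardless of how ambiguous the scene is, whereas the proposed method instead adjusts relative feature distances between hard normal samples and abnormal samples.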
Chang_Depth_Estimation_From_Indoor_Panoramas_With_Neural_Scene_Representation_CVPR_2023
Abstract Depth estimation from indoor panoramas is challenging due to the equirectangular distortions of panoramas and inaccurate matching. In this paper, we propose a practical framework to improve the accuracy and efficiency of depth estimation from multi-view indoor panoramic images with Neural Radiance Field technology. Specifically, we develop two networks to implicitly learn the Signed Distance Function for depth measurements and the radiance field from panoramas. We also introduce a novel spherical position embedding scheme to achieve high accuracy. For better convergence, we propose an initialization method for the network weights based on the Manhattan World Assumption. Furthermore, we devise a geometric consistency loss, leveraging the surface normal, to further refine the depth estimation. The experimental results demonstrate that our proposed method outperforms state-of-the-art works by a large margin in both quantitative and qualitative evaluations. Our source code is available at https://github.com/WJ-Chang-42/IndoorPanoDepth.
1. Introduction

Panoramic imaging has emerged as an attractive imaging technique in many fields, such as computer vision and robotics. Different from traditional imaging devices, panoramic cameras capture a holistic scene and present it as a 2D image with equirectangular projection. Indoor panoramas, captured in interior scenes by panoramic cameras, have been widely used in interior design and decoration. Recovering depth information aligned with RGB panoramic images benefits a line of downstream applications, such as augmented reality and indoor mapping.

†Corresponding Author

Recent works on depth estimation from panoramas employ Convolutional Neural Network (CNN) structures with prior knowledge learned from depth labels and achieve excellent performance. Most of these works adopt a single panoramic image to predict a relative depth map [7, 23, 29, 31, 37, 39]. These methods require many RGB and depth pairs for training and encounter the problem of domain adaptation in practice. A few works attempt to employ multi-view panoramic images in the depth estimation task [32, 38]. They recover depth information by finding the correspondence between different views. However, these methods require strict vertical or horizontal position relations for the input images.

Panoramas show great distortions when presented as 2D images. Prior works adopt various technologies to overcome this problem, such as processing panoramas [7, 26, 27, 31] with perspective projection and developing special convolution kernels [8, 30, 37]. Recently, the Neural Radiance Field (NeRF) [18], based on volume rendering, has attracted great attention; it aims to synthesize novel views and recover the geometry of a complex scene. It considers image pixels as the rendering results of camera rays cast into the scene and learns geometric information from the correspondence among rays, which eliminates the effects of distortions while processing panoramic images. However, when applied to panoramas, state-of-the-art scene representation methods still require a number of input images and take a long time to converge. It is a compelling research problem to explore how to leverage the omnidirectional information in panoramas to achieve satisfying depth estimation results with fewer images and faster convergence.

To exploit the holistic spatial information in panoramas, we propose a framework that achieves holistic depth estimation with a few panoramic images. Our framework consists of two main networks with a novel positional embedding scheme for learning a better representation from panoramas. The geometry network estimates the Signed Distance Function (SDF) to represent the 3D information of the scene, and the color network reconstructs the color texture. With the assistance of the rendering equation, the expected color of a pixel in an image is rendered with radiance values of the sampled 3D coordinates along camera rays. Both networks are optimized by minimizing the difference between the rendered and observed colors.
Inspired by [2], we propose a method to initialize the parameters of the geometry network based on the assumption that floors and ceilings are always perpendicular to the gravity direction in indoor panoramic images, which provides guidance to properly optimize the geometry network. Experimental results show that the proposed initialization scheme helps the network converge faster and achieve better results. In addition, considering that the geometric information from the depth is supposed to be consistent with the geometry from the surface normal, we devise a geometric consistency loss, which further refines the depth measurements. Moreover, we construct a synthetic dataset that provides RGB-D image pairs from various positions. We evaluate our method on our synthetic dataset and two real-world datasets. The experimental results demonstrate that our method achieves superior performance among state-of-the-art approaches. Even with fewer image views and a short training period, our method works well and outputs promising depth measurements. Our contributions are summarized as follows:
• We propose an unsupervised method for depth estimation from multi-view indoor panoramic images by utilizing a neural network with a specially designed positional embedding scheme to implicitly learn the SDF of the scene represented by panoramas (a generic ray-generation sketch follows below).
• Inspired by the Manhattan World Assumption, we propose an initialization method for the network weights for better convergence.
• We devise a loss term based on geometric consistency, i.e., that the geometric information from depth is supposed to be consistent with the surface normal.
• We release a synthetic panoramic RGB-D dataset rendered from photorealistic indoor scenes. Experimental results on our synthetic dataset and two realistic datasets demonstrate that our proposed method achieves superior performance in both quantitative and qualitative evaluations.
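Since any panoramic volume-rendering pipeline needs camera rays from equirectangular pixels, the sketch below shows the standard equirectangular-to-ray mapping together with a generic NeRF-style frequency encoding. The paper's specific spherical position embedding is not reproduced here, so the encoding is only a stand-in; the pixel-center convention and axis layout are also assumptions.

```python
# Hedged sketch: rays from an equirectangular panorama + a generic frequency encoding.
import numpy as np

def panorama_rays(height, width):
    """Unit ray directions for every pixel of an equirectangular image."""
    v, u = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi      # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi     # latitude in [-pi/2, pi/2]
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    return dirs                                        # (H, W, 3), unit-norm directions

def frequency_encoding(x, n_freqs=6):
    """Standard sin/cos positional encoding of 3D points or directions."""
    out = [x]
    for i in range(n_freqs):
        out += [np.sin((2.0 ** i) * np.pi * x), np.cos((2.0 ** i) * np.pi * x)]
    return np.concatenate(out, axis=-1)
```

Each pixel of a panorama covers the full sphere of directions, which is why a handful of panoramic views already constrain the whole room and why the distortion problem disappears once pixels are treated as rays rather than as a flat image.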
Cai_MARLIN_Masked_Autoencoder_for_Facial_Video_Representation_LearnINg_CVPR_2023
Abstract This paper proposes a self-supervised approach to learn universal facial representations from videos that can transfer across a variety of facial analysis tasks such as Facial Attribute Recognition (FAR), Facial Expression Recognition (FER), DeepFake Detection (DFD), and Lip Synchronization (LS). Our proposed framework, named MARLIN, is a facial video masked autoencoder that learns highly robust and generic facial embeddings from abundantly available non-annotated web-crawled facial videos. As a challenging auxiliary task, MARLIN reconstructs the spatio-temporal details of the face from densely masked facial regions, which mainly include eyes, nose, mouth, lips, and skin, to capture local and global aspects that in turn help in encoding generic and transferable features. Through a variety of experiments on diverse downstream tasks, we demonstrate MARLIN to be an excellent facial video encoder and feature extractor that performs consistently well across a variety of downstream tasks including FAR (1.13% gain over supervised benchmark), FER (2.64% gain over unsupervised benchmark), DFD (1.86% gain over unsupervised benchmark), LS (29.36% gain for Frechet Inception Distance), and even in the low-data regime. Our code and models are available at https://github.com/ControlNet/MARLIN.
1. Introduction

Facial analysis tasks [34, 43, 70, 85] provide essential cues for human non-verbal behavior analysis, and help unfold meaningful insights regarding social interaction [36], communication [40], and cognition [68], with potential applications in Human-Computer Interaction (HCI) and Affective Computing domains. Recently, we have witnessed significant progress in deep neural network models to solve facial analysis tasks such as Facial Attribute Recognition (FAR) [34, 85], Facial Expression Recognition (FER) [48], DeepFake Detection (DFD) [70], and Lip Synchronization (LS) [43]. While these deep models can achieve remarkable performance, they often require large-scale annotated datasets, which is not only a resource-expensive and time-consuming process but also infeasible for some applications requiring domain expertise for annotation (e.g. FER).

[Figure 1. Overview of the proposed Masked Autoencoder for facial Representation LearnINg aka MARLIN. MARLIN aims to learn a universal facial representation from abundantly available non-annotated facial video data, which then adapts to downstream tasks such as Facial Attribute Recognition, Facial Expression Recognition, DeepFake Detection, and Lip Synchronization.]

To this end, self-supervised pre-training [26, 37, 71] has lately emerged as an effective strategy to address the limitations of fully supervised methods, as it enables generic representation learning from non-annotated data, which can then be transferred across tasks having limited labels. For images of natural scenes and objects, self-supervised learning approaches using self-distillation [14], contrastive learning [18, 19], solving pre-text tasks such as jigsaw puzzles [53], and more recently autoencoding [37, 71] have even outperformed supervised learning approaches.

Despite the promise offered by these self-supervised methods in learning scalable and generic representations for natural scene images and videos, they have not yet been investigated for learning representations from facial video data. Facial representation learning requires tracking of fine-grained face-specific details which might not be perfectly captured by linear tube masking [71]. Until now, most of the existing approaches associated with facial analysis tasks are highly specialized and develop task-specific models trained in a fully supervised manner [46, 54, 63], with very few recent efforts towards learning generic image-based facial encoding [10, 84]. These closely related works [10, 84] either focus on exploring training dataset properties in terms of size and quality [10] or perform pre-training in a visual-linguistic way [84]. These works [10, 84] are hard to scale since they use static image-level facial information, and the image-caption pairs are highly associated with context information rather than the face. In this paper, our goal is to learn universal and task-agnostic representations in a self-supervised manner for face-related downstream tasks (see Fig. 1).
For this purpose, we employ a masked autoencoder [37, 71] with a facial-guided masking strategy that learns to reconstruct spatio-temporal details of a face from densely masked facial regions using non-annotated videos. Unlike existing approaches for natural scene videos [71], where the tube masking is initialized with a static part of the video without any semantic information, our approach dynamically tracks the face and then develops a facial part-guided tube masking strategy using an off-the-shelf face parser, i.e., FaceXZoo [75]. Thus, we pose a more challenging task that encourages the model to learn spatio-temporal representations covering local as well as global information. Inspired by prior works [27, 60] showing high-quality reconstruction results along with rich and generic latent features, we incorporate an adversarial loss on top of masked encoding to enhance reconstruction quality. Our experimental results show that our proposed framework, MARLIN, learns highly generic facial encodings that scale and transfer well across diverse facial analysis tasks such as FER, DFD, FAR, and LS, and achieves favorable performance gains w.r.t. state-of-the-art benchmarks. In summary, our main contributions are:

• We propose MARLIN, a universal and task-agnostic facial encoder that learns robust and transferable facial representations from abundantly available non-annotated web-crawled facial videos in a self-supervised fashion.

• As a challenging auxiliary task, we propose to reconstruct the spatio-temporal details of the face from densely masked facial regions. The proposed facial region-guided tube masking (aka Fasking) strategy aims to learn local and global aspects from facial videos which in turn help encode generic and transferable features; a simplified sketch of such a masking scheme is given below.

• Through extensive quantitative and qualitative analysis, we show that MARLIN learns rich, generic, transferable, and robust facial representations that perform consistently well across a variety of downstream tasks including FAR (1.13% gain over supervised benchmark), FER (2.64% gain over unsupervised benchmark), DFD (1.86% gain over unsupervised benchmark), and LS (29.36% gain in Frechet Inception Distance), even in few-shot settings.

Table 1. Facial Analysis Tasks. Overview of different face-related tasks and relevant datasets over the years.

Dataset              | # Samples | Env. | Fmt. | Task           | Year
LFW [39]             | 13,233    | Wild | Img. | Identification | 2008
VGG-FACE [54]        | 2.6M      | Wild | Img. | Identification | 2015
CelebA [50]          | 202,599   | Wild | Img. | Attributes     | 2015
YouTubeFace [78]     | 3,425     | Wild | Vid. | Identification | 2011
LRS2 [22]            | 144,482   | Wild | Vid. | Lip Sync.      | 2017
CelebV [79]          | 5         | Wild | Vid. | Reenact        | 2018
CMU-MOSEI [83]       | 23,453    | Wild | Vid. | Emo, Senti     | 2018
FaceForensics++ [62] | 1,004     | Wild | Vid. | DeepFake       | 2019
VoxCeleb2 [23]       | 150,480   | Wild | Vid. | Speaker        | 2018
CelebV-HQ [85]       | 55,666    | Wild | Vid. | Attribute      | 2022
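The sketch below illustrates one way a facial-region-guided tube mask could be built from per-frame face-parsing maps: patches with more facial coverage are masked preferentially, and the same patch indices are masked in every frame so the mask forms a temporal tube. The sampling heuristic, the 5% probability floor, and the helper name are assumptions for illustration; this is not MARLIN's exact Fasking procedure.

```python
import numpy as np

def facial_tube_mask(parsing_maps, patch=16, mask_ratio=0.9, rng=None):
    """Pick patch indices to mask for a whole clip (a 'tube' over time).

    parsing_maps: (T, H, W) integer face-parsing labels, 0 = background,
                  with H and W assumed divisible by `patch`.
    Returns a boolean (H//patch, W//patch) grid; True = masked in every frame.
    """
    rng = rng or np.random.default_rng()
    T, H, W = parsing_maps.shape
    gh, gw = H // patch, W // patch

    # Fraction of facial (non-background) pixels per patch, averaged over time.
    face = (parsing_maps > 0).astype(np.float32)
    face = face.reshape(T, gh, patch, gw, patch).mean(axis=(0, 2, 4))  # (gh, gw)

    # Sample patches with probability proportional to facial coverage
    # (plus a small floor so background patches can still be masked).
    probs = (face + 0.05).ravel()
    probs /= probs.sum()
    n_mask = int(mask_ratio * gh * gw)
    idx = rng.choice(gh * gw, size=n_mask, replace=False, p=probs)

    mask = np.zeros(gh * gw, dtype=bool)
    mask[idx] = True
    return mask.reshape(gh, gw)   # applied to every frame -> temporal tube
```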
Dong_The_Enemy_of_My_Enemy_Is_My_Friend_Exploring_Inverse_CVPR_2023
Abstract Although current deep learning techniques have yielded superior performance on various computer vision tasks, yet they are still vulnerable to adversarial examples. Adversar-ial training and its variants have been shown to be the most effective approaches to defend against adversarial exam-ples. A particular class of these methods regularize the dif-ference between output probabilities for an adversarial and its corresponding natural example. However, it may have a negative impact if a natural example is misclassified. To circumvent this issue, we propose a novel adversarial train-ing scheme that encourages the model to produce similar output probabilities for an adversarial example and its “in-verse adversarial” counterpart. Particularly, the counter-part is generated by maximizing the likelihood in the neigh-borhood of the natural example. Extensive experiments on various vision datasets and architectures demonstrate that our training method achieves state-of-the-art robustness as well as natural accuracy among robust models. Further-more, using a universal version of inverse adversarial ex-amples, we improve the performance of single-step adver-sarial training techniques at a low computational cost.
1. Introduction

Deep learning has achieved revolutionary progress in numerous computer vision tasks [24, 40, 55] and has emerged as a promising technique for fundamental research in multiple disciplines [31, 35, 52]. However, a well-established line of study has demonstrated that Deep Neural Networks (DNNs) are extremely vulnerable to adversarial examples [42], which are indistinguishable from natural examples in human vision. In other words, a visually undetectable perturbation to the original example can lead to a significant disruption of the inference result of DNNs. The imperceptibility of these tailored examples also makes them easy to bypass manual verification [3, 15], posing a potential security threat to the safety of deep learning-based applications.

[Figure 1. Average accuracy under different attack strengths for two networks trained on natural and adversarial samples, for (a) natural training and (b) adversarial training [25]. Test examples are ranked by cross-entropy loss value in increasing order and divided into two equal halves (Top 50% and Bottom 50%). Negative ϵ denotes the strength of inverse adversarial perturbation. (a) Naturally trained models are extremely susceptible to perturbations. (b) For adversarially trained models, the adversarial effect is exacerbated on examples that are more likely to be misclassified. The green line corresponds to natural examples.]

Various defense methods have been proposed to improve the adversarial robustness of DNNs [21, 46, 48]. As the primary defense method, adversarial training [10, 25, 42] improves intrinsic network robustness by adaptively augmenting the training examples with adversarial examples. State-of-the-art adversarial training methods mainly focus on the distribution alignment between natural and adversarial examples to preserve the consistency of the DNN prediction [7, 44, 53]. However, there still exists an undesirable decrease in the standard accuracy of adversarially trained models due to limited data and restricted model capacity. The misclassification of natural examples can further undermine the distribution alignment during adversarial training.

The natural intuition is that adversarial examples corresponding to misclassified natural examples are more likely to be misclassified. In other words, adversarial examples exhibit higher loss values compared to their corresponding natural examples. Contrary to adversaries that are harmful to DNNs, we introduce inverse adversarial examples, which are created by minimizing the objective function as an inverse procedure of adversary generation. Specifically, inverse adversarial examples are beneficial to DNNs and are more likely to be correctly classified. To support this claim, we study the accuracy of trained classification models on two groups of samples (see Figure 1). We present the accuracy of adversarial examples and their inverse counterparts under different attack strengths. Even a small adversarial perturbation can induce a drastic accuracy decrease for the naturally trained model.
For the ad-versarially trained model, the robust accuracy of examples with higher loss values (Bottom 50%) suffers from a heavier drop than that of examples with lower loss values (Top 50%) under larger attack strengths. This indicates that the adver-sarial counterparts of low-confidence or even misclassified examples are also misclassified. Therefore, the distribution alignment [7, 44, 53] between two misclassified examples might have an unnecessary or even harmful effect on the adversarial robustness establishment. In this paper, to mitigate the unnecessary or even harm-ful matching manner between misclassified examples, we propose a novel adversarial training framework based on an inverse version of adversarial examples, dubbed Inverse Ad-versarial Training (IAT), which implicitly bridges the dis-tribution gap between adversarial examples and the high-likelihood region of their belonging classes. Adversarial ex-amples of a certain category can thus be pulled closer to the high-likelihood region instead of their original examples. Specifically, we propose an inverse procedure of the stan-dard adversary generation to reach the high-likelihood re-gion. The generated inverse adversaries can also be viewed as the rectification of original examples for reducing pre-diction errors. Considering the multi-class decision surface and computational cost, we further design a class-specific inverse adversary generation paradigm as opposed to the instance-wise version. Furthermore, we establish a momen-tum mechanism for the prediction of inverse adversaries to stabilize the training process. A one-off version of our in-verse adversarial training is also proposed for improving time efficiency. Comprehensive experiments demonstrate the superiority of our method in comparison with state-of-the-art adversar-ial training approaches. We also show that our method can be adapted to larger models with extra generated data for robustness enhancement. Besides, the robustness of single-step adversarial training methods can be further improved at a low cost by incorporating our method. 1The formal definition will be given in the following sections.The main contribution of this paper can be summarized as follows: • By analyzing the unnecessary, or even harmful, align-ment between misclassified examples, we propose a novel adversarial training framework based on the in-verse version of adversarial examples, which promotes the aggregation of adversarial examples to the high-likelihood region of their belonging classes. • Based on our Inverse Adversarial Training (IAT) paradigm, we design a class-specific universal inverse adversary generation method to mitigate the individ-ual bias of different examples with high efficiency. We also propose a one-off strategy to reduce compu-tational costs with a negligible performance loss. • Extensive experiments demonstrate the effectiveness and generalizability of our method compared to state-of-the-art adversarial training methods. Our method can also be combined with single-step adversarial training methods as a plug-and-play component for boosting robustness at a low cost. Related works. The lethal vulnerabilities of deep neural networks against adversarial examples have been witnessed in [4, 10, 28, 42]. A myriad of attempts have been made to defend against these tailored examples, including adversar-ial training [17,25,44,53], adversarial detection [14,43], and input transformation-based methods [37, 48, 49]. 
Among them, adversarial training consistently remains the most effective method [2] for improving intrinsic network robustness by augmenting the training data with adversarial examples. In addition, most existing works incorporate a regularization term to narrow the distribution difference between natural examples and their adversarial counterparts [7, 44, 53], which has been demonstrated to be beneficial for robustness enhancement. This matching manner seems natural but might be misguided by misclassified natural examples, as we showed in Figure 1. Several efforts have been devoted to resolving this issue by weighting the losses according to the intensity of the adversarial examples [23, 54]. However, they mainly concentrate on mitigating the imbalance of the disturbance effect among adversarial examples, while our primary focus is to alleviate the harmful alignment between misclassified examples by incorporating inverse adversarial examples.

Inverse adversarial examples were first formally described in [36], where Salman et al. studied them in vision systems to enhance in-distribution performance against new corruptions. In comparison, we investigate the rectification effect of inverse adversarial examples on the distribution alignment during adversarial training for robustness enhancement. A concurrent work [22] also exploits the inverse version of adversarial examples for adversarial robustness by incorporating different distance metrics. In contrast, we build on class-specific universal inverse adversaries for adversarial training with higher efficiency and robustness. Furthermore, we show how our method can be combined with single-step adversarial training techniques to improve both natural performance and robustness.
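To make the notion of an inverse adversarial example concrete, the sketch below inverts the usual PGD attack: it takes signed-gradient steps that decrease the cross-entropy loss inside an L∞ ball around the natural example, moving it toward the high-likelihood region of its class. The step size, radius, and iteration count are illustrative defaults, and the class-specific universal variant and the full IAT training objective are not reproduced here.

```python
import torch
import torch.nn.functional as F

def inverse_pgd(model, x, y, eps=8/255, alpha=2/255, steps=5):
    """Inverse adversarial examples: minimize (rather than maximize) the loss
    within an L-inf ball of radius eps around the natural example x.

    A minimal sketch of the idea described above, not the paper's settings.
    """
    x_inv = x.clone().detach()
    for _ in range(steps):
        x_inv.requires_grad_(True)
        loss = F.cross_entropy(model(x_inv), y)
        grad, = torch.autograd.grad(loss, x_inv)
        with torch.no_grad():
            # Gradient *descent* on the loss (standard PGD uses ascent),
            # followed by projection back into the eps-ball and valid range.
            x_inv = x_inv - alpha * grad.sign()
            x_inv = x + torch.clamp(x_inv - x, -eps, eps)
            x_inv = torch.clamp(x_inv, 0.0, 1.0)
    return x_inv.detach()
```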
Chen_Boundary_Unlearning_Rapid_Forgetting_of_Deep_Networks_via_Shifting_the_CVPR_2023
Abstract

The practical needs of the "right to be forgotten" and poisoned data removal call for efficient machine unlearning techniques, which enable machine learning models to unlearn, or to forget, a fraction of training data and its lineage. Recent studies on machine unlearning for deep neural networks (DNNs) attempt to destroy the influence of the forgetting data by scrubbing the model parameters. However, this is prohibitively expensive due to the large dimension of the parameter space. In this paper, we refocus our attention from the parameter space to the decision space of the DNN model, and propose Boundary Unlearning, a rapid yet effective way to unlearn an entire class from a trained DNN model. The key idea is to shift the decision boundary of the original DNN model to imitate the decision behavior of the model retrained from scratch. We develop two novel boundary shift methods, namely Boundary Shrink and Boundary Expanding, both of which can rapidly achieve the utility and privacy guarantees. We extensively evaluate Boundary Unlearning on the CIFAR-10 and Vggface2 datasets, and the results show that Boundary Unlearning can effectively forget the forgetting class on image classification and face recognition tasks, with an expected speed-up of 17× and 19×, respectively, compared with retraining from scratch.
1. Introduction

Suppose a company trains a face recognition model with your photos and deploys it as an open API. Your photos could be stolen or inferred by attackers via model inversion attacks [6, 18]. With the increasing awareness of protecting users' privacy, many privacy regulations have taken effect to give you control over your personal data. For example, the General Data Protection Regulation (GDPR) established by the European Union gives individuals "the right to be forgotten" and mandates that companies erase personal data once it is requested [35].

Beyond the "right to be forgotten", data forgetting from machine learning (ML) models is also beneficial when certain training data is no longer valid, e.g., when some training data is manipulated by data poisoning attacks [10, 26], becomes outdated over time, or is identified as mistaken after training. These practical needs call for efficient machine unlearning techniques, which enable ML models to unlearn, or to forget, a fraction of training data and its lineage.

In this paper, we focus on unlearning an entire class from deep neural networks (DNNs), which is useful in realistic scenarios like face recognition: unlearning one's data requires forgetting the entire class of one's face images. As the DNN model retrained from scratch is the optimal unlearned model, early studies try to accelerate the retraining process of deep networks [1, 11, 38], but have to intervene in the original training process, which degrades the model utility and increases the training time. A branch of recent research [8, 9, 23, 27] attempts to destroy the influence of the forgetting data by scrubbing the model parameters. For example, the Fisher Information Matrix (FIM) is used to locate the influence of the forgetting data in the parameter space [8, 9]. However, this is prohibitively expensive due to the large dimension of the parameter space.

In order to find an efficient unlearning approach to forget an entire class, we visualize the decision space of the retrained DNN model and make two key observations (cf. Figure 1). First, the forgetting samples spread around the decision space of the retrained DNN model, indicating that the decision boundary of the forgetting samples has been broken. Second, most of the forgetting samples move to the border of other clusters; this recalls the closest-to-boundary criterion [24], i.e., samples at the border of a cluster in the decision space will probably be predicted with large uncertainty.

[Figure 1. Key observations from the decision space of the retrained DNN model. The solid dots in different colors represent the remaining samples belonging to different classes, and the hollow circles in different colors stand for the forgetting samples predicted as the corresponding classes.]

(This work was supported in part by the National Natural Science Foundation of China under Grants 62272183 and 62171189; by the Key R&D Program of Hubei Province under Grant 2021BAA026; and by the special fund for Wuhan Yellow Crane Talents (Excellent Young Scholar). The corresponding author of this paper is Chen Wang.)
It can be observed that (1) the forgetting samples spread around the feature space of the retrained DNN model, and (2) most of the forgetting samples move to the borders of other clusters.

These two observations naturally match the two critical goals of machine unlearning: utility and privacy guarantees. The utility guarantee ensures that the unlearned model generalizes badly on the forgetting data while the prediction performance on the remaining data is maintained. The privacy guarantee means that the unlearned model should not leak any information about the forgetting data. Based on our key observations, the utility guarantee can be achieved by destroying only the boundary of the forgetting class while maintaining the boundaries of the remaining classes, and the privacy guarantee can be accomplished by pushing the forgetting data to the border of other clusters.

In light of the above ideas, we refocus our attention from the parameter space to the decision space of the DNN model (previous unlearning approaches try to destroy the information of the forgetting data by locating the influential parameters directly, while we find that unlearning can be accomplished by manipulating the parameters with the guidance of the decision behaviors of the retrained model), and propose Boundary Unlearning, a rapid yet effective way to unlearn the forgetting class from a trained DNN model. Boundary Unlearning tries to shift the decision boundary of the original DNN model to imitate the decision behavior of the retrained model. To achieve the critical goals, we further introduce two novel boundary shift methods: Boundary Shrink and Boundary Expanding. The former breaks the decision boundary of the forgetting class by splitting the forgetting feature space into other classes, while the latter disperses the activation of the forgetting class by remapping and pruning an extra shadow class assigned to the forgetting data. A simplified sketch of the Boundary Shrink idea is given after the contribution list below.

We summarize our major contributions as follows:

• We propose Boundary Unlearning, the first work to unlearn an entire class from a trained DNN model by shifting the decision boundary. Compared with existing studies, Boundary Unlearning neither costs excessive computational resources nor intervenes in the original training pipeline.

• We propose two novel methods, namely Boundary Shrink and Boundary Expanding, to shift the decision boundary of the forgetting class. Both methods can rapidly achieve the utility and privacy guarantees with only a few epochs of boundary adjusting.

• We conduct extensive experiments to evaluate Boundary Unlearning on image classification and face recognition tasks. The results show that Boundary Unlearning can rapidly and effectively forget the forgetting class, and outperforms four state-of-the-art techniques. The code has been released for reproducibility.
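A minimal sketch of the Boundary Shrink idea referenced above: each forgetting sample is pushed across its decision boundary to find a nearby "neighbor" class, and the model is then fine-tuned to predict that neighbor label, so the forgetting class's decision region is split among the other classes. The single signed-gradient step used to find the neighbor label is an illustrative simplification, not the authors' exact procedure.

```python
import torch
import torch.nn.functional as F

def boundary_shrink_step(model, optimizer, x_forget, y_forget, eps=0.1):
    """One fine-tuning step of a Boundary-Shrink-style update (sketch)."""
    # 1) Find a neighbor label by stepping across the forgetting class's
    #    boundary and re-predicting (in practice one would also exclude
    #    predictions that still equal y_forget).
    model.eval()
    x_adv = x_forget.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y_forget)
    grad, = torch.autograd.grad(loss, x_adv)
    with torch.no_grad():
        neighbor_logits = model(x_adv + eps * grad.sign())
        neighbor_labels = neighbor_logits.argmax(dim=1)

    # 2) Fine-tune so forgetting samples are absorbed by neighboring classes,
    #    shrinking the forgetting class's decision region.
    model.train()
    optimizer.zero_grad()
    F.cross_entropy(model(x_forget), neighbor_labels).backward()
    optimizer.step()
```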
Hui_Bridging_Search_Region_Interaction_With_Template_for_RGB-T_Tracking_CVPR_2023
Abstract RGB-T tracking aims to leverage the mutual enhance-ment and complement ability of RGB and TIR modalities for improving the tracking process in various scenarios, where cross-modal interaction is the key component. Some previ-ous methods concatenate the RGB and TIR search region features directly to perform a coarse interaction process with redundant background noises introduced. Many other methods sample candidate boxes from search frames and conduct various fusion approaches on isolated pairs of RGB and TIR boxes, which limits the cross-modal interaction within local regions and brings about inadequate context modeling. To alleviate these limitations, we propose a novel Template-Bridged Search region Interaction (TBSI) module which exploits templates as the medium to bridge the cross-modal interaction between RGB and TIR search regions by gathering and distributing target-relevant object and envi-ronment contexts. Original templates are also updated with enriched multimodal contexts from the template medium. Our TBSI module is inserted into a ViT backbone for joint feature extraction, search-template matching, and cross-modal interaction. Extensive experiments on three popu-lar RGB-T tracking benchmarks demonstrate our method achieves new state-of-the-art performances. Code is avail-able at https://github.com/RyanHTR/TBSI .
1. Introduction

Given the initial state of a single target object in the first frame, the goal of single object tracking (SOT) is to localize the target object in successive frames. As a fundamental task in the computer vision community, SOT has drawn great attention from researchers. However, current SOT methods built only on visible light (RGB) data become vulnerable under extreme imaging conditions (e.g., low illumination and adverse weather), which motivates the incorporation of thermal infrared (TIR or T) data for mutual enhancement and complement. Benefiting from the strong nocturnal photosensitivity and penetration ability of thermal infrared data, RGB-T tracking enjoys wide potential applications such as video surveillance processing [1], intelligent robotics [5], and autonomous driving [8].

[Figure 1. Comparison between our cross-modal interaction approach and previous ones. (a) Features of RGB and TIR search frames are directly concatenated. (b) Candidate boxes (RoIs) are sampled from RGB and TIR search frames and fused in pairs with gating or attention mechanisms. (c) Our approach exploits template tokens as the medium to bridge the cross-modal interaction between RGB and TIR search region tokens.]

As a multimodal vision task, the key to RGB-T tracking is how to perform effective cross-modal interaction. Since the tracking process occurs in successive frames guided by the annotated initial frame, cross-modal interaction between the search frames of the RGB and TIR modalities becomes the main focus. As illustrated in Figure 1 (a), some previous methods [16, 44] directly concatenate features of the whole RGB and TIR search frames from the encoders of strong base trackers [4, 40]. This simple manner tends to introduce redundant background noise, making cross-modal interaction too coarse and hence harming the model's discriminative ability. In addition, many other methods [14, 27, 28, 37, 39, 49] sample candidate boxes (RoIs) from a Gaussian distribution in the search frames and apply various fusion operators, based on attention, gating mechanisms, or dataset attributes, to fuse each pair of RoI features of the RGB and TIR modalities, as shown in Figure 1 (b). The fused RoI features are then separately fed into a binary classifier to distinguish the target object. However, each pair of RoIs merely crops a small portion of local features from the search frames, containing limited foreground and background information. Thus, cross-modal interaction between each isolated pair of RoIs may lead to inadequate modeling of the global environment context in the search frame and restrict the mutual enhancement and complement effect of the two modalities.

Given the above discussion, we argue that direct cross-modal interaction between RGB and TIR search frames or candidate RoIs still has limitations in comprehensively leveraging complementary multimodal clues to facilitate the tracking process.
Therefore, we propose a novel scheme which exploits the target templates as the medium to bridge the cross-modal interaction between RGB and TIR search regions , as illustrated in Figure 1 (c). The major superior-ity motivating our advocate of this scheme is that the tem-plates contain original multimodal information of the target object, which can serve as strong guidance to extract target-relevant object and environment contexts from search re-gions for adaptive and precise information enhancement and complement. The background noises of other distrac-tors in search regions can also be reduced by template bridg-ing during the cross-modal interaction process. In order to implement the above scheme, we design a Template-Bridged Search region Interaction (TBSI) mod-ule. Concretely, our TBSI module first fuses features of RGB and TIR templates to obtain the multimodal context medium. Since the cross-attention mechanism [36] is an effective and widely-adopted practice for context aggrega-tion, our TBSI also utilizes it with the fused template as query and TIR search region feature as key and value to gather target-relevant TIR context information into the tem-plate medium. Then, the RGB search region feature serves as query and the fused template serves as key and value to distribute target-relevant TIR context from the medium to the RGB search region. Similarly, target-relevant RGB context is also gathered and distributed to the TIR search region through the template medium in a reverse direction.Finally, comprehensive multimodal information aggregated in the fused template is transferred back to the original RGB and TIR templates to update them with the enriched multi-modal contexts gathered from search regions. In addition, most existing RGB-T tracking methods [14, 27,28,37,39,49] employ MDNet [32] with VGG-M [34] as the base tracker, whose number of classification branches equals the number of training sequences, which largely lim-its their capacity and scalability. Inspired by the powerful ability of Vision Transformer (ViT) [12] to capture long-range dependencies and its recent success on SOT [7, 24, 42], we also extend ViT to RGB-T tracking for joint fea-ture extraction, search-template matching, and cross-modal interaction. Our TBSI module is inserted into the ViT base tracker to bridge the intra-modal information flow within the Transformer layers for effective RGB-T tracking. Our contributions are summarized as follows: (1) We propose a novel Template-Bridged Search region Interac-tion (TBSI) module which exploits the fused target tem-plate as the medium to bridge the cross-modal interaction between RGB and TIR search regions and update original templates as well, forming adaptive and precise information enhancement. (2) We extend the ViT architecture with the proposed TBSI module to RGB-T tracking for joint feature extraction, search-template matching, and cross-modal in-teraction, which has not been explored by previous methods to our best knowledge. (3) Extensive experiments demon-strate that our method achieves new state-of-the-art perfor-mances on three popular RGB-T tracking benchmarks.
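The sketch below shows one way the template-bridged gather–distribute step described above could look in code: fused template tokens first gather target-relevant context from one modality's search tokens via cross-attention, and the other modality's search tokens then query the enriched template to receive that context. The layer sizes, the omission of layer norms and MLPs, and the class name are assumptions; this is an illustration of the idea, not the released TBSI module.

```python
import torch
import torch.nn as nn

class TemplateBridge(nn.Module):
    """Sketch of one template-bridged interaction pass in the spirit of TBSI."""

    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.gather = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.distribute = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, template, search_src, search_dst):
        # template:   (B, Nt, C) fused RGB+TIR template tokens (the medium)
        # search_src: (B, Ns, C) search tokens of the source modality (e.g. TIR)
        # search_dst: (B, Ns, C) search tokens of the target modality (e.g. RGB)
        ctx, _ = self.gather(template, search_src, search_src)    # gather
        template = template + ctx
        out, _ = self.distribute(search_dst, template, template)  # distribute
        return search_dst + out, template
```

In the full module the same pass would be run in the reverse direction (RGB context gathered and distributed to the TIR search region), and the enriched medium would be used to update the original RGB and TIR templates.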
Girdhar_ImageBind_One_Embedding_Space_To_Bind_Them_All_CVPR_2023
Abstract

We present IMAGEBIND, an approach to learn a joint embedding across six different modalities: images, text, audio, depth, thermal, and IMU data. We show that all combinations of paired data are not necessary to train such a joint embedding, and only image-paired data is sufficient to bind the modalities together. IMAGEBIND can leverage recent large-scale vision-language models, and extends their zero-shot capabilities to new modalities just by using their natural pairing with images. It enables novel emergent applications 'out-of-the-box', including cross-modal retrieval, composing modalities with arithmetic, and cross-modal detection and generation. The emergent capabilities improve with the strength of the image encoder, and we set a new state of the art on emergent zero-shot recognition tasks across modalities, outperforming specialist supervised models. Finally, we show strong few-shot recognition results outperforming prior work, and that IMAGEBIND serves as a new way to evaluate vision models for visual and non-visual tasks. (∗Equal technical contribution.)

1. Introduction

A single image can bind together many experiences – an image of a beach can remind us of the sound of waves, the texture of the sand, a breeze, or even inspire a poem. This 'binding' property of images offers many sources of supervision to learn visual features, by aligning them with any of the sensory experiences associated with images. Ideally, for a single joint embedding space, visual features should be learned by aligning to all of these sensors. However, this requires acquiring all types and combinations of paired data with the same set of images, which is infeasible.

Recently, many methods learn image features aligned with text [1, 30, 45, 59, 63, 80, 81], audio [3, 4, 49, 54, 55, 68], etc. These methods use a single pair of modalities or, at best, a few visual modalities. However, the final embeddings are limited to the pairs of modalities used for training. Thus, video-audio embeddings cannot directly be used for image-text tasks and vice versa. A major obstacle in learning a true joint embedding is the absence of large quantities of multimodal data where all modalities are present together.

In this paper, we present IMAGEBIND, which learns a single shared representation space by leveraging multiple types of image-paired data. It does not need datasets where all modalities co-occur with each other. Instead, we leverage the binding property of images and show that just aligning each modality's embedding to image embeddings leads to an emergent alignment across all of the modalities. In practice, IMAGEBIND leverages web-scale (image, text) paired data and combines it with naturally occurring paired data such as (video, audio) and (image, depth) to learn a single joint embedding space. This allows IMAGEBIND to implicitly align the text embeddings to other modalities such as audio and depth, enabling zero-shot recognition capabilities on those modalities without explicit semantic or textual pairing. Moreover, we show that it can be initialized with large-scale vision-language models such as CLIP [59], thereby leveraging the rich image and text representations of these models.
Thus, I MAGE BIND can be applied to a variety of different modalities and tasks with little training. We use large-scale image-text paired data along with nat-urally paired ‘self-supervised’ data across four new modal-ities -audio, depth, thermal, and Inertial Measurement Unit (IMU) readings – and show strong emergent zero-shot clas-sification and retrieval performance on tasks for each of these modalities. These emergent properties improve as the underlying image representation is made stronger. On au-dio classification and retrieval benchmarks, I MAGE BIND’s emergent zero-shot classification matches or outperforms specialist models trained with direct audio-text supervision on benchmarks like ESC, Clotho, AudioCaps. I MAGE BIND representations also outperform specialist supervised mod-els on few-shot evaluation benchmarks. Finally, we show that I MAGE BIND’s joint embeddings can be used for a wide variety of compositional tasks as illustrated in Figure 1, in-cluding cross-modal retrieval, combining embeddings via arithmetic, detecting audio sources in images, and generat-ing images given audio input. 2. Related Work IMAGE BIND builds upon several advances in vision-language, multimodal, and self-supervised research. Language Image Pre-training. Training images jointly with linguistic signals like words or sentences has been shown to be an effective method for zero-shot, open-vocabulary recognition and text to image retrieval [13, 17, 37, 66]. Language as supervision can further be used for learning strong video representations [2, 46, 47]. Joulin et al. [33] show that using large-scale image dataset with noisy captions yields strong visual features. Recently, CLIP [59], ALIGN [30] and Florence [81] collect large collections of image and text pairs and train models to embed image and language inputs in a joint space using contrastive learning, exhibiting impressive zero-shot performance. CoCa [80]adds an image captioning objective on top of the contrastive loss for improved performance. Flamingo [1] handles arbi-trarily interleaved images and texts, and achieves state of the art on many few-shot learning benchmarks. LiT [82] adopts contrastive training for fine-tuning and observes freezing image encoders works the best. This prior line of works mostly considers image and text, while our work enables zero-shot recognition on multiple modalities. Multi-Modal Learning. Our work binds multiple modal-ity representations in a joint embedding space. Prior works explored joint training of multiple modalities in a super-vised [20, 41] or self-supervised contexts [3, 19, 49, 68, 72]. The success of image and language pre-training methods such as CLIP has inspired approaches that revisits learn-ing deep semantic representations through matching other modalities with linguistic inputs. Various methods adapt CLIP to extract semantically strong video representations [14, 42, 44, 77]. Most related to our method, Nagrani et al. [50] create a weakly-labeled dataset for paired video-audio and captions that allows for training multi-modal video-audio encoder to match textual features resulting in strong audio and video retrieval and captioning perfor-mance. AudioCLIP [26] adds audio as an additional modal-ity into a CLIP framework, enabling zero-shot audio classi-fication. In contrast, I MAGE BINDdoes not require explicit paired data between all modalities and instead leverages im-age as a natural weak supervision for unifying modalities. 
Feature Alignment. Pre-trained CLIP models have been utilized as teachers to supervise other models due to the strength of their visual representations [43, 57, 73]. Moreover, the CLIP joint image and text embedding space has also been leveraged for a variety of zero-shot tasks like detection [23, 86], segmentation [40], and mesh animation [79], showing the power of joint embedding spaces. PointCLIP [83] finds that a pre-trained CLIP encoder can be used for 3D recognition by projecting a point cloud to a number of 2D depth map views, which in turn are encoded using the CLIP visual encoder. In multilingual neural machine translation, a phenomenon similar to the emergence behavior of IMAGEBIND is commonly observed and utilized: if languages are trained in the same latent space through learned implicit bridging, translation can be done between language pairs on which no paired data is provided [32, 39].

3. Method

Our goal is to learn a single joint embedding space for all modalities by using images to bind them together. We align each modality's embedding to image embeddings, such as text to image using web data and IMU to video using video data captured from egocentric cameras with IMU. We show that the resulting embedding space has a powerful emergent zero-shot behavior that automatically associates pairs of modalities without seeing any training data for that specific pair. We illustrate our approach in Figure 2.

[Figure 2. IMAGEBIND overview. Different modalities occur naturally aligned in different data sources, for instance images+text and video+audio in web data, depth or thermal information with images, and IMU data in videos captured with egocentric cameras. IMAGEBIND links all these modalities in a common embedding space, enabling new emergent alignments and capabilities.]

3.1. Preliminaries

Aligning specific pairs of modalities. Contrastive learning [27] is a general technique for learning an embedding space by using pairs of related examples (positives) and unrelated examples (negatives). Using pairs of aligned observations, contrastive learning can align pairs of modalities such as (image, text) [59], (audio, text) [26], (image, depth) [68], and (video, audio) [49]. However, in each case, the joint embeddings are trained and evaluated using the same pairs of modalities. Thus, (video, audio) embeddings
are not directly applicable for text-based tasks, while (image, text) embeddings cannot be applied for audio tasks.

Zero-shot image classification using text prompts. CLIP [59] popularized a 'zero-shot' classification task based on an aligned (image, text) embedding space. This involves constructing a list of text descriptions that describe the classes in a dataset. An input image is classified based on its similarity to the text descriptions in the embedding space. Unlocking such zero-shot classification for other modalities requires specifically training using paired text data, e.g., (audio, text) [26] or (point-clouds, text) [83]. In contrast, IMAGEBIND unlocks zero-shot classification for modalities without paired text data.

3.2. Binding modalities with images

IMAGEBIND uses pairs of modalities (I, M), where I represents images and M is another modality, to learn a single joint embedding. We use large-scale web datasets with (image, text) pairings that span a wide range of semantic concepts. Additionally, we use the natural, self-supervised pairing of other modalities – audio, depth, thermal, and Inertial Measurement Unit (IMU) – with images.

Consider the pair of modalities (I, M) with aligned observations. Given an image I_i and its corresponding observation in the other modality M_i, we encode them into normalized embeddings: q_i = f(I_i) and k_i = g(M_i), where f, g are deep networks. The embeddings and the encoders are optimized using an InfoNCE [53] loss:

L_{I,M} = -\log \frac{\exp(q_i^\top k_i/\tau)}{\exp(q_i^\top k_i/\tau) + \sum_{j \neq i} \exp(q_i^\top k_j/\tau)},   (1)

where τ is a scalar temperature that controls the smoothness of the softmax distribution and j denotes unrelated observations, also called 'negatives'. We follow [74] and consider every example j ≠ i in the mini-batch to be a negative. The loss makes the embeddings q_i and k_i closer in the joint embedding space, and thus aligns I and M. In practice, we use a symmetric loss L_{I,M} + L_{M,I}.

Emergent alignment of unseen pairs of modalities. IMAGEBIND uses modalities paired with images, i.e., pairs of the form (I, M), to align the embeddings from each modality M to those from images. We observe an emergent behavior in the embedding space that aligns two pairs of modalities (M1, M2) even though we only train using the pairs (I, M1) and (I, M2). This behavior allows us to perform a wide variety of zero-shot and cross-modal retrieval tasks without training for them. We achieve state-of-the-art zero-shot text-audio classification results without observing a single sample of paired (audio, text).

3.3. Implementation Details

IMAGEBIND is conceptually simple and can be implemented in many different ways. We deliberately choose a vanilla implementation that is flexible and allows for an effective study and easy adoption. In § 5, we present design decisions that are critical for good emergent 'binding'.

Encoding modalities. We use a Transformer architecture [71] for all the modality encoders. We use the Vision Transformer (ViT) [12] for images. Following [19], we use the same encoder for images and videos. We temporally inflate [7] the patch projection layer of the ViT and use 2-frame video clips sampled from 2 seconds. We follow [21] for encoding audio and convert 2 seconds of audio sampled at 16kHz into spectrograms using 128 mel-spectrogram bins. As the spectrogram is also a 2D signal like an image, we use a ViT with a patch size of 16 and stride 10. We treat thermal images and depth images as one-channel images and also use a ViT to encode them. We follow [20] to convert depth into disparity maps for scale invariance. We extract the IMU signal consisting of accelerometer and gyroscope measurements across the X, Y, and Z axes. We use 5-second clips resulting in 2K time-step IMU readings, which are projected using a 1D convolution with a kernel size of 8. The resulting sequence is encoded using a Transformer. Finally, we follow the text encoder design from CLIP [59].

We use separate encoders for images, text, audio, thermal images, depth images, and IMU. We add a modality-specific linear projection head on each encoder to obtain a fixed-size d-dimensional embedding, which is normalized and used in the InfoNCE loss from Eq. 1. In addition to ease of learning, this setup allows us to also initialize a subset of the encoders using pretrained models, e.g., the image and text encoder using CLIP [59] or OpenCLIP [29].

Table 1. Emergent zero-shot classification datasets for audio, depth, thermal, and Inertial Measurement Unit (IMU) modalities. We evaluate IMAGEBIND without training for any of these tasks and without training on paired text data for these modalities. For each dataset, we report the task (classification or retrieval), the number of classes (#cls), the metric for evaluation (Accuracy or mean Average Precision), and the number of test samples (#test).

Dataset                         | Task          | #cls | Metric | #test
Audioset Audio-only (AS-A) [18] | Audio cls.    | 527  | mAP    | 19048
ESC 5-folds (ESC) [58]          | Audio cls.    | 50   | Acc    | 400
Clotho (Clotho) [16]            | Retrieval     | -    | Recall | 1045
AudioCaps (AudioCaps) [36]      | Retrieval     | -    | Recall | 796
VGGSound (VGGS) [8]             | Audio cls.    | 309  | Acc    | 14073
SUN Depth-only (SUN-D) [67]     | Scene cls.    | 19   | Acc    | 4660
NYU-v2 Depth-only (NYU-D) [64]  | Scene cls.    | 10   | Acc    | 653
LLVIP (LLVIP) [31]              | Person cls.   | 2    | Acc    | 15809
Ego4D (Ego4D) [22]              | Scenario cls. | 108  | Acc    | 68865

4. Experiments

We first describe the main experimental setup and provide full details in the supplement.

Naturally paired modalities and datasets. We use IMAGEBIND on six modalities: image/video, text, audio, depth, thermal images, and IMU. As described in § 3.3, we treat videos as 2-frame images and process them the same as images. For the naturally available paired data, we use the (video, audio) pairs from the Audioset dataset [18], (image, depth) pairs from the SUN RGB-D dataset [67], (image, thermal) pairs from the LLVIP dataset [31], and (video, IMU) pairs from the Ego4D dataset [22]. For these pairs of modalities, we do not use any extra supervision like class labels, text, etc. Since SUN RGB-D and LLVIP are relatively small, we follow [20] and replicate them 50× for training.

Large scale image-text pairs. We leverage image-text supervision from large-scale web data [59]. For ease of experimentation, we use pretrained models that are trained on billions of (image, text) pairs. Specifically, we use the pretrained vision (ViT-H, 630M params) and text encoders (302M params) from OpenCLIP [29] in our experiments.

Encoders for each modality. We convert audio into 2D mel-spectrograms [21], and thermal and depth modalities into 1-channel images, and use ViT-B and ViT-S encoders respectively. The image and text encoders are kept frozen during the IMAGEBIND training, and the audio, depth, thermal, and IMU encoders are updated.

Emergent zero-shot vs. zero-shot. Methods such as CLIP [59] and AudioCLIP [26] train with modality pairs, (image, text) and (audio, text), to demonstrate zero-shot classification using text prompts for the same modality. In contrast, IMAGEBIND binds modalities together using only image-paired data.
Thus, just by training on (image, text) and (image, audio), IMAGEBIND can perform zero-shot classification of audio using text prompts. As we do not directly train for this ability, we term it emergent zero-shot classification to distinguish it from methods that specifically train using paired text supervision for all modalities.

Evaluation on downstream tasks. We comprehensively evaluate IMAGEBIND on many different downstream tasks using different protocols. We summarize the main datasets used for evaluation in Table 1.

4.1. Emergent zero-shot classification

We evaluate IMAGEBIND on emergent zero-shot classification and use the text prompt templates from [59] (full details in Appendix B). We report the results in Table 2. Each task measures IMAGEBIND's ability to associate text embeddings to the other modalities without observing them together during training. Given the novelty of our problem setting, there are no "fair" baselines to compare IMAGEBIND with. Nevertheless, we compare to prior work that uses text paired with certain modalities (e.g. audio [26, 50]), and for certain "visual-like" modalities such as depth and thermal, we use the CLIP model directly. We also report the best reported supervised upper bound per benchmark.

IMAGEBIND achieves high emergent zero-shot classification performance. On each benchmark, IMAGEBIND achieves strong gains and even compares favorably to supervised specialist models trained for the specific modality and task. These results demonstrate that IMAGEBIND aligns the modalities and implicitly transfers the text supervision associated with images to other modalities like audio. In particular, IMAGEBIND shows strong alignment for non-visual modalities like audio and IMU, suggesting that their naturally available pairing with images is a powerful source of supervision. For completeness, we also report the standard zero-shot image (ImageNet [62] - IN1K, Places-365 [85] - P365) and video (Kinetics400 [34] - K400, MSR-VTT 1k-A [76] - MSR-VTT) tasks. As the image & text encoders are initialized (and frozen) using OpenCLIP, these results match those of OpenCLIP.

              | IN1K      | P365    | K400 | MSR-VTT | NYU-D | SUN-D | AS-A       | VGGS | ESC        | LLVIP | Ego4D
Random        | 0.1       | 0.27    | 0.25 | 0.1     | 10.0  | 5.26  | 0.62       | 0.32 | 2.75       | 50.0  | 0.9
IMAGEBIND     | 77.7      | 45.4    | 50.0 | 36.1    | 54.0  | 35.1  | 17.6       | 27.8 | 66.9       | 63.4  | 25.0
Text Paired   | -         | -       | -    | -       | 41.9∗ | 25.4∗ | 28.4† [26] | -    | 68.6† [26] | -     | -
Absolute SOTA | 91.0 [80] | 60.7 [6
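For reference, the symmetric InfoNCE objective of Eq. (1) above reduces to a pair of cross-entropy losses over the batch similarity matrix. The sketch below is a minimal PyTorch rendering of that loss; the temperature value is an illustrative default rather than the paper's setting.

```python
import torch
import torch.nn.functional as F

def symmetric_infonce(q, k, tau=0.07):
    """Symmetric InfoNCE as in Eq. (1): L_{I,M} + L_{M,I}.

    q: (N, d) normalized image embeddings; k: (N, d) normalized embeddings of
    the paired modality. Every other element of the batch acts as a negative.
    """
    logits = q @ k.t() / tau                      # (N, N) pairwise similarities
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)
```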
Cai_Orthogonal_Annotation_Benefits_Barely-Supervised_Medical_Image_Segmentation_CVPR_2023
Abstract Recent trends in semi-supervised learning have signifi-cantly boosted the performance of 3D semi-supervised med-ical image segmentation. Compared with 2D images, 3D medical volumes involve information from different direc-tions, e.g., transverse, sagittal, and coronal planes, so as to naturally provide complementary views. These com-plementary views and the intrinsic similarity among ad-jacent 3D slices inspire us to develop a novel annotation way and its corresponding semi-supervised model for effec-tive segmentation. Specifically, we firstly propose the or-thogonal annotation by only labeling two orthogonal slices in a labeled volume, which significantly relieves the bur-den of annotation. Then, we perform registration to ob-tain the initial pseudo labels for sparsely labeled volumes. Subsequently, by introducing unlabeled volumes, we pro-pose a dual-network paradigm named Dense-Sparse Co-training (DeSCO) that exploits dense pseudo labels in early stage and sparse labels in later stage and meanwhile forces consistent output of two networks. Experimental results on three benchmark datasets validated our effectiveness in performance and efficiency in annotation. For example, with only 10 annotated slices, our method reaches a Dice up to 86.93% on KiTS19 dataset. Our code and models are available at https://github.com/HengCai-NJU/DeSCO .
1. Introduction

Medical image segmentation is one of the most critical vision tasks in the medical image analysis field. Thanks to the development of deep learning-based methods [8, 11, 28, 32], segmentation performance has now been substantially improved. However, the current promising performance comes at the cost of large-scale, manually and precisely labeled datasets, which are prohibitively expensive and laborious to obtain. What's worse, different radiologists might provide different annotations even for the same image. Therefore, exploring ways to alleviate the requirement on the quantity or quality of manual annotation is in high demand. Mainstream methods typically follow two paradigms: 1) degrade annotation quality, i.e., weakly-supervised segmentation, and 2) reduce annotation quantity, i.e., semi-supervised segmentation.

[Figure 1. The upper part illustrates our annotation method: each annotated volume is labeled with only two orthogonal slices (transverse and coronal planes). The lower part compares the efficiency and effectiveness (Dice (%) vs. number of annotated slices) of our orthogonal annotation with conventional dense annotation and previous sparse annotation that labels slices in one plane, all trained on the LA [42] dataset in a supervised setting. For sparse annotation and our orthogonal annotation, we train the models only on labeled voxels through partial cross-entropy and partial Dice loss.]

Weakly-supervised segmentation methods usually utilize weak annotations, e.g., image-level labels [16, 17], scribbles [20, 21], points [3], or partial slices [5, 18]. Unfortunately, most of them either have difficulty distinguishing fuzzy boundaries or come with a large additional computational burden [15]. What's more, the weakly-supervised setting usually requires coarse annotation for every single image, which is still a heavy burden for radiologists. Besides, most current methods originally developed for 2D segmentation cannot directly utilize 3D volumetric information [9].

Different from these weakly-supervised methods, semi-supervised methods train segmentation models with a small amount of manually labeled data and a large amount of unlabeled data, and have achieved remarkable performance with an impressive reduction in the demand for annotation [6, 19]. Despite their success, we notice that most current semi-supervised segmentation methods still require full 3D annotation for each labeled volume. In fact, segmentation targets in adjacent slices of a 3D volume are highly similar in both appearance and location, making it redundant to label every slice.

(*Corresponding author: Yinghuan Shi. Heng Cai, Shumeng Li, Yinghuan Shi and Yang Gao are with the State Key Laboratory for Novel Software Technology and National Institute of Healthcare Data Science, Nanjing University, China. This work is supported by the NSFC Program (62222604, 62206052, 62192783), CAAI-Huawei MindSpore (CAAIXSJLJJ-2021-042A), China Postdoctoral Science Foundation Project (2021M690609), Jiangsu Natural Science Foundation Project (BK20210224), and the CCF-Lenovo Blue Ocean Research Fund.)
Although the sparse annotation is discussed in recent work [18], we notice these conventional methods still neglect the complementary views between different di-rections in 3D volume. It is known that 3D medical volumes naturally contains different directions ( e.g., transverse, coronal planes) which provide complementary information from different views. And recent trends in semi-supervised learning [7, 40] have revealed that learning from complementary view is indeed beneficial. Thus, we wonder whether a novel annotation method coupled with its corresponding model could be in-vestigated by introducing this complementary relation into 3D semi-supervised medical image segmentation . In this paper, for labeled volume, we innovatively inves-tigate a novel sparse annotation way— orthogonal annota-tion,i.e., only to label two slices in its orthogonal direction (e.g., transverse and coronal direction in Figure 1). We be-lieve our annotation way has two merits: 1) it could largely force the model to learn from complementary views with two diversely initialized labeled slices, 2) it helps greatly re-duce the label costs with fully utilizing the inter-slice simi-larity. Following very recent work [18], we name the setting as Barely-supervised Segmentation. To incorporate our orthogonal annotation, the most in-tuitive thought about training strategy of a segmentation model is that only the voxels on the labeled slices contribute to the training. However, directly learning from this sparse annotation is unstable and the training is apt to collapse (shown in Sec. 4). Thus, we apply registration to spread supervision signals from slice to volume, where the result of label propagation can serve as the dense pseudo label for training. By performing registration, we obtain two sets of pseudo labels for volumes from orthogonal directions. Yet, the obtained pseudo labels are not promising enough to directly train a segmentation model using current exist-ing semi-supervised methods, which is mainly due to the accumulation of error in the registration process. Therefore, to leverage 1) the volumes with inaccurate pseudo labels and 2) the rest unlabeled volumes, we propose a simple yet effective end-to-end framework namely Dense-Sparse Co-training (DeSCO), which consists two segmen-tation models of a same structure. At the beginning of training, the models mainly learn from dense pseudo labels with a learning preference on voxels with more confident pseudo labels, i.e., voxels near to registration source slice, and exploit unlabeled volumes through cross-supervision. After the models have been improved through training, wegradually get rid of pseudo label until the supervised loss solely comes from sparse annotation. Meanwhile, the role of cross-supervision is gradually emphasized correspond-ingly. Because in the process of reaching consensus through cross-supervision, the mistake introduced by previous train-ing on inaccurate pseudo labels could be revised. Overall, our contributions are three folds: • A new annotation way that only labels two orthogonal slices for a labeled 3D volume, which greatly reduces the annotation burden. • A novel barely-supervised 3D medical image segmen-tation framework to steadily utilize our high-efficient sparse annotation with coupled segmentation method. • A dense-sparse co-training paradigm to learn from dense pseudo label and sparse label while leveraging unlabeled volumes to reduce noise by reaching con-sensus through cross-supervision. 
Extensive experiments on three public datasets validate that our barely-supervised method is close to or even better than its upper bound, i.e., semi-supervised methods with fully annotated labeled volumes. For example, on KiTS19, compared to Mean Teacher [36], which uses 320 labeled slices and reaches a Dice of 84.98%, we use only 10 labeled slices yet obtain a Dice of 86.93%.
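To make the supervision scheme mentioned in Figure 1 concrete, below is a minimal PyTorch-style sketch of a partial cross-entropy loss that back-propagates only through voxels lying on the two annotated orthogonal slices. This is our own illustrative code, not the authors' implementation; the tensor shapes, slice indices, and function names are assumptions.

```python
# Minimal sketch (our own, not the authors' code): a partial cross-entropy
# loss that supervises only voxels on the labeled orthogonal slices.
import torch
import torch.nn.functional as F

def partial_cross_entropy(logits, labels, labeled_mask):
    """logits: (B, C, D, H, W); labels: (B, D, H, W) int64;
    labeled_mask: (B, D, H, W) bool, True only on the two annotated
    orthogonal slices (e.g., one transverse and one coronal slice)."""
    loss_map = F.cross_entropy(logits, labels, reduction="none")  # (B, D, H, W)
    masked = loss_map * labeled_mask.float()
    # Normalize by the number of supervised voxels to keep the loss scale stable.
    return masked.sum() / labeled_mask.float().sum().clamp(min=1.0)

# Hypothetical usage: a (1, 2, 64, 128, 128) volume with one transverse and
# one coronal slice annotated.
logits = torch.randn(1, 2, 64, 128, 128, requires_grad=True)
labels = torch.zeros(1, 64, 128, 128, dtype=torch.long)
mask = torch.zeros(1, 64, 128, 128, dtype=torch.bool)
mask[:, 30, :, :] = True   # labeled transverse slice (index assumed)
mask[:, :, 40, :] = True   # labeled coronal slice (index assumed)
loss = partial_cross_entropy(logits, labels, mask)
loss.backward()
```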
Guo_Knowledge_Distillation_for_6D_Pose_Estimation_by_Aligning_Distributions_of_CVPR_2023
Abstract Knowledge distillation facilitates the training of a com-pact student network by using a deep teacher one. While this has achieved great success in many tasks, it remains completely unstudied for image-based 6D object pose esti-mation. In this work, we introduce the first knowledge dis-tillation method driven by the 6D pose estimation task. To this end, we observe that most modern 6D pose estimation frameworks output local predictions, such as sparse 2D key-points or dense representations, and that the compact stu-dent network typically struggles to predict such local quan-tities precisely. Therefore, instead of imposing prediction-to-prediction supervision from the teacher to the student, we propose to distill the teacher’s distribution of local pre-dictions into the student network, facilitating its training. Our experiments on several benchmarks show that our dis-tillation method yields state-of-the-art results with different compact student models and for both keypoint-based and dense prediction-based architectures.
1. Introduction
Estimating the 3D position and 3D orientation, a.k.a. 6D pose, of an object relative to the camera from a single 2D image has a longstanding history in computer vision, with many real-world applications, such as robotics, autonomous navigation, and virtual and augmented reality. Modern methods that tackle this task [7,20,21,25,28,33,40,45,47] all rely on deep neural networks. The vast majority of them draw their inspiration from the traditional approach, which consists of establishing correspondences between the object's 3D model and the input image and computing the 6D pose from these correspondences using a Perspective-n-Point (PnP) algorithm [2, 23, 27, 42] or a learnable PnP network. Their main differences then lie in the way they extract correspondences. While some methods predict the 2D image locations of sparse 3D object keypoints, such as the 8 3D bounding box corners [19–21] or points on the object surface [33], others produce dense representations, such as 3D locations [7,45] or binary codes [40], from which the pose can be obtained.
Figure 1. Student vs teacher keypoint predictions. The large backbone of the teacher allows it to produce accurate keypoints, indicated by tight clusters. By contrast, because of its more compact backbone, the student struggles to predict accurate keypoints when trained with keypoint-to-keypoint supervision. We therefore propose to align the student's and teacher's keypoint distributions.
In any event, these methods rely on large models, which, while achieving impressive accuracy, are impractical to deploy on embedded platforms and edge devices. As, to the best of our knowledge, no compact and efficient 6D pose estimation models have yet been proposed, a simple way to reduce the size of these networks consists of replacing their large backbones with much smaller ones. Unfortunately, this typically comes with a significant accuracy drop. In this paper, we address this by introducing a knowledge distillation strategy for 6D pose estimation networks.
Knowledge distillation aims to transfer information from a deep teacher network to a compact student one. The research on this topic has tackled diverse tasks, such as image classification [17, 37, 48], object detection [10, 11, 49] and semantic segmentation [14, 30]. While some techniques, such as feature distillation [15, 37, 48, 49], can in principle generalize to other tasks, no prior work has studied knowledge distillation in the context of 6D pose estimation.
In this paper, we introduce a knowledge distillation method for 6D pose estimation motivated by the following observations. In essence, whether outputting sparse 2D locations or dense representations, the methods discussed above all produce multiple local predictions. We then argue that the main difference between the local predictions made by a deep teacher network and a compact student one consists in the accuracy of these individual predictions. Figure 1 showcases this for sparse keypoint predictions, evidencing that predicting accurate keypoint locations with keypoint-to-keypoint supervision is much harder for the student than for the teacher.
We therefore argue that knowledge distillation for 6D pose estimation should be performed not by matching the individual local predictions of the stu-dent and teacher but instead by encouraging the student and teacher distributions of local predictions to become similar. This leaves more flexibility to the student and thus facili-tates its training. To achieve this, we follow an Optimal Transport (OT) formalism [44], which lets us measure the distance between the two sets of local predictions. We express this as a loss function that can be minimized using a weight-based variant of Sinkhorn’s algorithm [6], which further allows us to ex-ploit predicted object segmentation scores in the distillation process. Our strategy is invariant to the order and the num-ber of local predictions, making it applicable to unbalanced teacher and student predictions that are not in one-to-one correspondence. We validate the effectiveness of our approach by conducting extensive experiments on the popular LINEMOD [16], Occluded-LINEMOD [3] and YCB-V [47] datasets with the SOTA keypoint-based approach WDRNet+. Our prediction distribution alignment strategy consistently outperforms both a prediction-to-prediction distillation baseline and the state-of-the-art feature distil-lation method [49] using diverse lightweight backbones and architecture variations. Interestingly, our approach is orthogonal to feature distillation, and we show that com-bining it with the state-of-the-art approach of [49] further boosts the performance of student network. To show the generality of our approach beyond keypoint prediction, we then apply it to the SOTA dense prediction-based method, ZebraPose [40], to align the distributions of dense binary code probabilities. Our experiments evidence that this outperforms training a compact ZebraPose in a standard prediction-to-prediction knowledge distillation fashion. Our main contributions can be summarized as follows. (i) We investigate for the first time knowledge distillation in the context of 6D pose estimation. (ii) We introduce an approach that aligns the teacher and student distribu-tions of local predictions together with their predicted ob-ject segmentation scores. (iii) Our method generalizes to both sparse keypoints and dense predictions 6D pose esti-mation frameworks. (iv) Our approach can be used in con-junction with feature distillation to further boost the stu-dent’s performance. Our code is available at https:// github.com/GUOShuxuan/kd-6d-pose-adlp .
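As a reading aid for the Optimal Transport formulation above, here is a generic entropy-regularized Sinkhorn sketch between two weighted sets of 2D keypoint predictions, where the weights play the role of predicted segmentation scores. This is a standard Sinkhorn loop under our own assumptions (regularization strength, iteration count, cost choice), not the paper's exact weight-based variant.

```python
# Generic sketch (our assumptions, not the paper's exact formulation):
# entropic OT distance between weighted student and teacher keypoint sets.
import torch

def sinkhorn_loss(student_pts, teacher_pts, w_s, w_t, eps=0.05, iters=50):
    """student_pts: (N, 2), teacher_pts: (M, 2); w_s: (N,), w_t: (M,)
    non-negative weights (e.g., predicted segmentation scores)."""
    w_s = w_s / w_s.sum()
    w_t = w_t / w_t.sum()
    cost = torch.cdist(student_pts, teacher_pts, p=2) ** 2      # (N, M) squared distances
    K = torch.exp(-cost / eps)                                  # Gibbs kernel
    u = torch.ones_like(w_s)
    v = torch.ones_like(w_t)
    for _ in range(iters):                                      # Sinkhorn iterations
        u = w_s / (K @ v + 1e-8)
        v = w_t / (K.t() @ u + 1e-8)
    transport = torch.diag(u) @ K @ torch.diag(v)               # approximate coupling
    return (transport * cost).sum()                             # OT cost used as the loss

# Hypothetical usage: 8 student and 8 teacher keypoints with uniform weights.
loss = sinkhorn_loss(torch.randn(8, 2, requires_grad=True), torch.randn(8, 2),
                     torch.ones(8), torch.ones(8))
loss.backward()
```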
Cao_Three_Guidelines_You_Should_Know_for_Universally_Slimmable_Self-Supervised_Learning_CVPR_2023
Abstract We propose universally slimmable self-supervised learn-ing (dubbed as US3L) to achieve better accuracy-efficiency trade-offs for deploying self-supervised models across dif-ferent devices. We observe that direct adaptation of self-supervised learning (SSL) to universally slimmable networks misbehaves as the training process frequently collapses. We then discover that temporal consistent guidance is the key to the success of SSL for universally slimmable networks, and we propose three guidelines for the loss design to ensure this temporal consistency from a unified gradient perspec-tive. Moreover, we propose dynamic sampling and group regularization strategies to simultaneously improve training efficiency and accuracy. Our US3L method has been empiri-cally validated on both convolutional neural networks and vision transformers. With only once training and one copy of weights, our method outperforms various state-of-the-art methods (individually trained or not) on benchmarks includ-ing recognition, object detection and instance segmentation.
1. Introduction
Deep supervised learning has achieved great success in the last decade, but the drawback is that it relies heavily on a large set of annotated training data. Self-supervised learning (SSL) has gained popularity because of its ability to avoid the cost of annotating large-scale datasets. Since the emergence of contrastive learning [7], SSL has clearly gained momentum and several recent works [8, 14] have achieved comparable or even better performance than the supervised pretraining when transferring to downstream tasks. However, it remains challenging to deploy trained models for edge computing purposes, due to the limited memory, computation and storage capabilities of such devices.
*Corresponding author.
Table 1. Comparisons between supervised classification and SimSiam under S-Net on CIFAR-100. The accuracy for SimSiam is under linear evaluation. '-' denotes the model collapses.

Type         Method               Accuracy (%)
                                  1.0x   0.75x  0.5x   0.25x
Supervised   Individual           73.8   72.8   71.4   67.3
             S-Net [32]           71.9   71.7   70.8   66.2
             S-Net+Distill [31]   73.1   71.9   70.5   67.2
SimSiam [9]  Individual           65.2   64.0   60.6   51.2
             S-Net [32]           -      -      -      -
             S-Net+Distill [31]   46.9   46.9   46.7   45.3
             Ours                 65.5   65.3   63.2   59.7

To facilitate deployment, several model compression techniques have been proposed, including lightweight architecture design [29], knowledge distillation [20], network pruning [15], and quantization [33]. Among them, structured network pruning [25] is directly supported and accelerated by most current hardware and therefore the most studied. However, most structured pruning methods require fine-tuning to obtain a sub-network with a specific sparsity, and a single trained model cannot achieve instant and adaptive accuracy-efficiency trade-offs across different devices. To address this problem in the context of supervised learning, the family of slimmable networks (S-Net) and universally slimmable networks (US-Net) [2, 22, 31, 32] were proposed, which can switch freely among different widths by training only once.
Driven by the success of slimmable networks, a question arises: Can we train a self-supervised model that can run at arbitrary width? A naïve solution is to replace the supervised loss with self-supervised loss based on the US-Net framework. However, we find that this solution doesn't work directly after empirical studies. Table 1 shows that the phenomenon in self-supervised scenarios is very different. The model directly collapses after applying the popular SSL method SimSiam [9] to slimmable networks [32]. Although using inplace distillation [31] for sub-networks prevents the model from collapsing, there is still a big gap between the results of S-Net+Distill and training each model individually for SimSiam. So why is the situation so different in SSL and how to further improve the performance (i.e., close the gap)? In this paper, we present a unified perspective to explain the differences and propose corresponding measures to bridge the gap. From a unified gradient perspective, we find that the key is that the guidance to sub-networks should be consistent between iterations, and we analyze which components of SSL incur the temporal inconsistency problem and why US-Net works in supervised learning.
Based on these theoretical analyses, we propose three guidelines for the loss design of US-Net training to ensure temporal consis-tency. As long as one of them is satisfied, US-Net can work well, no matter in supervised or self-supervised scenarios. Moreover, considering the characteristics of SSL and the deficiencies of US-Net, we propose dynamic sampling and group regularization to reduce the training overhead while improving accuracy. Our main contributions are: •We discover significant differences between supervised and self-supervised learning when training US-Net. Based on these observations, we analyze and summarize three guidelines for the loss design of US-Net to ensure temporal consistency from a unified gradient perspective. •We propose a dynamic sampling strategy to reduce the train-ing cost without sacrificing accuracy, which eases coping with the large data volumes in SSL. •We analyze how the training scheme of US-Net limits the model capacity and propose group regularization as a solu-tion by giving different freedoms to different channels. •We validate the effectiveness of our method on both CNNs and Vision Transformers (ViTs). Our method requires only once training and a single model, which can exceed the re-sults of training each model individually, and is comparable to knowledge distillation from pretrained teachers.
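The notion of temporally consistent guidance for sub-networks can be illustrated with a toy example. The sketch below is our own simplification, not the paper's algorithm: a slimmable linear layer runs at any width by slicing its weight, every sampled width is distilled toward a full-width target produced by a slowly moving EMA copy of the model, and the EMA update is what keeps that guidance consistent across iterations. The layer definition, widths, and EMA rate are all illustrative assumptions.

```python
# Illustrative sketch (our own simplification, not the paper's exact method):
# sub-widths of a slimmable layer are guided by a temporally consistent
# (EMA) full-width target.
import copy
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlimmableLinear(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.02)

    def forward(self, x, width=1.0):
        k = max(1, int(self.weight.shape[0] * width))  # keep the first k output channels
        return F.linear(x, self.weight[:k])

model = SlimmableLinear(128, 64)
ema_model = copy.deepcopy(model)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 128)
with torch.no_grad():
    target = ema_model(x, width=1.0)                   # consistent full-width guidance
loss = 0.0
for w in [1.0, 0.25] + [random.uniform(0.25, 1.0) for _ in range(2)]:
    out = model(x, width=w)
    k = out.shape[1]
    loss = loss + F.mse_loss(out, target[:, :k])       # distill each width toward the target
opt.zero_grad(); loss.backward(); opt.step()
with torch.no_grad():                                  # slow EMA update keeps guidance stable
    for p, q in zip(ema_model.parameters(), model.parameters()):
        p.mul_(0.996).add_(q, alpha=0.004)
```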
Fan_PointListNet_Deep_Learning_on_3D_Point_Lists_CVPR_2023
Abstract Deep neural networks on regular 1D lists ( e.g., natural languages) and irregular 3D sets ( e.g., point clouds) have made tremendous achievements. The key to natural lan-guage processing is to model words and their regular or-der dependency in texts. For point cloud understanding, the challenge is to understand the geometry via irregular point coordinates, in which point-feeding orders do not matter. However, there are a few kinds of data that exhibit both reg-ular 1D list and irregular 3D set structures, such as proteins and non-coding RNAs. In this paper, we refer to them as 3D point lists and propose a Transformer-style PointListNet to model them. First, PointListNet employs non-parametric distance-based attention because we find sometimes it is the distance, instead of the feature or type, that mainly deter-mines how much two points, e.g., amino acids, are corre-lated in the micro world. Second, different from the vanilla Transformer that directly performs a simple linear transfor-mation on inputs to generate values and does not explicitly model relative relations, our PointListNet integrates the 1D order and 3D Euclidean displacements into values. We con-duct experiments on protein fold classification and enzyme reaction classification. Experimental results show the effec-tiveness of the proposed PointListNet.
1. Introduction
The essence of deep learning is to capture the structure of a certain kind of data via artificial neural networks. Usually, an element of data includes a position part and a feature part. According to the type of element position, data exhibit different structures. Various deep neural networks have been proposed to model those structures and have made tremendous achievements. For example, texts are 1D lists of words, as shown in Fig. 1(a). The position of a word is its order in the text and the feature is the word itself. To capture the structure of texts or the dependency of words, 1D convolutional neural networks (CNNs) [3, 30, 58], recurrent neural networks (RNNs) [9, 26, 39] and Transformers [13, 49] are widely used. A digital image can be seen as a 2D rectangular grid or matrix of pixels, as shown in Fig. 1(b). Each pixel has a 2D position and is associated with a feature of color or other attributes. In this case, 2D CNNs are usually used to model image structure [23, 33, 46]. Recently, Transformers have also been employed for image understanding [15].
Recently, 3D point cloud/set processing has been attracting more and more attention from the deep learning community. Different from texts or images, in which the orders of words or the positions of pixels are regular (words or pixels are distributed uniformly in texts or images), the 3D coordinates of points are irregular (points are distributed unevenly in 3D Euclidean space), as shown in Fig. 1(c). To capture the irregular structure of point clouds, deep neural networks, such as multilayer perceptrons (MLPs) [42, 43, 45], convolutions [48, 56] and Transformers [22, 62], need to not only effectively exploit 3D coordinates for geometry understanding but also be invariant to permutations of the input set in point-feeding order.
Besides regular 1D lists of words, 2D grids of pixels and irregular 3D point sets, data may exhibit hybrid structures. For example, proteins are made up of amino acids. As shown in Fig. 1(d), those amino acids are linked by peptide bonds and form a chain. Therefore, proteins include a 1D list data structure. Because amino acids are arranged uniformly in the chains, the list structure is regular. In addition to the 1D sequential order in the peptide chain, each amino acid has a 3D coordinate, which specifies its spatial position in the protein. Those 3D coordinates describe a geometry structure. Similar to point clouds, the geometry structure of proteins exhibits irregularity. Therefore, the data structure of proteins involves a regular 1D list and an irregular 3D set. In this paper, we refer to this data structure as a 3D point list. Point lists also exist in other polymers, such as non-coding RNAs. Because the function of proteins or non-coding RNAs is based on their structures, modeling 3D point lists can facilitate a mechanistic understanding of their function in life.
In this paper, we propose a Transformer-style network, named PointListNet, to capture the structure of 3D point lists.
First, different from the vanilla Transformer [15, 49], which calculates self-attention by performing compu-tationally expensive matrix multiplication on inputs, our PointListNet employs a simple non-parametric distance-based attention mechanism because we find sometimes it is mainly the distance, instead of the feature or type, that determines how much two elements, e.g., amino acids, are correlated in the micro world. Second, because structures are relative, which is independent of the absolute sequential order or the absolute Euclidean coordinate, our PointList-Net integrates the 1D order and 3D Euclidean displace-ments into values. This is substantially different from the vanilla Transformer that directly performs a simple linear transformation on absolute positional embeddings and input features to generate values, which does not explicitly model relative distance or direction. To evaluate PointListNet, we conduct experiments on protein fold classification and en-zyme reaction classification and achieve new state-of-the-art accuracy. The contributions of this paper are fivefold: • Among the early efforts, we investigate a range ofpoint cloud methods for protein modeling. • We propose a Transformer-style network, i.e., PointListNet, for 3D point list modeling. • We replace self-attention with non-parametric distance-based attention, which is more efficient and effective to achieve the correlation among microparticles in some cases. • We integrate relative structure modeling into Trans-former and employ regular and irregular methods to capture the sequence and geometry structures, respec-tively. • We conduct extensive experiments on two protein tasks and the proposed method significantly outper-forms existing methods.
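The two ideas above, attention weights computed purely from pairwise 3D distances and values that carry relative sequence offsets and Euclidean displacements, can be sketched as follows. This is our schematic reading of the description, not the released PointListNet code; the Gaussian kernel width, the projection layer, and the tensor layout are assumptions.

```python
# Schematic sketch (our assumptions, not the authors' implementation):
# non-parametric distance-based attention over a 3D point list, with values
# built from relative 1D order and 3D displacements.
import torch
import torch.nn as nn

class DistanceAttentionLayer(nn.Module):
    def __init__(self, feat_dim, sigma=4.0):
        super().__init__()
        self.sigma = sigma
        # project [feature, relative order, 3D displacement] into a value vector
        self.value_proj = nn.Linear(feat_dim + 1 + 3, feat_dim)

    def forward(self, feats, coords):
        """feats: (N, C) per-residue features; coords: (N, 3) 3D positions."""
        n = feats.shape[0]
        dist = torch.cdist(coords, coords)                            # (N, N) pairwise distances
        attn = torch.softmax(-dist**2 / (2 * self.sigma**2), dim=-1)  # non-parametric weights
        order = torch.arange(n, dtype=feats.dtype, device=feats.device)
        rel_order = (order[None, :] - order[:, None]).unsqueeze(-1)   # (N, N, 1) sequence offsets
        rel_disp = coords[None, :, :] - coords[:, None, :]            # (N, N, 3) displacements
        rel_feats = feats[None, :, :].expand(n, n, -1)
        values = self.value_proj(torch.cat([rel_feats, rel_order, rel_disp], dim=-1))
        return (attn.unsqueeze(-1) * values).sum(dim=1)               # (N, C) aggregated features

# Hypothetical usage on a 300-residue chain with 64-d amino-acid features.
layer = DistanceAttentionLayer(feat_dim=64)
out = layer(torch.randn(300, 64), torch.randn(300, 3))
```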
Choi_Balanced_Energy_Regularization_Loss_for_Out-of-Distribution_Detection_CVPR_2023
Abstract In the field of out-of-distribution (OOD) detection, a pre-vious method that use auxiliary data as OOD data has shown promising performance. However, the method pro-vides an equal loss to all auxiliary data to differentiate them from inliers. However, based on our observation, in various tasks, there is a general imbalance in the distribution of the auxiliary OOD data across classes. We propose a balanced energy regularization loss that is simple but generally ef-fective for a variety of tasks. Our balanced energy regular-ization loss utilizes class-wise different prior probabilities for auxiliary data to address the class imbalance in OOD data. The main concept is to regularize auxiliary samples from majority classes, more heavily than those from minor-ity classes. Our approach performs better for OOD detec-tion in semantic segmentation, long-tailed image classifica-tion, and image classification than the prior energy regular-ization loss. Furthermore, our approach achieves state-of-the-art performance in two tasks: OOD detection in seman-tic segmentation and long-tailed image classification.
1. Introduction
Deep neural networks are used in a variety of fields such as image classification [22] and semantic segmentation [11]. However, there is a challenge in the practical use of deep neural networks in areas where safety is crucial, such as autonomous driving and medical diagnosis [20,25]. In particular, deep neural networks have the issue of providing high confidence to out-of-distribution (OOD) samples that are not used for training [15]. As a result, the Maximum softmax probability (MSP) score has been proposed to identify these OOD samples [17]. Based on the score, OOD detection performance is evaluated by metrics (e.g., AUROC, FPR). Both in image classification [18,24,26,29,30,38,40,43,46] (including long-tailed image classification [43]) and semantic segmentation [1–3, 5, 10, 12, 16, 19, 28, 33, 36, 41], different approaches have been suggested to enhance OOD detection performance. Among them, we concentrate on the methods using auxiliary data as OOD data, which show superior OOD detection performance to the previous methods that only use in-distribution samples.
*Work done as an intern at RideFlux.
Outlier Exposure (OE) utilizes an auxiliary dataset of outliers to improve OOD detection performance [18]. The auxiliary data consists of classes that do not overlap with the in-distribution data and the test OOD data. OE leverages the cross-entropy loss for the existing training data and the regularization loss for the auxiliary data. The cross-entropy loss that results from giving the auxiliary data a uniform label is the regularization loss of OE. Meanwhile, a new energy score has been introduced in Energy-based OOD detection (EnergyOE), which replaces the MSP score [29]. Furthermore, EnergyOE suggests an energy regularization loss that differs from that of OE to enhance performance. The squared hinge loss for energy over every existing (in-distribution) piece of data and every auxiliary (OOD) piece of data is added to create the energy regularization loss. Similarly, in semantic segmentation, OOD detection performance is enhanced by using the auxiliary dataset of outliers. Meta-OOD [5] organized the auxiliary dataset of outliers from scenes of the COCO dataset [27]. Although the process of creating the auxiliary data is different from image classification, the training loss is comparable. Meta-OOD adopts the regularization loss proposed by OE. Recently, PEBAL [41] also adopts the energy regularization loss proposed by EnergyOE.
However, when regularizing auxiliary data, the existing methods for OOD detection do not take into account variations between auxiliary data samples. The variations are especially severe on real data such as semantic segmentation for autonomous driving. As seen in Figure 1a, for the pretrained model, the class distribution of the auxiliary OOD data is not uniform across classes, i.e., imbalanced. To address the imbalance problem, we regularize the auxiliary data differently for each sample. To achieve this, we propose a balanced energy regularization loss to apply higher regularization to majority classes than minority classes in auxiliary data.
Figure 1. Overview of our approach in the semantic segmentation task. (a): Class distribution of cut-pasted OOD pixels collected from 10000 synthesized scene images; (b): OOD detection result on the Fishyscapes validation sets. Our balanced energy PEBAL (Ours) is the method that substitutes the energy regularization loss in PEBAL [41] with our balanced energy regularization loss.
In other words, auxiliary samples of majority classes receive a larger energy constraint than samples of minority classes. We introduce the term Z, which indicates whether a sample belongs to the majority or minority of a class. Z is the weighted sum of the softmax output of the classification model for a sample (i.e., the posterior probability of a class for a given sample), where the weight is the prior probability for the class. Unlike the existing energy regularization loss, our balanced energy regularization loss adjusts to the value of Z for an auxiliary data sample. Two adaptive loss components make up our loss: a loss margin and a loss weight. The adaptive loss margin provides an additional Z-proportional margin in the squared hinge loss for auxiliary data. The adaptive loss weight gives a weight proportional to Z to the squared hinge loss; a schematic sketch of this loss is given after the contribution list below.
We confirm our novel loss on three tasks: semantic segmentation, long-tailed image classification, and image classification. The proposed loss is simple but generally effective for various tasks. Figure 1b illustrates how our method outperforms the previous state-of-the-art (SOTA) algorithm PEBAL in the semantic segmentation task by replacing the energy regularization loss with our loss. OOD detection performance is also enhanced when using our loss compared to the baseline (EnergyOE), which uses only the energy regularization loss. In all image classification tasks, we evaluate our method on the semantically coherent OOD detection (SC-OOD) benchmark [46]. In the long-tailed image classification task, our approach reveals superior OOD performance compared to both the OE and EnergyOE methods, which use auxiliary data. In addition, our approach outperforms the previous SOTA method PASCL [43]. Similarly, in the image classification task, we demonstrate the superiority of our loss by outperforming both OE and EnergyOE, which make use of auxiliary data. The contributions are summarized as:
• By making inferences based on previously trained models, we explain the imbalanced distribution of auxiliary OOD data.
• We suggest a novel balanced energy regularization loss to address the class imbalance in auxiliary OOD data.
• The proposed balanced loss performs better for OOD detection than the previous energy regularization loss.
• The SOTA performance for OOD detection in two tasks is achieved by our OOD detection method.
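The following sketch expresses the balanced energy regularization idea as we read it from the text: on top of the standard squared-hinge energy regularization, auxiliary samples whose softmax output leans toward majority classes (large Z) receive a larger margin and a larger weight. It is not the authors' released code; the prior vector, margins, and scaling constants are illustrative assumptions.

```python
# Schematic sketch (our reading, not the released code) of a balanced energy
# regularization loss with Z-dependent margin and weight for auxiliary OOD data.
import torch
import torch.nn.functional as F

def energy(logits):
    return -torch.logsumexp(logits, dim=1)            # standard free-energy score

def balanced_energy_reg(logits_in, logits_out, class_prior,
                        m_in=-23.0, m_out=-5.0, alpha=1.0, beta=1.0):
    """logits_in: (B_in, C) in-distribution logits; logits_out: (B_out, C)
    auxiliary OOD logits; class_prior: (C,) prior over classes of the
    auxiliary data, summing to one. m_in, m_out, alpha, beta are assumed."""
    # Z as described in the text: prior-weighted sum of the softmax outputs.
    z = (F.softmax(logits_out, dim=1) * class_prior[None, :]).sum(dim=1)
    loss_in = F.relu(energy(logits_in) - m_in).pow(2).mean()
    margin = m_out + alpha * z                        # additional Z-proportional margin
    weight = 1.0 + beta * z                           # Z-proportional loss weight
    loss_out = (weight * F.relu(margin - energy(logits_out)).pow(2)).mean()
    return loss_in + loss_out

# Hypothetical usage with 10 classes and a long-tailed auxiliary prior.
prior = torch.tensor([0.3, 0.2, 0.15, 0.1, 0.08, 0.06, 0.05, 0.03, 0.02, 0.01])
loss = balanced_energy_reg(torch.randn(8, 10), torch.randn(8, 10), prior)
```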
Deitke_Phone2Proc_Bringing_Robust_Robots_Into_Our_Chaotic_World_CVPR_2023
Abstract Training embodied agents in simulation has become mainstream for the embodied AI community. However, these agents often struggle when deployed in the physical world due to their inability to generalize to real-world envi-ronments. In this paper, we present Phone2Proc, a method that uses a 10-minute phone scan and conditional proce-dural generation to create a distribution of training scenes that are semantically similar to the target environment. The generated scenes are conditioned on the wall layout and arrangement of large objects from the scan, while also sampling lighting, clutter, surface textures, and instances of smaller objects with randomized placement and materi-als. Leveraging just a simple RGB camera, training with Phone2Proc shows massive improvements from 34.7% to 70.7% success rate in sim-to-real ObjectNav performance ∗Equal contribution.across a test suite of over 200 trials in diverse real-world environments, including homes, offices, and RoboTHOR. Furthermore, Phone2Proc’s diverse distribution of gener-ated scenes makes agents remarkably robust to changes in the real world, such as human movement, object rearrange-ment, lighting changes, or clutter.
1. Introduction
The embodied AI research community has increasingly relied on visual simulators [30,49,61] to train embodied agents, with the expectation that the resulting policies can be transferred onto robots in the physical world. While agents trained within simulated environments have shown increased capabilities, progress in successfully deploying these policies onto physical robots has been limited.
Robots trained in simulation must overcome daunting challenges if they are to work effectively in a real space such as our home. First, they must overcome the generalization gap between the limited set of simulated environments they are trained on and the test scene of interest. In practice, policies trained to perform complex visual tasks with reinforcement learning struggle to perform well in novel scenes with novel layouts and object instances. Second, they must work in realistic environments where we live and work, which are often full of clutter, with objects that keep being moved around, with people in and out of the scene and with lighting changes. In short, we expect our agents to learn from a small set of training data points and generalize not just to a single test data point, but to a distribution of test data that is often semantically distant from the training data. Today's methods are a ways away from delivering such performant, robust, and resilient robots [9,12].
In this work, we present Phone2Proc, which represents a significant advancement towards the goal of creating performant, robust, and resilient robots. Instead of training policies in simulated environments that may be semantically distant from the target physical scene, Phone2Proc efficiently generates a distribution of training environments that are semantically similar to the target environment. This significantly reduces the generalization gap between the training and target distributions, resulting in more capable robots.
Phone2Proc utilizes a freely available mobile application to quickly scan a target environment and create a template of the surroundings, including the scene layout and 3D placements of large furniture. This template is then used to conditionally generate a fully interactive simulated world using ProcTHOR [13], closely mirroring the real-world space. Importantly, this single simulated environment is then transformed into a distribution of simulated worlds by randomizing objects, their placements, materials, textures, scene lighting, and clutter. This allows for the creation of arbitrarily large training datasets that are semantically similar to the desired real-world scene.
We produce policies for object goal navigation using Phone2Proc and deploy them onto a LoCoBot robot in the physical world. We conduct extensive evaluations with 234 episodes in five diverse physical environments: a 3-room and 6-room apartment, a test scene from RoboTHOR-real, a conference room, and a cafeteria. This represents one of the largest and most diverse studies of sim-to-real indoor navigation agents to date. Across all environments, Phone2Proc significantly outperforms the state-of-the-art embodied AI model built with ProcTHOR, with an average improvement in success rate from 34.7% to 70.7%.
Our robot is able to explore the scene efficiently and effectively navigate to objects of interest, even in the presence of clutter, lighting changes, shifts in furniture, and human movement. These strong navigation results are achieved using an RGB-only camera, no depth sensors, no localization sensors, and no explicit mapping components. In summary, we present: (1) Phone2Proc, a simple and highly effective method for reducing the generalization gap between datasets of simulated environments and a target environment in the real world, (2) large-scale real-world robotics experiments with 234 trials showing significant improvements for Phone2Proc compared to state-of-the-art models, and (3) experiments demonstrating the robustness of Phone2Proc in the face of variations such as changes in lighting, clutter, and human presence.
Girase_Latency_Matters_Real-Time_Action_Forecasting_Transformer_CVPR_2023
Abstract We present RAFTformer, a real-time action forecasting transformer for latency-aware real-world action forecast-ing. RAFTformer is a two-stage fully transformer based architecture comprising of a video transformer backbone that operates on high resolution, short-range clips, and a head transformer encoder that temporally aggregates infor-mation from multiple short-range clips to span a long-term horizon. Additionally, we propose a novel self-supervised shuffled causal masking scheme as a model level augmen-tation to improve forecasting fidelity. Finally, we also pro-pose a novel real-time evaluation setting for action fore-casting that directly couples model inference latency to overall forecasting performance and brings forth a hith-erto overlooked trade-off between latency and action fore-casting performance. Our parsimonious network design fa-cilitates RAFTformer inference latency to be 9×smaller than prior works at the same forecasting accuracy. Ow-ing to its two-staged design, RAFTformer uses 94%less training compute and 90%lesser training parameters to outperform prior state-of-the-art baselines by 4.9points on EGTEA Gaze+ and by 1.4points on EPIC-Kitchens-100 validation set, as measured by Top-5 recall (T5R) in the offline setting. In the real-time setting, RAFTformer outperforms prior works by an even greater margin of upto 4.4T5R points on the EPIC-Kitchens-100 dataset. Project Webpage: https://karttikeya.github. io/publication/RAFTformer/ .
1. Introduction
Latency matters. It is a crucial system design consideration for countless applications that operate in real-time, from hardware design [65], network engineering [63], and satellite communications [30] to capital trading [32], human vision [59] and COVID transmission patterns [54]. However, it has not been a center-stage design consideration in modern computer vision systems of the past decade [11,45]. Modern vision system design has largely focused on the correctness of systems rather than the latency of the predictions. While vision-based forecasting systems are often meant for embodied real-time deployment on autonomous agents like self-driving cars and robots, they are evaluated in an offline setting where inference latency is neglected (Figure 1). Interestingly, recent neural network architectures have adopted FLOPs as a proxy for latency as a second axis for model design. While a sufficient fidelity metric for offline after-the-fact applications like automatic content recognition, latency often comes second to correctness, even for real-time systems such as forecasting models.
*Work done during Harshayu's internship at HRI under the supervision of Chiho Choi, who is now at Samsung Semiconductor US. Karttikeya Mangalam is the corresponding author.
Figure 1. Action Forecasting is the task of predicting actions that will happen after a pre-determined time span, say tf seconds, into the future. Prior works consider an offline evaluation setting that ignores the model inference latency. We propose a latency-aware real-time evaluation setting where the model is required to finish forecasting tf seconds before the target time. We present RAFTformer, a fast action anticipation transformer that outperforms prior works both in the offline & real-time setting while forecasting actions in real-time (≥25 FPS).
Forecasting empowers reactive planning [17]. An autonomous system present in rich human environments inevitably needs to understand human actions around it for smooth task planning and execution. Autonomous agent planning critically depends on anticipating the future of the scene in various forms such as trajectory prediction [22, 23, 57, 58], action forecasting [19, 25, 80] or future scene segmentation [8], and anticipating the future is an activity humans subconsciously do for day-to-day tasks [60]. And while vision-based forecasting systems are often meant for embodied real-time deployment on autonomous agents like autonomous cars and robots, they are evaluated in an offline setting where inference latency is neglected (Figure 1).
In this work, we propose a real-time evaluation setting (Figure 1) that closely mimics the real-world deployment for a forecasting system. Suppose that in a real-time system, the design specifications require the forecasting system outputs tf seconds in advance of the event to be able to plan and use the forecasts effectively.
In current offline settings, the forecasting system begins the inference tf seconds in advance of the event ('Present' in Figure 1) and the model latency is ignored (or assumed to be 0) such that the predictions are available instantly. However, in our proposed real-time setting, the model is required to start inference in advance of 'Present' so that the outputs are available with a horizon of tf seconds, meeting the design specification.
We observe that in the real-time setting, the prior works fare quite poorly because of their slow model inference latency (Table 3). A large latency implies that the model has to start inference further in the past and has to rely on older video data to make forecasts with the benefit of more expressiveness (Figure 2). A smaller latency means the model can enjoy more recent video data but has limited capacity. Simply said, models that are only evaluated in the offline setting may fare poorly in the real-time deployment setting due to their latency-agnostic design (Figure 2).
We present RAFTformer, a real-time action forecasting transformer that uses a two-stage transformer encoder-based network for lightning-fast forecasts in inference. RAFTformer uses a shuffled causal masking scheme based feature prediction loss for learning strong temporal cues that transfer to feature prediction. Further, RAFTformer uses specialized anticipation tokens for learning to predict actions at multiple temporal horizons that improve model reasoning capabilities for short-term action forecasting as well. Finally, the model is explicitly designed for real-time embodied deployments that allows inference up to an order of magnitude faster than prior state-of-the-art methods. In summary, our contributions are three-fold.
First, we propose the Real-time Action Forecasting Transformer (RAFTformer), a real-time action forecasting transformer with latency at least 9× smaller than prior state-of-the-art action forecasting methods. RAFTformer uses specialized anticipation tokens and a novel shuffled causal masking-based self-supervision loss that allows it to outperform prior work while maintaining low latency with a reduction of 94% in GPU training time and 90% in the number of trainable parameters compared to prior works. To the best of our knowledge, our work is the first to achieve action anticipation in real-time (i.e., 25 fps).
Figure 2. Evaluation Performance vs. Latency. Bigger models perform better in latency-agnostic offline settings. In the real-time evaluation setting, we observe that, beyond a limit, bigger models with higher latency cause a drop in forecasting performance. In practical deployment, there exists a trade-off between latency and high-fidelity forecasts. See §4.3.1 for details.
Second, we propose a latency-aware real-time evaluation setting (Figure 1) that better mimics practical deployment settings for embodied forecasting systems, as sketched after the contribution list below. Real-time evaluation demonstrates a clear trade-off between inference latency and model forecasting fidelity, paving the path for the development of latency-aware forecasting models in the future (also see [20]).
Third, through extensive experiments, we show that RAFTformer outperforms prior state-of-the-art methods by 4.9 points on the EGTEA Gaze+ dataset, by 1.4 points on the EPIC-Kitchens-100 dataset according to the Top-5 Recall metric, and by a relative margin of 5.3% on the top-1 accuracy metric on the EPIC-Kitchens-55 dataset.
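Below is a minimal sketch of the latency-aware evaluation idea as we understand it, not the authors' benchmark code: a model with inference latency `lat` must start early, so it may only consume video up to (present − lat) while still being scored on the action occurring tf seconds after the present. The `model.latency` and `model.predict` interfaces are assumptions for illustration.

```python
# Minimal sketch (our reading, not the benchmark code) of latency-aware,
# real-time action forecasting evaluation.
def observation_cutoff(t_present, latency):
    """Latest video timestamp the model may consume in the real-time setting."""
    return t_present - latency

def evaluate_real_time(model, frames, t_present=10.0, t_f=1.0):
    """`frames` is a hypothetical list of (timestamp, frame) pairs;
    a slower model sees staler frames but must still forecast the action
    happening t_f seconds after t_present."""
    cutoff = observation_cutoff(t_present, model.latency)
    observed = [f for (t, f) in frames if t <= cutoff]
    # The effective anticipation horizon grows with the model's own latency.
    return model.predict(observed, horizon=t_f + model.latency)
```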
Ashutosh_HierVL_Learning_Hierarchical_Video-Language_Embeddings_CVPR_2023
Abstract Video-language embeddings are a promising avenue for injecting semantics into visual representations, but exist-ing methods capture only short-term associations between seconds-long video clips and their accompanying text. We propose HierVL, a novel hierarchical video-language em-bedding that simultaneously accounts for both long-term and short-term associations. As training data, we take videos accompanied by timestamped text descriptions of human actions, together with a high-level text summary of the activity throughout the long video (as are available in Ego4D). We introduce a hierarchical contrastive train-ing objective that encourages text-visual alignment at both the clip level and video level. While the clip-level con-straints use the step-by-step descriptions to capture what is happening in that instant, the video-level constraints use the summary text to capture why it is happening, i.e., the broader context for the activity and the intent of the ac-tor. Our hierarchical scheme yields a clip representation that outperforms its single-level counterpart as well as a long-term video representation that achieves SotA results on tasks requiring long-term video modeling. HierVL success-fully transfers to multiple challenging downstream tasks (in EPIC-KITCHENS-100, Charades-Ego, HowTo100M) in both zero-shot and fine-tuned settings.
1. Introduction
Understanding human activity in video is a fundamental vision problem with abundant applications in augmented reality, robotics, and information retrieval. The field has made exciting advances, from new models for recognition [24, 53, 86] and self-supervised representations [55, 58, 61, 90] to major datasets [16, 34, 63, 74, 106]. Nonetheless, activity understanding in video lags noticeably behind object understanding in images, where today's AI models compete well with people.
One key reason for this discrepancy is the fact that whereas objects present themselves directly in the pixels—no subtext required—activity naturally has broad temporal context rooted in the human actor's (latent) intentions. Not only does an activity stretch across video frames, but also its interpretation relies on the larger context of what the person is trying to accomplish. Thus, there is a natural hierarchy of information in video, starting with the short-term "what the person is literally doing right now" (e.g., reaching for the stove) and going all the way to the long-term "what the person aims to do" (e.g., cook dinner).
Website: https://vision.cs.utexas.edu/projects/hiervl/
Figure 1. Conventional video-language embeddings are trained to match short-term clips with their corresponding descriptions, e.g., open tap (in orange boxes), thus capturing what is happening. Our hierarchical video-language embedding (in dotted blue box) learns both short-term and long-term visual-text relations, thereby capturing why it is happening (e.g., making salad dressing). Long-term intent is conveyed by textual summaries (blue) that give an abstractive summary of the whole video, and complement the more literal step-by-step narrations (green).
As a step towards capturing this hierarchy, we explore video-language representation learning. Video often has accompanying timestamped text, whether from spoken narrations in a how-to video [63, 75, 106], closed caption text and scripts [9,76], or deliberate text annotations [16,34,91]. Existing video-language models learn a correspondence between the two modalities by matching short video segments with their text counterpart, typically with a learned embedding [3, 55, 61, 90] that produces a language-enriched video clip encoder. However, this standard approach risks capturing only the short-term actions. Granular comments such as "now I pour milk in the pan" or "he picked up a water hose" fail to capture the overall goal of the activity, like making a coffee or cleaning a car. As a result, at inference time their encodings for unseen videos can be myopic and miss sequential dependencies between observed events.
To tackle this problem, we introduce HierVL: a novel hierarchical video-language model that captures both short-term actions and long-term intents in video. Unlike standard video-language embeddings, our method aims to simultaneously capture the immediate observed actions as well as their contribution to the longer-term goal.
To that end, given training video accompanied by timestamped clip-level text descriptions as well as global (video-level) text summaries , HierVL learns a video-text embedding for hierarchical tem-poral understanding using two layers of contrastive learn-ing. The top (parent) layer encourages the aggregated video clips to be close to the overarching textual summary (e.g., he makes spaghetti dinner ), while the bottom (child) layer trains individual clips to be similar to their respective de-scriptions (e.g., he turns on the cooker ). See Fig. 1. To our knowledge, ours is the first work to create a hier-archical video-language embedding. Our idea to blend ab-stract textual summaries with literal text descriptions is new. Furthermore, our model design addresses constituent tech-nical challenges—namely, we circumvent the typical ex-pense of long-term feature learning [4, 43, 86] by using ag-gregation of short-term features, and we show how to jointly train with two levels of annotation in a way that staves off catastrophic forgetting of either layer. This hierarchical training yields not only global video-level representations that capture long-term information (e.g., intent and temporal dependencies), but also clip-level video features that are more expressive than those tradi-tionally learned via single-level schemes. This happens by means of our parent-child learning framework, which re-quires the aggregation of clip features within a video to match the long-term context captured by the summary. We demonstrate our model by training with the narra-tions and summaries in the 3,670-hour egocentric video dataset Ego4D [13, 34]. We show that HierVL outperforms strong baselines and state-of-the-art methods for multiple video benchmarks, successfully transferring its pretrained representation for inference on Charades-Ego [74], EPIC-KITCHENS [16], and HowTo100M [63].1We evaluate our representations on both hierarchy levels. In particu-lar, at the time of submission, HierVL achieves state-of-the-art performance on Ego4D Long Term Anticipation (LTA), Charades-Ego Action Recognition, EPIC-KITCHENS-100 1Note that we do not need any text or summary annotations for these downstream datasets and tasks.Multi-Instance Retrieval (zero-shot and fine-tuned settings), and HowTo100M Long Video Classification.
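As a concrete reading of the two-level objective described above, the sketch below pairs a clip-level contrastive term (clips vs. their narrations) with a video-level term (aggregated clip embeddings vs. the summary text). It is our own simplification, not the HierVL training code; the InfoNCE form, mean-pooling aggregation, and loss weighting are assumptions.

```python
# Schematic sketch (our simplification, not the HierVL code) of a
# hierarchical clip-level + video-level contrastive objective.
import torch
import torch.nn.functional as F

def info_nce(a, b, tau=0.07):
    """a, b: (N, D) L2-normalized embeddings, matched row-by-row."""
    logits = a @ b.t() / tau
    targets = torch.arange(a.shape[0], device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

def hierarchical_loss(clip_emb, narration_emb, summary_emb, clips_per_video, w=0.5):
    """clip_emb, narration_emb: (B*K, D); summary_emb: (B, D); K = clips_per_video.
    The parent level compares the mean of each video's clip embeddings with its
    summary text embedding; the child level aligns clips with narrations."""
    child = info_nce(F.normalize(clip_emb, dim=-1), F.normalize(narration_emb, dim=-1))
    video_emb = clip_emb.view(-1, clips_per_video, clip_emb.shape[-1]).mean(dim=1)
    parent = info_nce(F.normalize(video_emb, dim=-1), F.normalize(summary_emb, dim=-1))
    return child + w * parent

# Hypothetical shapes: 4 videos, 16 clips each, 256-d embeddings.
loss = hierarchical_loss(torch.randn(64, 256), torch.randn(64, 256),
                         torch.randn(4, 256), clips_per_video=16)
```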
Gou_Rethinking_Image_Super_Resolution_From_Long-Tailed_Distribution_Learning_Perspective_CVPR_2023
Abstract Existing studies have empirically observed that the reso-lution of the low-frequency region is easier to enhance than that of the high-frequency one. Although plentiful works have been devoted to alleviating this problem, little under-standing is given to explain it. In this paper, we try to give a feasible answer from a machine learning perspective, i.e., the twin fitting problem caused by the long-tailed pixel dis-tribution in natural images. With this explanation, we refor-mulate image super resolution (SR) as a long-tailed distri-bution learning problem and solve it by bridging the gaps of the problem between in low-and high-level vision tasks. As a result, we design a long-tailed distribution learning so-lution, that rebalances the gradients from the pixels in the low-and high-frequency region, by introducing a static and a learnable structure prior. The learned SR model achieves better balance on the fitting of the low-and high-frequency region so that the overall performance is improved. In the experiments, we evaluate the solution on four CNN-and one Transformer-based SR models w.r.t. six datasets and three tasks, and experimental results demonstrate its superiority.
1. Introduction
Image super resolution aims to restore a high-resolution (HR) image from a low-resolution (LR) one, which is an important technique in image processing [13,26,27,52] and computer vision [7,14,18,45,51]. In the past decades, plentiful SR methods have been proposed [19, 53] and applied to a wide range of real-world applications [21, 47, 49, 54]. Among existing studies, the learning-based methods that learn a mapping between LR and HR image spaces have achieved the state-of-the-art performance [17,39,43,58,59]. Nonetheless, they have empirically observed that the high-frequency regions are harder to be super-resolved than the low-frequency ones in the natural image.
Figure 1. The long-tailed pixel distribution in the natural image. For a given HR image I^HR, we take its ×4 LR version I^LR as a showcase, and utilize Bicubic Interpolation (BI) and MSRResNet [25] (MSRRN) to super-resolve it, obtaining I^SR_BI and I^SR_MSRRN, respectively. The top row shows the absolute difference (AD) in the luminance channel, and the bottom row shows the pixel number at different AD intervals (in-figure annotations report roughly 20%, 17%, and 4% of pixels with AD > 0.1 across the panels). From the top row, one could observe that i) both BI and MSRRN achieve better results in the low- than high-frequency regions; ii) MSRRN performs significantly better than BI in the high-frequency regions while only slightly better in the low ones. From the bottom row, one could see that iii) the pixel distribution w.r.t. the low- and high-frequency region is long-tailed, i.e., the number of pixels in the low-frequency regions is far more than that in the high-frequency ones. Clearly, such an imbalanced pixel distribution necessarily results in the twin fitting problem, i.e., overfitting majority pixels in the low-frequency region while underfitting minority pixels in the high-frequency one.
To alleviate that, various SR methods have been proposed following the below two paradigms, i.e., developing generalized models with larger capacities [31,36] or specific models with high-frequency enhancements [37,48]. The former obtains better results in both the high- and low-frequency regions via constantly enlarging the capacities, while the latter enhances the high-frequency regions through specific auxiliary sub-networks, loss functions, training strategies, etc. Although promising results have been obtained, they involve the following three limitations. First, the large-capacity models take a lot of time and computation in training and inference, which is unavailable to mobile scenarios. Second, the specific models need ingenious designs of the architecture and training strategy, which are difficult to train and prone to artifacts. Third, they do not dive into the problem and give a reasonable explanation, and thus do not alleviate the problem in the most cost-effective way.
In this paper, we dive into the problem and explain it from a machine learning perspective, i.e., the twin fitting problem caused by the long-tailed pixel distribution in natural images. Taking Fig. 1 as an example, the number of pixels in the low-frequency region is far more than that in the high-frequency one, i.e., the long-tailed pixel distribution.
Since majority pixels in the low-frequency region dominate minority pixels in the high-frequency one, the gra-dients of SR model are mainly from the former instead of the latter. As a result, the SR model is optimized to mainly fit the pixels in the low-frequency region, and thus over-fitting them while underfitting those in the high-frequency region, i.e., the twin fitting problem. Motivated by the above explanation, we reformulate SR as the long-tailed distribution learning problem. With this reformulation, the twin fitting problem could be alleviated during training in a model-agnostic way, and thus applicable to different SR models. However, although the long-tailed distribution learning problem has been extensively studied in high-level vision tasks, there are few works on it in low-level ones. Therefore, we bridge the gaps of the problem between in low-and high-level vision ones, and design a simple and effective solution to verify the feasibility of our reformulation. To be specific, we design a novel long-tailed distribution learning method for SR, termed as Focal Pixel Learning (FPL), which adaptively re-weights the loss con-tribution of pixels by combining two complementary struc-ture priors. In this way, the gradients of SR model could be rebalanced, leading it to achieve better balance on the fitting of the high-and low-frequency regions. The contributions of this work are summarized below. For the first time, this work dives into the observation that the high-frequency regions are harder to be super-resolved than the low-frequency ones, and gives a rea-sonable explanation, i.e., the long-tailed pixel distribu-tion and it caused twin fitting problem. With our explanation, this work reformulates SR as a long-tailed distribution learning problem and designs a novel solution to verify its feasibility, which could be the first long-tailed distribution learning solution for SR, as far as we know. Extensive analyses and experiments are conducted to demonstrate the explanation, verify the reformulation, and validate the solution. The results demonstrate that our works could consistently improve the performance of SR models with different complexities.2. Related Works Here, we briefly review the related works of image super resolution and long-tailed distribution learning.
Hong_Watch_or_Listen_Robust_Audio-Visual_Speech_Recognition_With_Visual_Corruption_CVPR_2023
Abstract This paper deals with Audio-Visual Speech Recognition (AVSR) under multimodal input corruption situations where audio inputs and visual inputs are both corrupted, which is not well addressed in previous research directions. Previ-ous studies have focused on how to complement the cor-rupted audio inputs with the clean visual inputs with the assumption of the availability of clean visual inputs. How-ever, in real life, clean visual inputs are not always acces-sible and can even be corrupted by occluded lip regions or noises. Thus, we firstly analyze that the previous AVSR mod-els are not indeed robust to the corruption of multimodal input streams, the audio and the visual inputs, compared to uni-modal models. Then, we design multimodal input cor-ruption modeling to develop robust AVSR models. Lastly, we propose a novel AVSR framework, namely Audio-Visual Reliability Scoring module (AV-RelScore), that is robust to the corrupted multimodal inputs. The AV-RelScore can de-termine which input modal stream is reliable or not for the prediction and also can exploit the more reliable streams in prediction. The effectiveness of the proposed method is evaluated with comprehensive experiments on popular benchmark databases, LRS2 and LRS3. We also show that the reliability scores obtained by AV-RelScore well reflect the degree of corruption and make the proposed model fo-cus on the reliable multimodal representations.
1. Introduction
Imagine you are watching the news on YouTube. Whether the recording microphone is the problem or the video encoding is wrong, the anchor's voice keeps breaking off, so you cannot hear well. You try to understand her by her lip motions but, making matters worse, the microphone keeps covering her mouth, so the news is hardly recognizable. These days, people often face these kinds of situations, even in video conferences or interviews where the internet connection cuts in and out. As understanding speech is a core part of human communication, there have been a number of works on speech recognition [1,2], especially based on deep learning. These works have tried to enhance audio representations for recognizing speech in noisy situations [3–6] or to utilize additional visual information for complementary effects [7–12]. Recently, technologies that comprehend speech from visual information alone have even been developed [13–21]. With these research efforts, automatic speech recognition technologies including Audio Speech Recognition (ASR), Visual Speech Recognition (VSR), and Audio-Visual Speech Recognition (AVSR) have achieved great progress with outstanding performance [22–24]. With the advantage of utilizing multimodal inputs, audio and visual, AVSR that can robustly recognize speech even in a noisy environment, such as a crowded restaurant, is emerging as a key future speech recognition technology. However, previous studies have mostly considered the case where the audio inputs are corrupted, utilizing additional clean visual inputs to complement the corrupted audio information. Looking at this case, we come up with an important question: what if both visual and audio information are corrupted, even simultaneously? In real life, as in the aforementioned news situation, cases where both visual and audio inputs are corrupted, alternately or even simultaneously, happen frequently. To address this question, we first analyze the robustness of previous ASR, VSR, and AVSR models in three different input corruption situations: 1) audio input corruption, 2) visual input corruption, and 3) audio-visual input corruption. We then show that previous AVSR models are indeed not robust to audio-visual input corruption and show even worse performance than uni-modal models, eventually losing the benefit of utilizing multimodal inputs. To maximize the superiority of multimodal systems over uni-modal systems, in this paper we propose a novel multimodal corruption modeling method and show its importance in developing robust AVSR technologies for diverse input corruption situations, including audio-visual corruption. To this end, we model visual corruption with lip occlusion and noises composed of blurry frames and additive noise perturbation, along with audio corruption modeling. Then, we propose a novel AVSR framework, namely the Audio-Visual Reliability Scoring module (AV-RelScore), that can evaluate which modality of the current input representations is more reliable than the others.
The proposed AV-RelScore produces reliability scores for each time step, which represent how much the current audio features and visual features are helpful for recognizing speech. With the reliability scores, meaningful speech representations can be emphasized in each modal stream. Then, through a multimodal attentive encoder, the emphasized multimodal representations are fused by considering inter-modal relationships. Therefore, with AV-RelScore, the AVSR model can refer to the audio stream when the given visual stream is determined to be less reliable (i.e., corrupted), and vice versa. We provide the audio-visual corruption modeling for reproducibility and future research. Our key contributions are as follows:
• To the best of our knowledge, this is the first attempt to analyze the robustness of deep learning-based AVSR under the corruption of multimodal inputs, including lip occlusions.
• We propose an audio-visual corruption modeling method and show that it is key to developing robust AVSR technologies under diverse environments.
• We propose the Audio-Visual Reliability Scoring module (AV-RelScore) to figure out whether the current input modality is reliable or not, so as to robustly recognize the input speech even if one modality is corrupted, or even both.
• We conduct comprehensive experiments with ASR, VSR, and AVSR models to validate the effectiveness of the proposed audio-visual corruption modeling and AV-RelScore on LRS2 [25] and LRS3 [26], the largest audio-visual datasets obtained in the wild.
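A minimal sketch of how per-time-step reliability scores can gate and fuse the two streams is given below. It is not the exact AV-RelScore architecture (which the paper pairs with a multimodal attentive encoder); the MLP scorers, the sigmoid gating, and the `ReliabilityGate` name are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class ReliabilityGate(nn.Module):
    """Sketch of reliability-scored fusion: each modality gets a per-time-step
    score in [0, 1] that gates its features before a shared fusion layer."""
    def __init__(self, dim_a, dim_v, dim_out):
        super().__init__()
        self.score_a = nn.Sequential(nn.Linear(dim_a, dim_a), nn.ReLU(),
                                     nn.Linear(dim_a, 1), nn.Sigmoid())
        self.score_v = nn.Sequential(nn.Linear(dim_v, dim_v), nn.ReLU(),
                                     nn.Linear(dim_v, 1), nn.Sigmoid())
        self.fuse = nn.Linear(dim_a + dim_v, dim_out)

    def forward(self, feat_a, feat_v):
        # feat_a: (B, T, dim_a) audio features; feat_v: (B, T, dim_v) visual features
        r_a = self.score_a(feat_a)          # (B, T, 1) audio reliability per step
        r_v = self.score_v(feat_v)          # (B, T, 1) visual reliability per step
        fused = self.fuse(torch.cat([r_a * feat_a, r_v * feat_v], dim=-1))
        return fused, r_a, r_v              # scores can be inspected or supervised
```

Returning the scores alongside the fused features makes it easy to visualize how well they track the degree of corruption, as the abstract above reports for AV-RelScore.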
Gao_VisFusion_Visibility-Aware_Online_3D_Scene_Reconstruction_From_Videos_CVPR_2023
Abstract We propose VisFusion, a visibility-aware online 3D scene reconstruction approach from posed monocular videos. In particular, we aim to reconstruct the scene from volumetric features. Unlike previous reconstruction meth-ods which aggregate features for each voxel from input views without considering its visibility, we aim to improve the feature fusion by explicitly inferring its visibility from a similarity matrix, computed from its projected features in each image pair. Following previous works, our model is a coarse-to-fine pipeline including a volume sparsification process. Different from their works which sparsify voxels globally with a fixed occupancy threshold, we perform the sparsification on a local feature volume along each visual ray to preserve at least one voxel per ray for more fine de-tails. The sparse local volume is then fused with a global one for online reconstruction. We further propose to pre-dict TSDF in a coarse-to-fine manner by learning its resid-uals across scales leading to better TSDF predictions. Ex-perimental results on benchmarks show that our method can achieve superior performance with more scene details. Code is available at: https://github.com/huiyu-gao/VisFusion
1. Introduction
3D scene reconstruction from RGB videos is a critical task in 3D computer vision, which finds broad applications in augmented reality (AR), robot navigation and human-robot interaction. These applications require accurate, complete and real-time 3D reconstruction of scenes. While state-of-the-art SLAM systems [3, 31] can track the camera motion accurately by leveraging both visual and inertial measurements in an unknown environment, the reconstructed map from a SLAM system only contains sparse point clouds, so dense reconstruction from monocular videos remains a challenging problem. Many previous methods [1, 18] assume observation of the whole video sequence for the reconstruction, which is not practical for online applications like VR games. In this paper, we follow [26] and propose an online 3D reconstruction method. Given input images, most earlier 3D reconstruction methods [23,35] adopt a two-stage pipeline, which first estimates the depth map for each keyframe based on multi-view stereo (MVS) algorithms [11,14,29,32] and then fuses the estimated depth maps into a Truncated Signed Distance Function (TSDF) volume [19]. The Marching Cubes algorithm [16] is then used to extract the 3D mesh. However, those two-stage pipelines struggle to produce globally coherent reconstructions since each depth map is estimated separately [26], especially for low-texture regions like walls whose depth values are extremely hard to estimate from only a few local views. To address this, more recent works [2,26,33] propose to fuse image features into a global 3D volume and directly regress TSDF [26,33] or occupancy [2] given the feature volume. Such a strategy allows for end-to-end global surface reconstruction. The problem of occlusion naturally arises for global feature fusion. Previous methods [2, 26] either completely ignore it by simply averaging the multi-view features [26] for each voxel or implicitly model the visibility via the attention mechanism [2]. However, without explicit supervision, such attention cannot be guaranteed to encode the correct visibility. In this paper, we thus propose to explicitly predict the visibility weights of all views for each voxel with ground truth supervision. In addition, voxels are considered visible in at least one view in [2] due to the normalization of the attention mechanism, while in our method, empty voxels and fully occluded voxels are invisible in any view to avoid introducing noise. Specifically, given a fragment of a video sequence observing the same 3D region, we first project each 3D voxel onto the different view images to obtain 2D features. We then compute the pair-wise similarities of these features. Since features of the same occupied voxel are often similar across views, such a similarity map naturally encodes whether a 3D voxel is visible in a particular camera view or not (see Fig. 4). We thus use this similarity map to predict visibility weights. For volumetric-based methods, it is common practice to adopt a coarse-to-fine pipeline [2,18,25,26]. One of its key steps is voxel sparsification, which eliminates empty voxels at the coarse level for better performance and smaller memory consumption.
To the best of our knowledge, previous meth-ods [2,18,25,26] propose to globally sparsify the volume by removing voxels whose occupancy probabilities are lower than a predefined threshold. However, such fixed threshold tends to sparsify more voxels than necessary, especially to remove voxels covering thin structures such as chair legs. At coarse level where the thin structure only occupies a small portion of the voxel, the features of such thin struc-ture are likely ignored leading to low occupancy probability prediction and resulting in the removal of such voxel. How-ever, such voxel should rank highly, based on the occupancy probability, among voxels along the visual ray defined by the pixel observing this thin structure. Inspired by this, we introduce a novel ray-based sparsification process. In par-ticular, for any image, we first cast a ray from every pixel to get the voxels this ray passes. For each ray, we then keep voxels with top occupancy scores to next level. Unlike pre-vious works [2, 18, 25, 26] that sparsify the global volume, our ray-based sparsification is performed on local 3D vol-ume. Our ray-based sparsifying strategy allows us to retain more surface voxels to the next level leading to a more com-plete reconstruction. Furthermore, previous coarse-to-fine methods [2, 18, 25, 26] directly regress the TSDF at each level discarding the relationships between the TSDF predicted at coarse and that at fine level. In our method, at each fine level, we aim to pre-dict a residual between the TSDF volume upsampled from the coarser level and that of the fine level, which is shownto be more accurate in TSDF estimation. In summary, our contributions are (i) a visibility-aware feature fusion module which explicitly predicts visibility weights used for feature aggregation for voxels; (ii) a ray-based voxel sparsifying algorithm which leads to the recon-struction of more scene structure details. (iii) an easier way of TSDF regression by learning the residual to the upsam-pled coarse TSDF volume for improved TSDF estimation. Our model outperforms the existing online feature fusion based methods.
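The ray-based sparsification idea can be sketched as follows, assuming occupancy scores for a local volume and a precomputed ray-to-voxel index table. This is an illustrative implementation, not the released code; the padding convention with -1 and the function name are assumptions.

```python
import torch

def ray_based_sparsify(occ, ray_voxel_idx, k=4):
    """Keep the top-k scoring voxels along every visual ray (instead of a
    global occupancy threshold) so thin structures survive to the next level.

    occ:           (N,) occupancy scores for N voxels in the local volume
    ray_voxel_idx: (R, M) long tensor with indices of the voxels each of R rays
                   passes through, padded with -1 where a ray hits fewer than M voxels
    returns:       boolean mask of shape (N,) marking voxels kept for the next level
    """
    keep = torch.zeros_like(occ, dtype=torch.bool)
    valid = ray_voxel_idx >= 0                                     # (R, M) padding mask
    # Gather per-ray occupancy, assigning -inf to padded slots so they never win.
    scores = torch.full(ray_voxel_idx.shape, float("-inf"), device=occ.device)
    scores[valid] = occ[ray_voxel_idx[valid]]
    topk = scores.topk(k=min(k, scores.shape[1]), dim=1).indices   # (R, k)
    chosen = torch.gather(ray_voxel_idx, 1, topk)                  # voxel ids, may include -1
    chosen = chosen[chosen >= 0]
    keep[chosen] = True
    return keep
```

Keeping at least one voxel per ray guarantees that every observed pixel retains a candidate surface voxel, which is the property the fixed global threshold cannot provide.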
Huang_Feature_Shrinkage_Pyramid_for_Camouflaged_Object_Detection_With_Transformers_CVPR_2023
Abstract Vision transformers have recently shown strong global context modeling capabilities in camouflaged object detec-tion. However, they suffer from two major limitations: less effective locality modeling and insufficient feature aggre-gation in decoders, which are not conducive to camou-flaged object detection that explores subtle cues from in-distinguishable backgrounds. To address these issues, in this paper, we propose a novel transformer-based Feature Shrinkage Pyramid Network (FSPNet), which aims to hi-erarchically decode locality-enhanced neighboring trans-former features through progressive shrinking for camou-flaged object detection. Specifically, we propose a non-local token enhancement module (NL-TEM) that employs the non-local mechanism to interact neighboring tokens and explore graph-based high-order relations within tokens to enhance local representations of transformers. Moreover, we design a feature shrinkage decoder (FSD) with adja-cent interaction modules (AIM), which progressively ag-gregates adjacent transformer features through a layer-by-layer shrinkage pyramid to accumulate imperceptible but effective cues as much as possible for object information decoding. Extensive quantitative and qualitative experi-ments demonstrate that the proposed model significantly outperforms the existing 24 competitors on three challeng-ing COD benchmark datasets under six widely-used evalu-ation metrics. Our code is publicly available at https: //github.com/ZhouHuang23/FSPNet .
1. Introduction
Camouflage is a common defense or tactic in organisms that "perfectly" blend in with their surroundings to deceive predators (prey) or sneak up on prey (hunters). Camouflaged object detection (COD) [11] aims to segment camouflaged objects in the scene and has been widely applied in species conservation [29], medical image segmentation [5, 20], and industrial defect detection [3], etc.
Figure 1. Visual comparison of COD in different challenging scenarios, including small, large, multiple, occluded and boundary-uncertain camouflaged objects. Compared with the recently proposed ZoomNet [30] and SINet-v2 [10], our method provides superior performance with more accurate object localization and more complete object segmentation, mainly due to the proposed locality-enhanced global context exploration and progressive shrinkage decoder.
Due to the high similarity between camouflaged objects and their backgrounds, camouflaged objects are usually inconspicuous and indistinguishable, which brings great challenges to accurate detection. Recently, the development of deep learning and the availability of large-scale COD datasets (e.g., COD10K [11]) have significantly advanced camouflaged object detection. Numerous deep learning-based methods have been proposed, which can be roughly divided into three categories: targeted design of feature exploration modules, multi-task joint learning frameworks, and bio-inspired methods. Although these methods have made remarkable progress, they rely heavily on convolutional neural networks (CNNs), which cannot capture long-range dependencies due to their limited receptive fields, resulting in inferior performance for COD. As shown in Fig. 1, recently proposed state-of-the-art CNN-based methods (e.g., ZoomNet [30] and SINet-v2 [10]) fail to explore global feature relations and thus often produce predictions with incomplete object regions, especially for multiple objects, large objects and occlusion cases. Although larger convolution kernels or simply stacking multiple convolution layers with small kernels can enlarge receptive fields and thus alleviate this issue to some extent, doing so also dramatically increases the computational cost and the number of network parameters. Furthermore, studies [34] have shown that simply deepening the network is ineffective for long-range dependency modeling. Compared to CNNs, vision transformers (ViT) [7], which have recently been introduced into computer vision and demonstrated significant breakthroughs in various vision applications [17], can efficiently model long-range dependencies with self-attention operations and thus overcome the above drawbacks of CNN-based models. Recently, the works of [47] and [24] have attempted to accommodate transformers for COD and shown promising performance. These methods either employ a transformer as a network component for feature decoding or utilize off-the-shelf vision transformers as backbones for feature encoding.
Through a thorough analysis of these methods for COD, we observe two major issues within existing tech-niques: 1) Less effective local feature modeling for trans-former backbones. We argue that both global context and local features play essential roles in COD tasks. However, we observe that most transformer-based methods lack a lo-cality mechanism for information exchange within local re-gions. 2) Limitations of feature aggregation in decoders. Existing decoders (shown in Fig. 2 (a)-(d)) usually directly aggregate the features with significant information differ-ences ( e.g., low-level features with rich details and high-level features with semantics), which tends to discard some inconspicuous but valuable cues or introduce noise, result-ing in inaccurate predictions. This is a big blow for the task of identifying camouflaged objects from faint clues. To this end, in this paper, we propose a novel transformer-based Feature Shrinkage Pyramid Network, named FSPNet , which aims to hierarchically decode neigh-boring transformer features which are locality-enhanced global representations for camouflaged objects through pro-gressive shrinking, thereby excavating and accumulating rich local cues and global context of camouflaged objects in our encoder and decoder for accurate and complete camou-flaged object segmentation. Specifically, to complement lo-cal feature modeling in the transformer encoder, we proposea non-local token enhancement module (NL-TEM) which employs the non-local mechanism to interact neighboring similar tokens and explore graph-based high-level relations within tokens to enhance local representations. Further-more, we design a feature shrinkage decoder (FSD) with adjacent interaction modules (AIMs) which progressively aggregates adjacent transformer features in pairs through a layer-by-layer shrinkage pyramid architecture to accumu-late subtle but effective details and semantics as much as possible for object information decoding. Owing to the global context modeling of transformers, locality explo-ration within tokens and progressive feature shrinkage de-coder, our proposed model achieves state-of-the-art perfor-mance and provides an accurate and complete camouflaged object segmentation. Our main contributions are summa-rized as follows: • We propose a non-local token enhancement module (NL-TEM) for feature interaction and exploration be-tween and within tokens to compensate for locality modeling of transformers. • We design a feature shrinkage decoder (FSD) with the adjacent interaction module (AIM) to better aggregate camouflaged object cues between neighboring trans-former features through progressive shrinking for cam-ouflaged object prediction. • Comprehensive experiments show that our proposed FSPNet achieves superior performance on three widely-used COD benchmark datasets compared to 24 existing state-of-the-art methods.
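A rough sketch of the layer-by-layer shrinkage idea is given below, assuming non-overlapping pairwise aggregation with a simple concatenation-plus-convolution merge standing in for the actual adjacent interaction module (AIM); the pairing scheme, carry-over of an odd leftover map, and class names are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class AdjacentMerge(nn.Module):
    """Stand-in for the adjacent interaction module (AIM): merges two
    neighboring feature maps of equal shape into one."""
    def __init__(self, channels):
        super().__init__()
        self.merge = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))

    def forward(self, a, b):
        return self.merge(torch.cat([a, b], dim=1))

class ShrinkagePyramid(nn.Module):
    """Sketch of a feature shrinkage decoder: adjacent features are aggregated
    in pairs level by level until a single decoded map remains."""
    def __init__(self, channels, num_features=12):
        super().__init__()
        self.levels = nn.ModuleList()
        n = num_features
        while n > 1:
            self.levels.append(nn.ModuleList(
                [AdjacentMerge(channels) for _ in range(n // 2)]))
            n = n // 2 + n % 2           # an odd leftover map is carried over unchanged

    def forward(self, feats):            # feats: list of (B, C, H, W) maps, same shape
        for level in self.levels:
            merged = [m(feats[2 * i], feats[2 * i + 1]) for i, m in enumerate(level)]
            if len(feats) % 2 == 1:      # carry the unpaired last map to the next level
                merged.append(feats[-1])
            feats = merged
        return feats[0]                  # single aggregated map for prediction
```

The point of the pyramid is that each merge only has to reconcile two neighboring, information-wise similar features, rather than fusing low-level detail and high-level semantics in one jump.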
Ge_Improving_Zero-Shot_Generalization_and_Robustness_of_Multi-Modal_Models_CVPR_2023
Abstract Multi-modal image-text models such as CLIP and LiT have demonstrated impressive performance on image clas-sification benchmarks and their zero-shot generalization ability is particularly exciting. While the top-5 zero-shot accuracies of these models are very high, the top-1 accu-racies are much lower (over 25% gap in some cases). We investigate the reasons for this performance gap and find that many of the failure cases are caused by ambiguity in the text prompts. First, we develop a simple and efficient zero-shot post-hoc method to identify images whose top-1 prediction is likely to be incorrect, by measuring consis-tency of the predictions w.r.t. multiple prompts and image transformations. We show that our procedure better pre-dicts mistakes, outperforming the popular max logit base-line on selective prediction tasks. Next, we propose a simple and efficient way to improve accuracy on such uncertain im-ages by making use of the WordNet hierarchy; specifically we augment the original class by incorporating its parent and children from the semantic label hierarchy, and plug the augmentation into text prompts. We conduct experiments on both CLIP and LiT models with five different ImageNet-based datasets. For CLIP , our method improves the top-1 accuracy by 17.13% on the uncertain subset and 3.6% on the entire ImageNet validation set. We also show that our method improves across ImageNet shifted datasets, four other datasets, and other model architectures such as LiT. The proposed method1is hyperparameter-free, requires no additional model training and can be easily scaled to other large multi-modal architectures. Code is available athttps://github.com/gyhandy/Hierarchy-CLIP .
1. Introduction
Vision-language multi-modal models trained on large-scale data have achieved significant success in numerous domains and have demonstrated excellent zero-shot generalization ability [7, 12, 18, 19, 20, 28]. Given a test image and a set of candidate class labels, one can compute the similarity between the embedding of the image and the embedding of each candidate class label, and predict the class as the one with the highest similarity. The zero-shot top-1 accuracy for ImageNet [4] using CLIP variants (CLIP ViT-L) matches the performance of the original ResNet model trained from scratch. Recently, CLIP has been found to be more robust to distribution shift than ResNet, achieving good performance on ImageNet-V2 [21], ImageNet-R [9], ImageNet-A [11], and ImageNet-Sketch [25]. We noticed a large gap between the top-1 accuracy and top-5 accuracy, 64.2% vs. 89.4% respectively, revealing potential headroom for improvement. We investigated the cases where the top-1 prediction was incorrect but the top-5 prediction was correct, and identified several typical failure modes. Despite the well-known multi-label issues in ImageNet [1], we found that many of the remaining failure cases are caused by noise and ambiguous text prompts related to the WordNet hierarchical structure of ImageNet. Some class names are quite general, so the model cannot correctly match images from their specific subclasses. For example, hot-air balloon images belonging to the "balloon" class were misclassified as "airship", see Figure 1 middle. On the other hand, some class names are too specific, such that the model fails to correlate them with their more generic super-classes. For example, 96% of images with ground truth label "tusker" are wrongly classified as other elephant classes such as "Asian elephant", see Figure 1 left. The failure mode analysis suggests that the text encoder is very sensitive to inputs and, as a result, the overall classification lacks robustness. Inspired by these observations, we propose to first identify the subset of images whose top-1 prediction is likely to be incorrect, and then improve the accuracy for those images with a principled framework that augments their class labels using the WordNet hierarchy. To estimate whether an image has an incorrect prediction, i.e., to estimate the prediction confidence, we use the consistency of predictions under different text prompt templates and image augmentations as a signal for prediction confidence estimation. Although prediction confidence estimation has been well studied for single-modal classification models, we found that the commonly used confidence scores, maximum softmax probability [10] and maximum logit score [8], are not always reliable for the multi-modal CLIP and LiT models due to the poor calibration of the logit scores. For example, among the 1K classes in ImageNet, the class with the greatest mean logit value (computed as the cosine similarity between image and text embeddings) is "fig" (the fruit). Though we don't have access to CLIP's private training data, we hypothesize that this might be due to "fig" being a common abbreviation for "figure", which frequently occurs in the training data and thus includes many non-fruit illustrations.
In this work, we first propose a simple yet efficient zero-shot confidence estimation method better suited for CLIP, based on predictions’ self-consistency over different text prompts and image perturbations. [26] proposed using self-consistency among multiple model outputs to improve the reasoning accuracy of large language models. Here we extend the idea for confidence estimation in multi-modal models by measuring consistency of predictions under mul-tiple input text prompts and image transformations . Our method is effective at predicting mistakes; the identified low confidence subset has significantly lower top-1 accu-racy (21.58%) than the average accuracy (64.18%). Next, to improve the accuracy for the low confidence subset, we develop a label augmentation technique using Word-Net label hierarchy. Our method leverages semantic in-formation from ancestors (top-down) as well as children (bottom-up) and improves the top-1 accuracy of the subset to 38.71% (17.13% improvement). Our method not only improves model accuracy, but also model robustness, im-proving on ImageNet variants with distribution shift such as ImageNet-v2, ImageNet-R, ImageNet-Adversarial and Imagenet-Sketch. The main contributions of this work are: • We identified several failure modes for zero-shot Im-ageNet classification using multi-modal models, and our findings suggest that the text encoder is very sen-sitive to prompts. To improve the prediction accuracy, prompts need to be better designed. • We propose a simple yet efficient zero-shot confidence score that is better suited for multi-modal models, based on predictions’ self-consistency under different text prompts and image perturbations. • We develop a label augmentation technique that uses both ancestor and children labels from WordNet. By applying the label augmentation to the previously iden-tified low confidence subset of images, we signifi-cantly improve their prediction accuracy.
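The self-consistency confidence score can be sketched as follows, assuming L2-normalized image embeddings for several augmented views of one image and one class-embedding matrix per prompt template; the function name and the majority-vote agreement measure are illustrative choices, not necessarily the paper's exact formulation.

```python
import torch

def self_consistency_confidence(image_feats, text_feats_per_prompt):
    """Zero-shot confidence from agreement across prompts and augmentations.

    image_feats:           (V, D)    -- V augmented views of one image, L2-normalized
    text_feats_per_prompt: (P, K, D) -- P prompt templates, K classes, L2-normalized
    returns: (majority_class, confidence in [0, 1])
    """
    # Cosine-similarity logits for every view under every prompt template.
    logits = torch.einsum("vd,pkd->vpk", image_feats, text_feats_per_prompt)
    preds = logits.argmax(dim=-1).reshape(-1)        # (V * P,) predicted class indices
    majority = torch.mode(preds).values.item()
    confidence = (preds == majority).float().mean().item()
    return majority, confidence

# Images whose confidence falls below a chosen threshold are flagged as
# "uncertain" and handed to the WordNet label-augmentation step.
```

Because the score only needs repeated forward passes, it stays zero-shot and hyperparameter-free apart from the uncertainty threshold, in line with the goals stated above.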
Guo_Improving_Robustness_of_Vision_Transformers_by_Reducing_Sensitivity_To_Patch_CVPR_2023
Abstract
Despite their success, vision transformers still remain vulnerable to image corruptions, such as noise or blur. Indeed, we find that the vulnerability mainly stems from the unstable self-attention mechanism, which is inherently built upon patch-based inputs and often becomes overly sensitive to the corruptions across patches. For example, when we only occlude a small number of patches with random noise (e.g., 10%), these patch corruptions would lead to severe accuracy drops and greatly distract intermediate attention layers. To address this, we propose a new training method that improves the robustness of transformers from a new perspective – reducing sensitivity to patch corruptions (RSPC). Specifically, we first identify and occlude/corrupt the most vulnerable patches and then explicitly reduce sensitivity to them by aligning the intermediate features between clean and corrupted examples. We highlight that the construction of patch corruptions is learned adversarially to the following feature alignment process, which is particularly effective and essentially different from existing methods. In experiments, our RSPC greatly improves the stability of attention layers and consistently yields better robustness on various benchmarks, including CIFAR-10/100-C, ImageNet-A, ImageNet-C, and ImageNet-P.
1. Introduction
Despite the success of vision transformers [10] in recent years, they still lack robustness against common image corruptions [24, 52], such as noise or blur, and adversarial perturbations [13, 15, 42]. For example, even for state-of-the-art robust architectures, e.g., RVT [34] and FAN [61], the accuracy drops by more than 15% on corrupted examples, e.g., with Gaussian noise, as shown in Figure 2 (blue star on the right). We suspect that this vulnerability is inherent to the self-attention mechanism, which relies on patch-based inputs and may easily become overly sensitive to corruptions or perturbations upon them.
Figure 1. Sensitivity to patch perturbations/corruptions in terms of the confidence score of the ground-truth class. We randomly select 10% of patches to be perturbed/corrupted for RVT-Ti [34]. In practice, adversarial patch perturbations (often invisible) significantly reduce the confidence, indicating the high sensitivity of transformers to patches. However, directly adding random noise only yields marginal degradation even with the highest severity in ImageNet-C [24]. By contrast, occluding patches with noise greatly reduces the confidence and can be used as a good proxy of adversarial patch perturbations to reveal the patch sensitivity issue.
A piece of empirical evidence is that transformers can be easily misled by adversarial perturbations on only very few patches (even a single patch [13]). As shown in Figure 1, given a clean image, we randomly sample a small number of patches, e.g., 10%, and introduce perturbations/corruptions into them. Considering RVT [34] as a strong baseline, when we generate adversarial perturbations using PGD-5, these perturbed patches greatly reduce the confidence score from 63.8% to 3.1% and result in a misclassification. Nevertheless, generating adversarial perturbations can be very computationally expensive (e.g., 5× longer training time for PGD-5), which makes adversarial training often infeasible on large-scale datasets [26, 39, 53], e.g., ImageNet. Instead, an efficient alternative is directly adding corruptions, e.g., random noise, on top of these patches. In practice, even with the highest severity in ImageNet-C [24], these corrupted patches only yield a marginal degradation in terms of confidence score. Thus, how to construct patch corruptions that greatly mislead the model and can be produced very efficiently becomes a critical problem. Interestingly, if we totally discard these patches and occlude them with random noise, the model becomes very vulnerable again, e.g., with the confidence score dropping from 63.8% to 17.3% in Figure 1.
Figure 2. Sensitivity to patch-based corruptions in terms of attention stability (left) and accuracy (right). Left: We randomly occlude 10% of patches with noise and show the attention maps of different layers in RVT-Ti [34] and our RSPC-RVT-Ti. Following [13], we choose the center patch (red square) as the query and average the attention scores across all the attention heads for visualization. For this example, we also compute the average cosine similarity (Cos-Sim) between the clean and corrupted attentions across different layers. Clearly, our RSPC model yields more stable attention maps. Right: On ImageNet, we plot the distribution of accuracy on the occluded examples with different occlusion masks. Here, we randomly sample 100 different masks for each image. We show that RVT is very sensitive to the patch-based corruptions and has a much larger variance of accuracy than our RSPC model.
More critically, these corrupted patches also have a significant impact on the attention maps across layers, as shown in Figure 2 (left). We suspect this to be the case due to the global interactions across tokens in the attention mechanism – even when occluding only a few patches. Quantitatively, this can be captured by computing the average cosine similarity between the attentions on clean and corrupted images across layers, denoted by Cos-Sim. For the considered example in Figure 2, the Cos-Sim of only 0.43 for RVT indicates a significant shift in attention – a phenomenon that we can observe across the entire ImageNet dataset (see Figure 5). In fact, these attention shifts also have a direct and severe impact on accuracy: in Figure 2 (right), we randomly sample 100 occlusion masks for each image and show the distribution of accuracy (blue box). Unsurprisingly, the accuracy decreases significantly when facing patch-based corruptions, compared to the original examples (blue star). These experiments highlight the need for an inherently more robust attention mechanism in order to improve the overall robustness of transformers.
We address this problem by finding particularly vulnerable patches to construct patch-based corruptions and stabilizing the intermediate attention layers against them. Since we use random noise to occlude patches, we move the focus from how to perturb patch content to finding which patch should be occluded. As shown in Figure 2 (right), with a fixed occlusion ratio, the accuracy varies a lot when occluding different patches (e.g., ranging from 60% to 75% in the blue box). Since we seek to reduce the sensitivity to patch corruptions, occluding the most vulnerable (often very important) patches and explicitly reducing their impact should bring the largest robustness improvement. Inspired by this, we seek to identify the most vulnerable patches to construct patch-based corruptions and then align the intermediate features to make the attention less sensitive to corruptions in individual patches. In practice, we are able to reduce the impact of patch-based corruptions significantly, improving the Cos-Sim from 0.43 (for RVT-Ti) to 0.91 in Figure 2 (left). This is also directly observed in the visual results, where these corruptions have little impact on the intermediate attention maps of our robust model. The stable attention mechanism also greatly improves the robustness of transformers. As shown in Figure 2 (right), compared with RVT, we obtain significantly higher accuracy when facing examples with different occlusion masks (red box), alongside improved overall accuracy and robustness on full images (red star).
Contributions: In this paper, we study the sensitivity of transformers to patch corruptions and explicitly stabilize models against them to improve robustness. Here, we make three key contributions: 1) We propose a new training method that improves robustness by reducing sensitivity to patch corruptions (RSPC). To this end, we first construct effective patch-based corruptions and then reduce the sensitivity to them by aligning the intermediate features.
2) When constructing patch corruptions, we develop a patch corruption model to find particularly vulnerable patches that severely distract intermediate attention layers. In practice, the corruption model is trained adversarially to the classification model, which, however, is essentially different from adversarial training methods. To be specific, we only learn which patch should be corrupted instead of pixel-level perturbations. 3) In experiments, we demonstrate that the robustness improvement against patch corruptions (shown in Figure 2 (right)) generalizes well to diverse architectures on various robustness benchmarks, including ImageNet-A/C/P [24,60]. More critically, we show, both qualitatively and quantitatively, that these improvements stem from the more stable attention mechanism across layers. It is worth noting that, compared with adversarial training methods, RSPC obtains a better tradeoff between accuracy and corruption robustness while keeping a significantly lower training cost [57] (see Figure 7).
Figure 3. Overview of the proposed reducing sensitivity to patch corruptions (RSPC) training procedure. We present a patch corruption model to produce patch-based corruptions and align the features of each self-attention block between the clean and corrupted examples (the alignment loss is highlighted by the green box). Unlike existing methods, we select the patches to be occluded/corrupted in an adversarial way, i.e., corrupting the most vulnerable patches that would greatly distract intermediate attention layers.
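A compact sketch of the occlusion-plus-alignment idea follows. It assumes a model that exposes per-block intermediate features and uses cosine similarity as the alignment objective; the adversarially trained patch corruption model that actually selects `patch_mask` is left out, and the interfaces named in the comments are assumptions.

```python
import torch

def occlude_patches(images, patch_mask, patch_size=16):
    """Replace the selected patches of `images` with random noise.
    images: (B, C, H, W); patch_mask: (B, Hp, Wp) booleans over the patch grid."""
    noise = torch.rand_like(images)
    mask = patch_mask.repeat_interleave(patch_size, dim=1)
    mask = mask.repeat_interleave(patch_size, dim=2).unsqueeze(1)   # (B, 1, H, W)
    return torch.where(mask, noise, images)

def alignment_loss(clean_feats, corrupt_feats):
    """Feature-alignment objective (a simple stand-in for RSPC's alignment loss):
    keep each block's features on the corrupted image close to those on the
    clean image, measured here by cosine similarity."""
    loss = 0.0
    for f_c, f_x in zip(clean_feats, corrupt_feats):
        loss = loss + (1 - torch.nn.functional.cosine_similarity(
            f_c.flatten(1), f_x.flatten(1), dim=1)).mean()
    return loss / len(clean_feats)

# Training sketch (assumed interfaces): a call like model(x, return_feats=True)
# would expose per-block features; the corruption model (not shown) picks
# patch_mask adversarially, i.e., to maximize the alignment loss.
```

The adversarial game is thus restricted to a discrete choice of which patches to occlude, which is what keeps the training cost far below pixel-level adversarial training.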
He_MSF_Motion-Guided_Sequential_Fusion_for_Efficient_3D_Object_Detection_From_CVPR_2023
Abstract Point cloud sequences are commonly used to accurately detect 3D objects in applications such as autonomous driv-ing. Current top-performing multi-frame detectors mostly follow a Detect-and-Fuse framework, which extracts fea-tures from each frame of the sequence and fuses them to detect the objects in the current frame. However, this inevitably leads to redundant computation since adjacent frames are highly correlated. In this paper, we propose an efficient Motion-guided Sequential Fusion (MSF) method, which exploits the continuity of object motion to mine useful sequential contexts for object detection in the current frame. We first generate 3D proposals on the current frame and propagate them to preceding frames based on the estimated velocities. The points-of-interest are then pooled from the sequence and encoded as proposal features. A novel Bidi-rectional Feature Aggregation (BiFA) module is further pro-posed to facilitate the interactions of proposal features across frames. Besides, we optimize the point cloud pool-ing by a voxel-based sampling technique so that millions of points can be processed in several milliseconds. The pro-posed MSF method achieves not only better efficiency than other multi-frame detectors but also leading accuracy, with 83.12% and 78.30% mAP on the LEVEL1 and LEVEL2 test sets of Waymo Open Dataset, respectively. Codes can be found at https://github.com/skyhehe123/MSF .
1. Introduction
3D object detection [1, 2, 6, 7, 9, 14, 21, 27–29, 36] is one of the key technologies in autonomous driving, helping the vehicle better understand the surrounding environment and make critical decisions in downstream tasks. As an indispensable sensing device in autonomous driving systems, LiDAR collects 3D measurements of the scene in the form of point clouds.
Figure 1. (a) The "Detect-and-Fuse" framework extracts features from each frame of the sequence and then fuses them, while (b) our proposed "Motion-guided Sequential Fusion" (MSF) method generates proposals on the current frame and propagates them to preceding frames to explore useful contexts in the sequence.
However, LiDAR can only produce a partial view of the scene at a time, and the sparse and incomplete representation of point clouds brings considerable challenges to the 3D object detection task. In practice, the LiDAR sensor continuously senses the environment and produces a sequence of point cloud frames over time. The multi-frame data can provide a denser representation of the scene as the vehicle moves. Therefore, how to fuse these multi-frame point cloud data for more accurate object detection is worth deep investigation. Recent works mainly focus on deep feature fusion with multi-frame point clouds, for example, aggregating dense birds-eye-view features via Transformer models [31, 37], or passing the voxel features to LSTM [8] or GRU [32] modules for temporal modeling. Some top-performing detectors [2, 18] focus on fusing proposal features, where a tracker is employed to associate the 3D proposals across frames, and a region-based network is applied to refine the current proposals by incorporating contextual features from the proposal trajectories. These approaches generally follow a "Detect-and-Fuse" framework, as shown in Fig. 1(a), where the model needs to process each frame of the sequence, and the predictions on the current frame rely on the results of preceding frames. Since online detection is a causal system, such a detection framework might cause significant delay if the network is still processing a preceding frame when the current frame is loaded. In this paper, we propose an efficient Motion-guided Sequential Fusion (MSF) method, as shown in Fig. 1(b), which leverages the continuity of object motion to extract useful contexts from point cloud sequences and improve the detection of the current frame. Specifically, considering that the motions of objects are relatively smooth in a short sequence, we propagate the proposals generated on the current frame to preceding frames based on the velocities of the objects, and sample reliable points-of-interest from the sequence. In this way, we bypass extracting features on each frame of the sequence, which reduces the redundant computation and the reliance on the results of preceding frames. The sampled points are then transformed into proposal features via two encoding schemes and passed to a region-based network for further refinement.
Specifically, a self-attention module is employed to enhance the interaction of point features within proposals, while a novel Bidirectional Feature Aggregation (BiFA) module is proposed to enforce the information ex-change between proposals across frames. The refined pro-posal features consequently capture both spatial details and long-term dependencies over the sequence, leading to more accurate bounding-box prediction. It is found that the existing point cloud pooling meth-ods [2, 19, 23, 30] are inefficient, taking more than 40 mil-liseconds when processing millions of points from sequen-tial point clouds. We find that the major bottleneck lies in the heavy computation of pair-wise distances between n points and mproposals, which costs O(nm)complexity. To further improve the efficiency, we optimize the point cloud pooling with a voxel sampling technique. The improved pooling operation is of linear complexity and can process millions of points in several milliseconds, more than eight times faster than the original method. Overall, our contributions can be summarized as follows. •An efficient Motion-guided Sequential Fusion (MSF) method is proposed to fuse multi-frame point clouds at region level by propagating the proposals of current frame to preceding frames based on the object motions. •A novel Bidirectional Feature Aggregation (BiFA) module is introduced to facilitate the interactions of proposal features across frames. •The point cloud pooling method is optimized with a voxel-based sampling technique, significantly reduc-ing the runtime on large-scale point cloud sequence. The proposed MSF method is validated on the challeng-ing Waymo Open Dataset, and it achieves leading accuracy on the LEVEL1 and LEVEL2 test sets with fast speed.2. Related Work Single-frame 3D object detection. Recent research on single-frame 3D object detection is mainly focused on rep-resentation learning on point clouds. V oxel-based detec-tors [28, 33, 36] rasterize the point cloud into volumetric representation, followed by 3D CNN to extract dense fea-tures. Some works convert point clouds into 2D birds-eye-view [9] or range view [4, 11] representations, and process them with more efficient 2D CNN. Following PointNet++ [17], point-based methods [16, 23, 29, 30, 34] directly pro-cess point clouds in continuous space, and extract highly-semantic features through a series of downsampling and set abstraction layers. V oxel-point approaches [7, 12, 21] em-ploy a hybrid representation, where the flexible conversion between voxel-based and point-based representations are explored, leading to better balance between efficiency and performance. Our method employs a high-quality voxel-based detector CenterPoint [33] as the proposal generation network to predict 3D proposals of current frame and their motions. We then employ an efficient region-based network to further refine these proposals by mining sequential points from point cloud sequence. 3D object detection from point cloud sequence. Multi-frame point clouds provide richer 3D information of the environment. While some single-frame detectors [3, 33] can be adapted to point cloud sequence by simply concate-nating multi-frame point cloud as the input, the improve-ments are typically marginal and the performance can be even worse when encountering moving objects. Fast-and-furious [13] explores an intermediate fusion to align multi-frame point cloud by concatenating the hidden feature maps of the backbone network. 
However, it still suffers from the misalignment brought by fast-moving objects in long sequences. Recent approaches [8, 32] demonstrate that an in-depth fusion can be achieved with recurrent networks. Unfortunately, the use of a single memory to store and update features across frames creates a potential bottleneck. To resolve such limitations, 3D-MAN [31] first attempts to employ the attention mechanism to align different views of 3D objects and then exploits a memory bank to store and aggregate multi-frame features for long sequences. Recently, Offboard3D [18] and MPPNet [2] have greatly improved the detection performance: they associate the detected boxes from each frame of the sequence as proposal trajectories, and extract high-quality proposal features by sampling sequential point clouds on the trajectories. Our MSF method also samples points from the sequence, but it differs from those methods with proposal trajectories [2, 18] in that we only generate proposals on the current frame and propagate them to explore features in preceding frames. This makes our method much more efficient and favorable to online detection systems.
Figure 2. The overall architecture of our proposed Motion-guided Sequential Fusion (MSF) approach. Taking a point cloud sequence as input, MSF employs a region proposal network to generate proposals on the current frame and sample points-of-interest from the sequence using motion-guided sequential pooling. The sampled points are encoded as high-dimensional proposal features and passed to a region-based network, where three learning blocks are consequently applied to refine the proposal features. A Bidirectional Feature Aggregation (BiFA) module is introduced in the region-based network to facilitate the interactions of proposal features across frames. The red and blue cubes represent single-point features from the current frame and preceding frame, respectively.
Table 1. Recall rates of foreground points using the per-frame detection based proposal trajectory method [2] and our motion-guided proposal generation method. We employ CenterPoint [33] as the proposal generator and evaluate on the Waymo validation split.
                  4-frame   8-frame   16-frame
Trajectory [2]     93.2%     92.8%     90.5%
Ours (γ = 1.0)     92.3%     87.5%     78.3%
Ours (γ = 1.1)     93.5%     91.7%     87.3%
3. Motion-guided Sequential Fusion
This section presents our Motion-guided Sequential Fusion (MSF) approach for efficient 3D object detection on point cloud sequences. The overall architecture of MSF is illustrated in Fig. 2. In Sec. 3.1, we describe the details of motion-guided sequential pooling, which effectively mines reliable sequential points-of-interest based on the proposals of the current frame. In Sec. 3.2, we present the region-based network, including the formulation of proposal features and a novel bidirectional feature aggregation module. In Sec. 3.3, we demonstrate a voxel-based sampling technique to accelerate the current point cloud pooling method.
3.1. Motion-guided Sequential Pooling
Current multi-frame detection methods [2,18] mostly explore proposal trajectories to generate high-quality point cloud representations.
However, such a scheme relies on frame-by-frame proposal generation, which is not suitable for online detection systems. We observe that in a point cloud sequence, although objects move at different speeds, their motions are relatively smooth. That is to say, we can estimate their motion displacements and roughly localize their positions in preceding frames. To this end, given a point cloud sequence $\{I_t\}_{t=1}^{T}$, we propose to propagate the proposals generated on the current frame $I_T$ to the preceding frames $\{I_t\}_{t=1}^{T-1}$ based on their estimated velocities. Since moving objects may slightly deviate from the estimated positions in the preceding frames, we sample the points-of-interest in a cylindrical region of each proposal and gradually increase the diameter of the region by a factor $\gamma$ as the proposal propagates. Let us denote a proposal of the current frame as $(p_x, p_y, p_z, w, l, h, \theta)$, where $(p_x, p_y, p_z)$ denotes its center location, and $w$, $l$, $h$ and $\theta$ denote its width, length, height and yaw angle, respectively. Suppose that the object has a unit-time velocity $\vec{v} = (v_x, v_y)$. The corresponding points-of-interest $(x_t, y_t, z_t)$ sampled from frame $t$ will satisfy the following condition:
$(x_t - p_x + v_x \cdot \Delta t)^2 + (y_t - p_y + v_y \cdot \Delta t)^2 < (d_t / 2)^2$,   (1)
where $\Delta t = T - t$ is the time offset of frame $t$ and $d_t = \sqrt{w^2 + l^2} \cdot \gamma^{\Delta t + 1}$ is the diameter of the cylindrical region. In our preliminary study, we compare the overall recall rates of foreground points between our motion-guided proposal generation and the per-frame proposal trajectory method [2] (see Table 1).
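Eq. (1) amounts to a simple point-in-cylinder test in the xy-plane; a sketch is given below (illustrative only, ignoring any additional z-extent filtering or scoring steps the full method may apply).

```python
import numpy as np

def points_in_propagated_cylinder(points, proposal, velocity, dt, gamma=1.1):
    """Select points of frame T - dt that fall inside the proposal's cylindrical
    region after back-propagating its center by the estimated velocity (Eq. 1).

    points:   (N, 3) array of (x, y, z) coordinates from the preceding frame
    proposal: (px, py, pz, w, l, h, theta) box parameters of the current frame T
    velocity: (vx, vy) estimated unit-time velocity of the object
    dt:       time offset T - t of the preceding frame
    """
    px, py, _, w, l, _, _ = proposal
    vx, vy = velocity
    # Diameter grows with every propagation step: d_t = sqrt(w^2 + l^2) * gamma^(dt + 1)
    d_t = np.sqrt(w ** 2 + l ** 2) * gamma ** (dt + 1)
    dx = points[:, 0] - px + vx * dt
    dy = points[:, 1] - py + vy * dt
    return points[dx ** 2 + dy ** 2 < (d_t / 2.0) ** 2]
```

Enlarging the diameter with γ per step is what compensates for velocity estimation errors that accumulate the further back the proposal is propagated.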
Bernasconi_Kernel_Aware_Resampler_CVPR_2023
Abstract Deep learning based methods for super-resolution have become state-of-the-art and outperform traditional ap-proaches by a significant margin. From the initial mod-els designed for fixed integer scaling factors (e.g. ×2 or ×4), efforts were made to explore different directions such as modeling blur kernels or addressing non-integer scaling factors. However, existing works do not provide a sound framework to handle them jointly. In this paper we pro-pose a framework for generic image resampling that not only addresses all the above mentioned issues but extends the sets of possible transforms from upscaling to generic transforms. A key aspect to unlock these capabilities is the faithful modeling of image warping and changes of the sam-pling rate during the training data preparation. This allows a localized representation of the implicit image degradation that takes into account the reconstruction kernel, the lo-cal geometric distortion and the anti-aliasing kernel. Using this spatially variant degradation map as conditioning for our resampling model, we can address with the same model both global transformations, such as upscaling or rotation, and locally varying transformations such lens distortion or undistortion. Another important contribution is the auto-matic estimation of the degradation map in this more com-plex resampling setting (i.e. blind image resampling). Fi-nally, we show that state-of-the-art results can be achieved by predicting kernels to apply on the input image instead of direct color prediction. This renders our model applicable for different types of data not seen during the training such as normals.
1. Introduction
Thanks to recent advances in deep learning based super-resolution, which allow impressive high-frequency details to be inferred from low-resolution inputs, it has become possible to bridge the gap between content and display resolution without noticeable degradation in quality. This is beneficial in different contexts and, among other things, has enabled new visual effects production workflows to operate in 2K while still ultimately delivering at 4K resolution by performing a 2x upscale just before final delivery. However, super-resolution is not the only image transformation that occurs in typical visual effects pipelines, and it is very common to perform additional tasks such as image rectification, retargeting, lens (un)distortion or image warping. All these transformations require more complex image resampling solutions. Even the simple case of lens undistortion corresponds to a more complex type of resampling — which might locally upscale or downscale — that existing super-resolution methods do not support. As a result, one has to fall back to traditional interpolation-based resampling approaches, which can result in a noticeable and unnecessary loss in quality. To the best of our knowledge, there is only one learning-based method that considers more complex resamplings [15]. However, this approach has two drawbacks: on the one hand, it is not optimally suited for real-world content that might suffer from different kinds of implicit degradations caused by different blur kernels. On the other hand, the solution seems more complex than needed due to its multi-scale warping and blending strategy. In this paper we propose a framework for generic neural image resampling that is lean and better applicable to real-world scenarios through handling implicit degradations. To achieve this, we build upon fundamental concepts of signal processing and decompose the resampling process into different stages, namely reconstruction, geometric distortion, and anti-aliasing. With this, we are able to create proper training examples to better handle and interactively control the spatially variant degradation maps that are expected in image resampling. In addition, we design our approach to predict kernels instead of directly outputting color values, which makes the model more robust and enables consistent resampling of other channels, such as normals. Figure 1 illustrates this with a complex example: the transformation consists of image rectification and an increase in image resolution. This is an image that was not downscaled and whose blur kernel is unknown. Our method automatically estimates the degradation map and produces sharper results than existing methods. Additionally, it is possible to directly create outputs at different sharpness levels. This is the first time such applications are possible in image resampling. Finally, we show that our approach beats the state-of-the-art despite its lean design, allowing higher quality processing in parts of the visual effects pipeline that until now could not benefit from advances in deep learning.
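For reference, a classical (non-learned) resampler following the same reconstruction / geometric distortion / anti-aliasing decomposition could look like the sketch below. It only illustrates the stages that the proposed kernel-aware resampler replaces with learned, spatially variant kernels; the `inverse_warp` interface and the fixed Gaussian prefilter are assumptions for the sake of the example.

```python
import numpy as np
from scipy import ndimage

def classical_resample(img, inverse_warp, out_shape, aa_sigma=0.8):
    """Classical resampling pipeline: reconstruct, geometrically transform, anti-alias.

    img:          (H, W) grayscale image
    inverse_warp: function mapping output (y, x) grids to source coordinates
    out_shape:    (H_out, W_out) of the resampled result
    aa_sigma:     strength of the anti-aliasing filter applied on the output grid
    """
    ys, xs = np.meshgrid(np.arange(out_shape[0]), np.arange(out_shape[1]),
                         indexing="ij")
    src_y, src_x = inverse_warp(ys, xs)                       # geometric distortion
    # Reconstruction: cubic interpolation of the source signal at the warped positions.
    out = ndimage.map_coordinates(img, [src_y, src_x], order=3, mode="nearest")
    # Anti-aliasing: band-limit the result (crude; ideally applied before sampling
    # with a locally varying kernel derived from the warp's local scale change).
    return ndimage.gaussian_filter(out, aa_sigma)

# Example: a 2x downscale expressed as an inverse warp from output to source coords.
# down2 = lambda y, x: (2.0 * y, 2.0 * x)
```

Faithfully modeling these three stages during training-data preparation is what lets the learned model represent the implicit, spatially variant degradation instead of assuming a single global blur kernel.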
Huang_Not_All_Image_Regions_Matter_Masked_Vector_Quantization_for_Autoregressive_CVPR_2023
Abstract Existing autoregressive models follow the two-stage gen-eration paradigm that first learns a codebook in the la-tent space for image reconstruction and then completes the image generation autoregressively based on the learned codebook. However, existing codebook learning simply models all local region information of images without dis-tinguishing their different perceptual importance, which brings redundancy in the learned codebook that not only limits the next stage’s autoregressive model’s ability to model important structure but also results in high train-ing cost and slow generation speed. In this study, we bor-row the idea of importance perception from classical im-age coding theory and propose a novel two-stage frame-work, which consists of Masked Quantization VAE (MQ-VAE) and Stackformer, to relieve the model from model-ing redundancy. Specifically, MQ-VAE incorporates an adaptive mask module for masking redundant region fea-tures before quantization and an adaptive de-mask mod-ule for recovering the original grid image feature map to faithfully reconstruct the original images after quantiza-tion. Then, Stackformer learns to predict the combination of the next code and its position in the feature map. Com-prehensive experiments on various image generation vali-date our effectiveness and efficiency. Code will be released athttps://github.com/CrossmodalGroup/ MaskedVectorQuantization .
1. Introduction Deep generative models of images have received signif-icant improvements over the past few years and broadly fall into two categories: likelihood-based models, which include V AEs [24], flow-based [36], diffusion models [17] and autoregressive models [40], and generative adversarial *Zhendong Mao is the corresponding author. Figure 1. Illustration of our motivation. (a) Existing works model all local regions without distinguishing their perceptual impor-tance in stage 1, which not only brings redundancy ( e.g., the textu-ral regions like the background) in the learned codebook but also make the autoregressive models overly focus on modeling this re-dundancy and hinder other important structural regions modeling. (b) The codebook learning in our method only includes the im-portant regions, e.g., the structural regions like corners and edges, since other unimportant ones can be restored even if missing, and thus autoregressive model could focus on modeling these impor-tant regions in stage 2 and results in better generation quality. networks (GANs) [14], which use discriminator networks to distinguish samples from generator networks and real ex-amples. Compared with GANs, likelihood-based models’ training objective, i.e., the negative log-likelihood (NLL) or its upper bound, incentives learning the full data distribution and allows for detecting overfitting. Among the likelihood-based models, autoregressive models have recently attracted increasing attention for their impressive modeling ability and scalability. Recent autore-gressive image generation [10, 12, 13, 28, 28, 34, 35, 37, 39] follows the two-stage generation paradigm, i.e., the first stage learns a codebook in the latent space for image recon-struction and the second stage completes the image genera-tion in the raster-scan [13] order by autoregressive models This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 2002 based on the learned codebook. Since codebook learning in the first stage defines the discrete image representation for the next autoregressive modeling, a high-quality code-book is the key to generate high-quality images. Several recent works focus on improving the codebook learning in the first stage, e.g., VQGAN [13] introduces adversarial loss and perceptual loss. ViT-VQGAN [42] introduces a more expressive transformer backbone. RQ-V AE [28] introduces the residual quantization to reduce the resolution of the la-tent space. In general, the essence of existing codebook learning is the modeling of all local region information ( i.e., an8×8or16×16patch) of images in the dataset, without distinguishing their different perceptual importance. In this study, we point out that existing codebook learn-ing exists gaps with classical image coding theory [20, 25, 26], the basic idea of which is to remove redundant infor-mation by perceiving the importance of different regions in images. The image coding theory reveals that an ideal image coding method should only encode images’ percep-tually important regions ( i.e., which cannot be restored if missing) while discarding the unimportant ones ( i.e., which can be restored by other image regions even if missing). 
The neglect of considering such perceptual importance in exist-ing works poses problems in two aspects, as illustrated in Figure 1(a): (1) the existence of this large amount of repet-itive and redundant information brings redundancy to the learned codebook, which further makes the autoregressive model in the next stage overly focus on modeling this redun-dancy while overlooking other important regions and finally degrades generation quality. (2) the redundancy makes the autoregressive model need to predict more (redundant) quantized codes to generate images, which significantly in-creases the training cost and decreases the generating speed. Although the effectiveness and efficiency of image coding theory have been widely validated, how to introduce this idea into codebook learning remains unexplored. The key of applying image coding theory to codebook learning is to distinguish important image parts from unim-portant ones correctly. Considering that the essential dif-ference between these two sets lies in whether they can be restored if missing, we found that this distinction can be re-alized through the mask mechanism, i.e., the masked part is important if it cannot be faithfully restored, and otherwise unimportant. Based on the above observation, we thereby propose a novel two-stage generation paradigm upon the mask mechanism to relieve the model from modeling redun-dant information. Specifically, we first propose a Masked Quantization VAE (MQ-VAE) with two novel modules, i.e., anadaptive mask module for adaptively masking redun-dant region features before quantization, and an adaptive de-mask module for adaptively recovering the original grid image feature map to faithfully reconstruct original images after quantization. As for the adaptive mask module , it in-corporates a lightweight content-aware scoring network that learns to measure the importance of each image region fea-ture. The features are then ranked by the importance scores and only a subset of high-scored features will be quantized further. As for the adaptive de-mask module , we design a direction-constrained self-attention to encourage the in-formation flow from the unmasked regions to the masked regions while blocking the reverse, which aims to infer the original masked region information based on unmasked ones. Thanks to the adaptive mask and de-mask mecha-nism, our MQ-V AE removes the negative effects of redun-dant image regions and also shortens the sequence length to achieve both effectiveness and efficiency. Moreover, since different images have different impor-tant regions, the position of quantized codes in the feature map also dynamically changed. Therefore, we further pro-pose Stackformer for learning to predict the combination of both codes and their corresponding positions. Concretely, the proposed Stackformer stacks a Code-Transformer and a Position-Transformer, where the Code-Transformer learns to predict the next code based on all previous codes and their positions, and the Position-Transformer learns to pre-dict the next code’s position based on all previous codes’ positions and current code. With our method, as shown in Figure 1(b), the codebook learning only includes the important regions, e.g., the struc-tural regions, since unimportant ones like the background can be restored even if missing. And therefore the autore-gressive model in the second stage could focus on modeling these important regions and brings better generation quality. 
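The adaptive mask module described above can be pictured with a short, self-contained sketch. The code below is not the authors' implementation: the scoring network, the keep ratio, and the plain nearest-neighbour quantization (without commitment loss or straight-through gradients) are simplifying assumptions used only to show the score–rank–select–quantize flow, and the fact that the kept positions must be recorded for the de-mask stage and for Stackformer.

```python
import torch
import torch.nn as nn

class AdaptiveMaskQuantizer(nn.Module):
    """Minimal sketch: score region features, keep only the top-k highest-scoring ones,
    and vector-quantize that subset (VQ training details are intentionally omitted)."""
    def __init__(self, dim=256, codebook_size=1024, keep_ratio=0.5):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(dim, dim // 4), nn.ReLU(), nn.Linear(dim // 4, 1))
        self.codebook = nn.Embedding(codebook_size, dim)
        self.keep_ratio = keep_ratio

    def forward(self, feats):                       # feats: (B, N, C) flattened grid features
        B, N, C = feats.shape
        scores = self.scorer(feats).squeeze(-1)     # (B, N) learned importance per region
        k = max(1, int(N * self.keep_ratio))
        _, positions = scores.topk(k, dim=1)        # positions of kept (important) regions
        kept = torch.gather(feats, 1, positions.unsqueeze(-1).expand(-1, -1, C))  # (B, k, C)
        # nearest-neighbour quantization of the kept subset only
        d = torch.cdist(kept.reshape(-1, C), self.codebook.weight)                # (B*k, K)
        codes = d.argmin(-1).view(B, k)
        quantized = self.codebook(codes)            # (B, k, C)
        return quantized, codes, positions          # positions feed the de-mask / Stackformer
```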
In a nutshell, we summarize our main contributions as follows. Conceptually, we point out that existing codebook learning fails to distinguish the perceptual importance of different image regions, which introduces redundancy that degrades generation quality and decreases generation speed. Technically, (i) we propose MQ-VAE with a novel adaptive mask module to mask redundant region features before quantization and a novel adaptive de-mask module to recover the original feature map after quantization; (ii) we propose a novel Stackformer to predict the combination of both codes and their corresponding positions. Experimentally, comprehensive experiments on various generation tasks validate our effectiveness and efficiency, i.e., we achieve 8.1%, 2.3%, and 18.6% FID improvements over the unconditional, class-conditional, and text-conditional state of the art at million-level parameters, with faster generation speed than existing autoregressive models.
Cho_Transformer-Based_Unified_Recognition_of_Two_Hands_Manipulating_Objects_CVPR_2023
Abstract Understanding hand-object interactions from an egocentric video has received great attention recently. So far, most approaches rely on convolutional neural network (CNN) features combined with temporal encoding via a long short-term memory (LSTM) or graph convolution network (GCN) to provide a unified understanding of two hands, an object, and their interactions. In this paper, we propose a Transformer-based unified framework that provides a better understanding of two hands manipulating objects. In our framework, the whole image depicting two hands, an object, and their interactions is taken as input, and three kinds of information are jointly estimated from each frame: the poses of the two hands, the pose of the object, and the object type. Afterwards, the action class defined by the hand-object interactions is predicted over the entire video based on the estimated information combined with a contact map that encodes the interaction between the two hands and the object. Experiments on the H2O and FPHA benchmark datasets demonstrate the superiority of our method, which achieves state-of-the-art accuracy. Ablative studies further demonstrate the effectiveness of each proposed module.
1. Introduction Estimating poses and actions of an egocentric video involving two hands and an object is an important factor of various appli-cations such as augmented reality (AR), virtual reality (VR) and human computer interaction (HCI). Previously, there has been much progress in the hand pose estimation [3 –5,11,12,18,31,33, 38,43,53,61] and in the object 6D pose estimation [10,26,28,36, 51,57,58] separately from each other. Recently, there has been a surge in demand for understanding hand-object interactions, leading to the emergence of methods for joint pose estimation of hands and objects [22,23,39]. However, most methods focus on the separate problem either for the pose estimation [9,13,20,39] or for the interaction recognition [6,42,48]. Furthermore, most approaches developed the pose estimation method based on the already cropped tight bounding boxes of hands and objects which are not realistic. Therefore, the pose estimation accuracy open chips grab cappuccinoFigure 1. Example results of pose estimation and interaction recognition for two hands manipulating objects. Our method first estimates hand poses, object poses and object types. Then, interaction class is estimated using estimated information combined with contact maps. (Row 1) example input video vforopen chips andgrap cappuccino ; (Row 2) contact maps for left hand mLeft, object mO, and right hand mRight; (Row 3) estimated 3D poses of hands h, a 3D object pose oand the estimated interaction class a. is frequently affected by the performance of the detector. To tackle the issue, Tekin et al. [50] proposed an unified framework that estimates the 3D hand pose, the object 6D pose and their action classes. They developed the pose estimator extending the architecture of [45] towards 3D space and recognize actions using estimated hand and object poses. The long short-term memory (LSTM) [25]-based architecture is further used to map the information towards the action classes. Kwon et al. [32] further extended the framework towards involving two hands rather than one hand: They estimated 3D poses of two hands, 6D pose of an object and their action classes. The proposed method involves the graph convolutional network (GCN) to model the hand-object interaction considering the geometric relation between hand and object. In both works, estimated hand and object poses (i.e. skeletons) were used as the cue to the interaction recognition. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 4769 In this paper, we propose the Transformer-based unified framework (H2OTR) to estimate poses of two hands, object pose, object types and interaction classes between hands and ob-ject. We construct the Transformer-based architecture similarly to [7, 60] and it is able to predict the poses from each frame without hand/object detectors or any additional post-processing such as non-maximal suppression (NMS). It also estimates hand-object interaction classes from the entire videos. We additionally exploit the contact map between hand and object meshes by recovering hand meshes from hand poses via inverse kinematics. We demonstrated that the contact map expresses the explicit relational information between hands and object and is used as the crucial cue for the hand-object interaction recognition task. 
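As a rough illustration of how a contact map can be derived once hand and object meshes are available, the snippet below computes a soft per-vertex contact value from vertex distances. This is only a plausible sketch: the actual definition used in the paper (resolution, thresholds, whether it is defined on the hands, the object, or both) may differ, and the temperature tau is an assumed parameter.

```python
import torch

def contact_map(hand_verts, obj_verts, tau=0.01):
    """hand_verts: (Nh, 3) and obj_verts: (No, 3) mesh vertices in metres.
    Returns a per-hand-vertex soft contact value in (0, 1]; vertices very close
    to the object surface get values near 1."""
    d = torch.cdist(hand_verts, obj_verts)      # (Nh, No) pairwise distances
    nearest = d.min(dim=1).values               # distance to the closest object vertex
    return torch.exp(-nearest / tau)
```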
We summarize our contributions in this paper as follows:
• We propose a Transformer-based unified framework for estimating the poses of two hands, object poses, object types, and hand-object interaction classes in a single inference step.
• We introduce a novel interaction recognition method that utilizes a contact map. To the best of our knowledge, this is the first work to exploit the contact map as a cue for interaction recognition.
• We achieve state-of-the-art performance on pose estimation and interaction recognition tasks using the H2O [32] and FPHA [18] datasets.
Ando_RangeViT_Towards_Vision_Transformers_for_3D_Semantic_Segmentation_in_Autonomous_CVPR_2023
Abstract Casting semantic segmentation of outdoor LiDAR point clouds as a 2D problem, e.g., via range projection, is an effective and popular approach. These projection-based methods usually benefit from fast computations and, when combined with techniques that use other point cloud representations, achieve state-of-the-art results. Today, projection-based methods leverage 2D CNNs, but recent advances in computer vision show that vision transformers (ViTs) have achieved state-of-the-art results on many image-based benchmarks. In this work, we question whether projection-based methods for 3D semantic segmentation can benefit from these latest improvements on ViTs. We answer positively, but only after combining them with three key ingredients: (a) ViTs are notoriously hard to train and require a lot of training data to learn powerful representations. By preserving the same backbone architecture as for RGB images, we can exploit the knowledge from long training on large image collections that are much cheaper to acquire and annotate than point clouds. We reach our best results with ViTs pre-trained on large image datasets. (b) We compensate for ViTs' lack of inductive bias by substituting a tailored convolutional stem for the classical linear embedding layer. (c) We refine pixel-wise predictions with a convolutional decoder and a skip connection from the convolutional stem to combine the low-level but fine-grained features of the convolutional stem with the high-level but coarse predictions of the ViT encoder. With these ingredients, we show that our method, called RangeViT, outperforms existing projection-based methods on nuScenes and SemanticKITTI. The code is available at https://github.com/valeoai/rangevit.
1. Introduction Semantic segmentation of LiDAR point clouds permits vehicles to perceive their surrounding 3D environment in-*This project was done during an internship at Valeo.ai. RGB Images Point Clouds Stem Decoder ViT Encoder LiDAR Segmentation Copying RangeViT Pre-training Fine-tuning ViT Encoder Image classification Image segmentation Self-supervised learning Stem Decoder Figure 1. Exploiting vision transformer (ViT) architectures and weights for LiDAR point cloud semantic segmentation. We leverage the flexibility of transformer-based architectures to re-purpose them with minimal changes for processing sparse point clouds in autonomous driving tasks. The common ViT backbone across modalities allows to effectively transfer weights pre-trained on large image repositories towards improving point cloud seg-mentation performance with fine-tuning. dependently of the lighting condition, providing useful in-formation to build safe and reliable vehicles. A common approach to segment large scale LiDAR point clouds is to project the points on a 2D surface and then to use regular CNNs, originally designed for images, to process the pro-jected point clouds [1, 11, 26, 36, 60, 66]. Recently, Vision Transformers (ViTs) were introduced as an alternative to convolutional neural networks for processing images [14]: images are divided into patches which are linearly embed-ded into a high-dimensional space to create a sequence of visual tokens; these tokens are then consumed by a pure transformer architecture [51] to output deep visual repre-sentations of each token. Despite the absence of almost any domain-specific inductive bias apart from the image to-kenization process, ViTs have a strong representation learn-ing capacity [14] and achieve excellent results on various This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 5240 image perception tasks, such as image classification [14], object detection [8] or semantic segmentation [45]. Inspired by this success of ViTs for image understand-ing, we propose to implement projection-based LiDAR se-mantic segmentation with a pure vision transformer archi-tecture at its core. Our goals are threefold in doing so: (1) Exploit the strong representation learning capacity of vision transformer for LiDAR semantic segmentation; (2) Work towards unifying network architectures used for process-ing LiDAR point clouds or images so that any advance in one domain benefits to both; (3) Show that one can lever-age ViTs pre-trained on large-size natural image datasets for LiDAR point cloud segmentation. The last goal is cru-cial because the downside of having few inductive biases in ViTs is that they underperform when trained from scratch on small or medium-size datasets and that, for now, the only well-performing pre-trained ViTs [9,14,45] publicly avail-able are trained on large collections of images that can be acquired, annotated and stored easier than point clouds. In this context, our main contribution is a ViT-based Li-DAR segmentation approach that compensates ViTs’ lack of inductive biases on our data and that achieves state-of-the-art results among projection-based methods. 
To the best of our knowledge, although works using ViT architectures on dense indoor point clouds already exists [63, 67], this is the first solution using ViTs for the LiDAR point clouds of autonomous driving datasets, which are significantly sparser and noisier than the dense depth-map-based points clouds found in indoor datasets. Our solution, RangeViT, starts with a classical range projection to obtain a 2D rep-resentation of the point cloud [11, 26, 36, 60]. Then, we extract patch-based visual tokens from this 2D map and feed them to a plain ViT encoder [14] to get deep patch representations. These representations are decoded using a lightweight network to obtain pixel-wise label predictions, which are projected back to the 3D point cloud. Our finding is that this ViT architecture needs three key ingredients to reach its peak performance. First, we lever-age ViT models pre-trained on large natural image datasets for LiDAR segmentation and demonstrate that our method benefits from them despite the fact that natural images dis-play little resemblance with range-projection images. Sec-ond, we further compensate for ViTs’ lack of inductive bias by substituting the classical linear embedding layer with a multi-layer convolutional stem. Finally, we refine pixel-wise predictions with a convolutional decoder and a skip connection from the convolutional stem to combine low-level but fine-grain features of the convolutional stem with the high-level but coarse predictions of the ViT encoder. In summary, our contributions are the following: (1) To the best of our knowledge, we are the first to exploit the strong representation learning capacity of vision trans-formers architectures for 3D semantic segmentation fromLiDAR point clouds. By revisiting, in the context of our problem, the tokenization process of the ViT’s encoder and adding a light-weight convolutional decoder for refining the coarse patch-wise ViT representations, we derive a sim-ple but effective projection-based LiDAR segmentation ap-proach, which we call RangeViT. (2)Furthermore, as shown in Fig. 1, the proposed approach allows one to harness ViT models pre-trained on the RGB image domain for the LiDAR segmentation problem. Indeed, despite the large gap between the two domains, we empirically demonstrate that using such pre-training strategies improves segmenta-tion performance. (3)Finally, our RangeViT approach, de-spite its simplicity, achieves state-of-the-art results among project-based segmentation methods.
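For readers unfamiliar with the "classical range projection" step, the following NumPy sketch shows one common way to turn a LiDAR sweep into a 2D range image via spherical projection, as popularised by earlier projection-based methods. The image size and vertical field of view below are assumed example values, not the settings used by RangeViT.

```python
import numpy as np

def range_projection(points, H=32, W=2048, fov_up_deg=10.0, fov_down_deg=-30.0):
    """points: (N, 3) LiDAR x, y, z coordinates.
    Returns an (H, W) range image in metres, with -1 where no point projects."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1) + 1e-8
    yaw = np.arctan2(y, x)                                   # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                                 # elevation
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    u = 0.5 * (1.0 - yaw / np.pi) * W                        # column index
    v = (1.0 - (pitch - fov_down) / (fov_up - fov_down)) * H # row index
    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)
    img = np.full((H, W), -1.0, dtype=np.float32)
    order = np.argsort(r)[::-1]                              # write far points first
    img[v[order], u[order]] = r[order]                       # so nearer points win collisions
    return img
```

The resulting (H, W) map can then be tokenised into patches exactly like an RGB image, which is what makes ViT backbones and their pre-trained weights reusable here.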
Azinovic_High-Res_Facial_Appearance_Capture_From_Polarized_Smartphone_Images_CVPR_2023
Abstract We propose a novel method for high-quality facial texture reconstruction from RGB images using a novel capturing routine based on a single smartphone, which we equip with an inexpensive polarization foil. Specifically, we turn the flashlight into a polarized light source and add a polarization filter on top of the camera. Leveraging this setup, we capture the face of a subject with cross-polarized and parallel-polarized light. For each subject, we record two short sequences in a dark environment under flash illumination with different light polarizations using the modified smartphone. Based on these observations, we reconstruct an explicit surface mesh of the face using structure from motion. We then exploit the camera and light co-location within a differentiable renderer to optimize the facial textures using an analysis-by-synthesis approach. Our method optimizes for high-resolution normal textures, diffuse albedo, and specular albedo using a coarse-to-fine optimization scheme. We show that the optimized textures can be used in a standard rendering pipeline to synthesize high-quality photo-realistic 3D digital humans in novel environments. (All data was captured at the Technical University of Munich.)
1. Introduction In recent years, we have seen tremendous advances in the development of virtual and mixed reality devices. At the same time, the commercial availability of such hardware has led to a massive interest in the creation of ’digital hu-man’ assets and photo-realistic renderings of human faces. In particular, the democratization to commodity hardware would open up significant potential for asset creation in video games, other home entertainment applications, or im-mersive teleconferencing systems. However, rendering a This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 16836 human face realistically in a virtual environment from ar-bitrary viewpoints with changing lighting conditions is an extremely difficult problem. It involves an accurate recon-struction of the face geometry and skin textures, such as the diffuse albedo, specular gain, or skin roughness. Tra-ditionally, this problem has been approached by recording data in expensive and carefully calibrated light stage cap-ture setups, under expert supervision. We seek to simplify this capture process to allow individuals to reconstruct their own faces, while keeping the quality degradation compared to a light stage to a minimum. The disentanglement of geometry and material of human faces is an extremely ill-posed problem. Current solutions involve a capture setup with multiple cameras and light sources, with millimeter-accurate calibration. A common approach to disentangling face skin surface from subsurface response is the use of polarization filters [9] in tandem with such expensive capture setups. Given such a carefully cali-brated capture setting, one can use differentiable rendering to estimate the individual skin parameters in an analysis-by-synthesis approach. While these methods do produce visually impressive results, they are limited to high-budget production studios. In this paper, we propose a capture setup consisting of only a smartphone and inexpensive polarization foils, which can be attached to the camera lens and flashlight. Inspired by light stage capture setups, a user captures two sequences of their face, one with perpendicular filter alignment, and one with parallel alignment. This allows for a two-stage op-timization, where we first reconstruct a high-resolution dif-fuse albedo texture of a user’s face from the cross-polarized capture, followed by recovery of the specular albedo, nor-mal map, and roughness from the parallel-polarized views. Data is captured in a dark room to avoid requiring pre-computation of an environment map. In addition to visually compelling novel view synthesis and relighting results, our method produces editable textures and face geometry. In summary, the key contributions of our project are: • We propose a commodity capture setup that combines a smartphone’s camera and flashlight with polarization foils. The polarization allows us to separate diffuse from specular parts, and to reconstruct the user’s face textures, such as diffuse albedo, specular albedo and normal maps. • Our proposed capture setting with the co-located cam-era and light enables separation of skin properties from illumination, which is of key importance for realistic rendering of faces. • We propose a coarse-to-fine optimization strategy with mip-mapping, which increases sharpness of the recon-structed appearance textures.2. 
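The principle behind the two captures can be summarised in a few lines: single-bounce specular reflection preserves the flash's polarization, so the cross-polarized image is approximately diffuse-only, while the parallel-polarized image contains diffuse plus specular. The sketch below is a crude separation for intuition only; the paper recovers the individual texture maps through differentiable, analysis-by-synthesis rendering rather than by simple image subtraction.

```python
import numpy as np

def separate_diffuse_specular(img_cross, img_parallel):
    """img_cross, img_parallel: (H, W, 3) linear-RGB captures under cross- and
    parallel-polarized flash. The cross-polarized capture blocks single-bounce
    specular reflection, so it is (approximately) diffuse-only."""
    diffuse = img_cross
    specular = np.clip(img_parallel - img_cross, 0.0, None)  # residual attributed to specular
    return diffuse, specular
```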
Related Work High-fidelity face appearance capture and reconstruction has received significant attention in the entertainment in-dustry for creating digital humans and more recently in the AR/VR community for generating realistic avatars. In our context, facial appearance reconstruction means recovering a set of high-resolution albedo, specular (gain and rough-ness) and normal maps. Over the years, physically-based skin scattering models have become ever more sophisticated [6, 26, 53]; however, their input texture quality remains the single most important factor to photo-realism. Polarization. For some time, polarization has been used to separate specular from diffuse [35,39,51]. These techniques rely on the fact that single bounce specular reflection does not alter the polarization state of incoming light. Riviere et al. [41] propose an approach to reconstruct reflectance in uncontrolled lighting, using the inherent polarization of natural illumination. Nogue et al. [38] recover SVBRDF maps of planar objects with near-field display illumination, exploiting Brewster angle properties. Deschaintre et al. [10] use polarization to estimate the shape and SVBRDF of an object with normal, diffuse, specular, roughness and depth maps from a single view. Dave et al. [8] propose a similar approach for multi-view data. In MoRF [48], a studio setup with polarization is used to reconstruct relightable neural radiance fields of a face. Lightstage capture systems. In their foundational work, Debevec et al. [9] introduced the Lightstage system to cap-ture human face reflectance using a dome equipped with controlled lights, separating the diffuse from the specular component using polarization filters. Follow-up work re-constructs high-resolution normal maps using photometric stereo [52], compensates for motion during the capture [50] and expands the captured area [19]. The proposed capture studios didn’t come without lim-itations, as the lighting environment needed to be tightly controlled, the lighting patterns involved took a relatively long time, and the polarization filters were challenging to set up for multiple cameras and lights. Fyffe et al. [14–17] proposed the use of color gradients and spectral multiplex-ing to reduce capture time. With the objective of designing a more practical system, Kampouris et al. [24] demonstrate that binary gradients are sufficient for separating diffuse from specular without polarization. Lattas et al. [28] use an array of monitors or tablets for a practical binary gradients capture studio. In line with this thread of research, Gotardo et al. [20] present a multi-view setup for dynamic facial tex-ture acquisition without the need for polarized illumination. Riviere et al. [40] build a similar lightweight system reintro-ducing polarization without active illumination, and model-ing subsurface scattering. This effort was refined to include global illumination and polarization modeling [54]. The 16837 Figure 2. Our optimization has three steps: In step 0, we capture data with a handheld smartphone which is equipped with polarization foils (on the camera, as well as on the flashlight; see Figure 3). We reconstruct the facial geometry and estimate camera poses based on all captured images using structure-from-motion and multi-view stereo. To ensure consistent texture parameterization across different subjects, we non-rigidly fit a FLAME mesh to the scan. 
In a subsequent photometric optimization step (step 1), we estimate a high-resolution diffuse texture of the skin from the cross-polarized data, as well as an initial normal map. The reconstructed geometry, diffuse and normal map are used as input for step 2 of the optimization. Using the parallel-polarized sequence, we estimate the specular gain and final normal map in a second photometric optimization. In addition, a global skin roughness value is optimized in this step. proposed solutions deliver impressive visual results, but re-quire exp
Guo_Class_Attention_Transfer_Based_Knowledge_Distillation_CVPR_2023
Abstract Previous knowledge distillation methods have shown impressive performance on model compression tasks; however, it is hard to explain how the knowledge they transfer helps improve the performance of the student network. In this work, we focus on proposing a knowledge distillation method that has both high interpretability and competitive performance. We first revisit the structure of mainstream CNN models and reveal that possessing the capacity to identify class-discriminative regions of the input is critical for a CNN to perform classification. Furthermore, we demonstrate that this capacity can be obtained and enhanced by transferring class activation maps. Based on our findings, we propose class attention transfer based knowledge distillation (CAT-KD). Different from previous KD methods, we explore and present several properties of the knowledge transferred by our method, which not only improve the interpretability of CAT-KD but also contribute to a better understanding of CNNs. While having high interpretability, CAT-KD achieves state-of-the-art performance on multiple benchmarks. Code is available at: https://github.com/GzyAftermath/CAT-KD.
1. Introduction Knowledge distillation (KD) transfers knowledge dis-tilled from the bigger teacher network to the smaller student network, aiming to improve the performance of the student network. Depending on the type of the transferred knowl-edge, previous KD methods can be divided into three cat-egories: based on transferring logits [3, 6, 11, 16, 33], fea-tures [2, 10, 17–19, 23, 24, 28], and attention [29]. Although KD methods that are based on transferring logits and fea-tures have shown their promising performance [2, 33], it is hard to explain how the knowledge they transferred helps to improve the performance of the student network, due to the uninterpretability of logits and features. Relatively, the principle of attention-based KD methods is more intuitive: *Corresponding author 12. . ....K𝜔𝑛𝑘 𝜔𝑛−1𝑘 𝜔2𝑘 𝜔1𝑘1X1ConvConvert CAMsGAP 12kGAP 1... 2k ... FC𝜔1𝑘𝜔2𝑘𝜔𝑛−1𝑘𝜔𝑛𝑘 . . . Normal CNN Converted structure...Figure 1. Illustration of the converted structure. After converting the FC layer into a convolutional layer with 1 ×1 kernel and mov-ing the position of the global average pooling layer, CAMs can be obtained during the forward propagation. it aims at telling the student network which part of the input should it focus on during the classification, which is real-ized by forcing the student network to mimic the transferred attention maps during training. However, though previous work AT [29] has validated the effectiveness of transferring attention, it does not present what role attention plays dur-ing the classification. This makes it hard to explain why telling the trained model where should it focus could im-prove its performance on the classification mission. Be-sides, the performance of the previous attention-based KD method [29] is less competitive compared with the methods that are based on transferring logits and features [2, 33]. In this work, we focus on proposing an attention-based KD method that has higher interpretability and better perfor-mance. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 11868 Figure 2. Visualization of CAMs corresponding to categories with Top 4 prediction scores for the given image. The predicted categories and their scores are reported in the picture. We start our work by exploring what role attention plays during classification. After revisiting the structure of the mainstream models, we find that with a little conversion (il-lustrated in Figure 1), class activation map (CAM) [34], a kind of class attention map which indicates the discrimina-tive regions of input for a specific category, can be obtained during the classification. Without changing the parame-ters and outputs, the classification process of the converted model can be viewed in two steps: (1) the model exploits its capacity to identify class discriminative regions of input and generate CAM for each category contained in the classifica-tion mission, (2) the model outputs the prediction score of each category by computing the average activation of the corresponding CAM. Considering that the converted model makes predictions by simply comparing the average activa-tion of CAMs, possessing the capacity to identify class dis-criminative regions of input is critical for CNN to perform classification. 
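The conversion illustrated in Figure 1 is straightforward to express in code: reusing the FC weights as a 1×1 convolution yields one CAM per class, and global average pooling of those CAMs reproduces the original logits. The helper below is a generic sketch of this equivalence, not tied to the authors' code.

```python
import torch.nn.functional as F

def cams_from_classifier(feature_map, fc_weight, fc_bias=None):
    """feature_map: (B, C, H, W) last conv features; fc_weight: (K, C) weights of the
    original FC classifier. Applying them as a 1x1 convolution gives one CAM per class,
    and spatially averaging the CAMs equals the original GAP -> FC logits."""
    cams = F.conv2d(feature_map, fc_weight[:, :, None, None], bias=fc_bias)  # (B, K, H, W)
    logits = cams.mean(dim=(2, 3))                                           # (B, K)
    return cams, logits
```

Because each logit is just the spatial mean of its CAM, comparing per-class average activations is exactly what the converted model does in step (2).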
The question is: can we enhance this capac-ity by offering hints about class discriminative regions of input during training? To answer this question, we propose class attention transfer (CAT). During CAT, the trained model is not required to predict the category of input, it is only forced to mimic the trans-ferred CAMs, which are normalized to ensure they only contain hints about class discriminative regions of input. Through experiments with CAT, we reveal that transferring only CAMs can train a model with high accuracy on the classification task, reflecting the trained model obtains the capacity to identify class discriminative regions of input. Besides, the performance of the trained model is influenced by the accuracy of the model offering the transferred CAMs. This further demonstrates that the capacity of identifying class discriminative regions can be enhanced by transfer-ring more precise CAMs. Based on our findings, we propose class attention trans-fer based knowledge distillation (CAT-KD), aiming to en-able the student network to achieve better performance by improving its capacity of identifying class discriminative regions. Different from previous KD methods transferring dark knowledge , we present why transferring CAMs to the trained model can improve its performance on the classifi-cation task. Moreover, through experiments with CAT, we reveal several interesting properties of transferring CAMs,which not only help to improve the performance and in-terpretability of CAT-KD but also contribute to a better understanding of CNN. While having high interpretability, CAT-KD achieves state-of-the-art performance on multiple benchmarks. Overall, the main contributions of our work are shown below: • We propose class attention transfer and use it to demonstrate that the capacity of identifying class dis-criminative regions of input, which is critical for CNN to perform classification, can be obtained and en-hanced by transferring CAMs. • We present several interesting properties of transfer-ring CAMs, which contribute to a better understanding of CNN. • We apply CAT to knowledge distillation and name it CAT-KD. While having high Interpretability, CAT-KD achieves state-of-the-art performance on multiple benchmarks.
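A class attention transfer objective built on this idea could look like the following sketch, where each CAM is normalized so that only the spatial pattern of class-discriminative regions is transferred and the student is trained to mimic it. The pooling step and the choice of an L2-normalized MSE are assumptions for illustration; they are one plausible instantiation, not necessarily the exact CAT loss used in the paper.

```python
import torch.nn.functional as F

def cat_loss(student_cams, teacher_cams, pool=2):
    """student_cams, teacher_cams: (B, K, H, W). Normalising each CAM keeps only its
    spatial pattern (where the discriminative regions are), not its magnitude."""
    s = F.avg_pool2d(student_cams, pool)        # optional smoothing / downsampling
    t = F.avg_pool2d(teacher_cams, pool)
    s = F.normalize(s.flatten(2), dim=-1)       # L2-normalise each CAM over space
    t = F.normalize(t.flatten(2), dim=-1)
    return F.mse_loss(s, t)
```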
Chowdhury_What_Can_Human_Sketches_Do_for_Object_Detection_CVPR_2023
Abstract Sketches are highly expressive, inherently capturing subjective and fine-grained visual cues. The exploration of such innate properties of human sketches has, however, been limited to image retrieval. In this paper, for the first time, we cultivate the expressiveness of sketches for the fundamental vision task of object detection. The end result is a sketch-enabled object detection framework that detects based on what you sketch – that “zebra” (e.g., one that is eating the grass) in a herd of zebras (instance-aware detection), and only the part (e.g., the “head” of a “zebra”) that you desire (part-aware detection). We further dictate that our model works (i) without knowing which category to expect at testing (zero-shot) and (ii) without requiring additional bounding boxes (as per fully supervised) or class labels (as per weakly supervised). Instead of devising a model from the ground up, we show an intuitive synergy between foundation models (e.g., CLIP) and existing sketch models built for sketch-based image retrieval (SBIR), which can already elegantly solve the task – CLIP to provide model generalisation, and SBIR to bridge the (sketch→photo) gap. In particular, we first perform independent prompting on both the sketch and photo branches of an SBIR model to build highly generalisable sketch and photo encoders on the back of the generalisation ability of CLIP. We then devise a training paradigm to adapt the learned encoders for object detection, such that the region embeddings of detected boxes are aligned with the sketch and photo embeddings from SBIR. Evaluating our framework on standard object detection datasets like PASCAL-VOC and MS-COCO shows that it outperforms both supervised (SOD) and weakly-supervised object detectors (WSOD) in zero-shot setups. Project Page: https://pinakinathc.github.io/sketch-detect
1. Introduction Sketches have been used from prehistoric times for hu-mans to express and record ideas [35, 76]. The level of ex-pressiveness [28, 41] they carry remains unparalleled today even in the face of language [14, 82] – recall that moment that you want to resort to pen and paper (or Zoom White-Object Detector CLIP Sketch Query SketchCLIP Sketch Triplet Loss CLIP PhotoObject DetectorCLIP Sketch(a) (b)(c) Photo PromptSketch Prompt Sketch PromptSketch Promptinstance aware part aware Figure 1. We train an object detector using SBIR models. (a) First, we train an FG-SBIR model using existing sketch–photo pairs that generalise to unseen categories. (b) To train the object detector module, we tile multiple object-level photos from SBIR datasets [75] and use its paired sketch encoding via a pre-trained sketch encoder to align the region embedding of detected boxes. (c) Inclusion of sketches for object detection opens several avenues like detecting a specific object for query sketch (e.g., detect a “ze-bra” eating grass) or part of an object (e.g., “head” of “zebra”). board) to sketch down an idea? Sketch research has also flourished over the past decade [16, 70, 90, 99], with a whole spectrum of works on tradi-tional tasks such as classification [30] and synthesis [15, 31, 54], and those more sketch-specific such as modelling visual abstraction [1, 59], style transfer [74] and continu-ous stroke fitting [17], to cute applications such as turning a sketch into a photo classifier [5, 37]. The expressiveness of sketches, however, has been only explored in the form of sketch-based image retrieval (SBIR) [19,70,94], especially the fine-grained [2,6,9] variant (FG-SBIR). Great strides have been made, with recent systems already reaching maturity for commercial adaptation [6] – a great testimony to how cultivating sketch expressiveness can make a real impact. In this paper, we ask the question – what can human sketches do for the fundamental vision tasks of object detec-tion? The envisaged outcome is, therefore, a sketch-enabled object detection framework that detects based on what you sketch, i.e., how youwant to express yourself. Sketching a “zebra eating the grass” (in Fig. 1) should detect “that” This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 15083 zebra from a herd of zebras (instance-aware detection), and it will also give you the freedom to be specific with parts (part-aware detection), so if the “head” of a “zebra” is what you would rather desire, then just sketch the very head. Instead of devising a sketch-enabled object detection model from the ground up, we show that an intuitive syn-ergy between foundation models (e.g., CLIP [64]) and off-the-shelf SBIR models [8,100] can already, rather elegantly, solve the problem – CLIP to provide model generalization, and SBIR to bridge the (sketch →photo) gap. In partic-ular, we adapt CLIP to build sketch and photo encoders (branches in a common SBIR model) by learning indepen-dent prompt vectors [42, 71] separately for both modalities. More specifically, during training, the learnable prompt vectors are prepended into the input sequence of the first transformer layer of CLIP’s ViT backbone [22] while keep-ing the rest frozen. As such, we inject model generalization into the learned sketch and photo distributions. 
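The prompting scheme can be sketched as follows: a small set of learnable prompt tokens is prepended to the (frozen) CLIP ViT's token sequence, and only the prompts receive gradients. The snippet is schematic; frozen_vit here is a stand-in module that consumes an already-embedded token sequence (which is not CLIP's actual public API), and the prompt count and dimensionality are assumed values.

```python
import torch
import torch.nn as nn

class PromptedViT(nn.Module):
    """Minimal sketch of visual prompt tuning: learnable prompt tokens are prepended to
    the patch-token sequence of a frozen ViT; only the prompts are trained."""
    def __init__(self, frozen_vit, n_prompts=8, dim=768):
        super().__init__()
        self.vit = frozen_vit
        for p in self.vit.parameters():
            p.requires_grad_(False)               # keep the CLIP backbone frozen
        self.prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)

    def forward(self, patch_tokens):              # (B, N, D) already-embedded patches
        B = patch_tokens.shape[0]
        prompts = self.prompts.unsqueeze(0).expand(B, -1, -1)
        tokens = torch.cat([prompts, patch_tokens], dim=1)
        return self.vit(tokens)                   # frozen transformer blocks
```

Instantiating this wrapper twice, once per modality, gives independently prompted sketch and photo encoders that still share the frozen CLIP weights.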
Next, we devise a training paradigm to adapt the learned encoders for object detection, such that the region embeddings of detected boxes are aligned with the sketch and photo em-beddings from SBIR. This allows our object detector to train without requiring additional training photos (bounding boxes and class labels) from auxiliary datasets. To make our sketch-based detector more interesting (general-purpose [13, 57]), we further dictate it also works in a zero-shot manner. For that, following [10], we ex-tend object detection from a pre-defined fixed-set setup to an open-vocab setup. Specifically, we replace the clas-sification heads in object detectors with prototype learn-ing [49], where the encoded query sketch features act as the support set (or prototypes). Next, the model is trained under the weakly supervised object detection (WSOD) set-ting [10,78], using a multi-category cross-entropy loss over the prototypes of all possible categories or instances. How-ever, while SBIR is trained using object-level (single object) sketch/photo pairs, object detection works on image-level (multiple categories). Hence, to train object detectors using SBIR, we also need to bridge the gap between object and image-level features. Towards this, we use a data augmenta-tion trick that is embarrassingly simple yet highly effective for robustness towards corruption and generalisation to out-of-vocab [101, 102] – we randomly select n={1, . . . , 7} photos from SBIR datasets [30,75] and arbitrarily tile them on a blank canvas (similar to CutMix [101]). In summary, our contributions are (i) for the first time cultivating the expressiveness of human sketches for object detection, (ii) sketch-based object detector that detects what you intend to express in your sketch, (iii) an object detec-tor that is both instance-aware and part-aware, in addition to performing conventional category-level detection. (iv) a novel prompt learning setup to marry CLIP and SBIR to build the sketch-aware detector that works without needingbounding box annotations (as supervised [67]), class labels (as weakly supervised [10]), and in a zero-shot manner. (v) results outperform both supervised (SOD) and weakly su-pervised object detectors (WSOD) on zero-shot setup.
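The tiling augmentation mentioned above is simple enough to show directly. The sketch below pastes a random number of object-level photos onto a blank canvas to mimic image-level (multi-object) scenes; the canvas size, the assumption that every photo fits inside the canvas, and the returned paste locations are illustrative choices, not the exact recipe from the paper.

```python
import random
import torch

def tile_augment(photos, canvas_hw=(512, 512), n_max=7):
    """photos: list of (3, h, w) tensors of single-object photos, each assumed to be
    smaller than the canvas. Randomly pastes 1..n_max of them onto a blank canvas."""
    H, W = canvas_hw
    canvas = torch.zeros(3, H, W)
    boxes = []
    n = random.randint(1, min(n_max, len(photos)))
    for img in random.sample(photos, k=n):
        _, h, w = img.shape
        y = random.randint(0, H - h)
        x = random.randint(0, W - w)
        canvas[:, y:y + h, x:x + w] = img
        boxes.append((x, y, x + w, y + h))   # record where each object was pasted
    return canvas, boxes
```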
Ci_UniHCP_A_Unified_Model_for_Human-Centric_Perceptions_CVPR_2023
Abstract Human-centric perceptions (e.g., pose estimation, human parsing, pedestrian detection, person re-identification, etc.) play a key role in industrial applications of visual models. While specific human-centric tasks have their own relevant semantic aspects to focus on, they also share the same underlying semantic structure of the human body. However, few works have attempted to exploit such homogeneity and design a general-purpose model for human-centric tasks. In this work, we revisit a broad range of human-centric tasks and unify them in a minimalist manner. We propose UniHCP, a Unified Model for Human-Centric Perceptions, which unifies a wide range of human-centric tasks in a simplified end-to-end manner with the plain vision transformer architecture. With large-scale joint training on 33 human-centric datasets, UniHCP can outperform strong baselines on several in-domain and downstream tasks by direct evaluation. When adapted to a specific task, UniHCP achieves new SOTAs on a wide range of human-centric tasks, e.g., 69.8 mIoU on CIHP for human parsing, 86.18 mA on PA-100K for attribute prediction, 90.3 mAP on Market1501 for ReID, and 85.8 JI on CrowdHuman for pedestrian detection, performing better than specialized models tailored for each task. The code and pretrained model are available at https://github.com/OpenGVLab/UniHCP.
1. Introduction Research on human-centric perceptions has come a long way with tremendous advancements in recent years. Many methods have been developed to enhance the performance of pose estimation [9, 25, 60, 91], pedestrian detection [4, 62, 63, 76], person re-identification [42, 86, 101] (ReID), and many other human-centered tasks. These significant progress play a key role in advancing the applications of vi-*Equal contribution. †Corresponding author. AttributeDetectionParsingKeypointsReID … age 17-30 jeansUniHCP Figure 1. UniHCP unifies 5 human-centric tasks under one model and is trained on a massive collection of human-centric datasets. sual models in numerous fields, such as sports analysis [11], autonomous driving [97], and electronic retailing [27]. Although different human-centric perception tasks have their own relevant semantic information to focus on, those semantics all rely on the same basic structure of the human body and the attributes of each body part [64, 81]. In light of this, there have been some attempts trying to exploit such homogeneity and train a shared neural network jointly with distinct human-centric tasks [28,29,46,48,61,71,77,87,98]. For instance, human parsing has been trained in conjunc-tion with human keypoint detection [46, 61, 98], pedestrian attribute recognition [87], pedestrian detection [48] or per-son re-identification [28]. The experimental results of these works empirically validate that some human-centric tasks may benefit each other when trained together. Motivated by these works, a natural expectation is that a more ver-satile all-in-one model could be a feasible solution for gen-eral human-centric perceptions, which can utilize the homo-geneity of human-centric tasks for improving performance, enable fast adaption to new tasks, and decrease the burden of memory cost in large-scale multitask system deployment compared with specific models to specific tasks. However, unifying distinct human-centric tasks into a This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 17840 general model is challenging considering the data diversity and output structures. From the data’s perspective, images in different human-centric tasks and different datasets have different resolutions and characteristics (e.g., day and night, indoor and outdoor), which calls for a robust representative network with the capability to accommodate them. From the perspective of output, the annotations and expected out-puts of different human-centric tasks have distinct structures and granularities. Although this challenge can be bypassed via deploying separate output heads for each task/dataset, it is not scalable when the number of tasks and datasets is large. In this work, we aim to explore a simple, scalable for-mulation for unified human-centric system and, for the first time, propose a Uni fied model for H uman-C entric Perceptions (UniHCP). As shown in Figure.1, UniHCP uni-fies and simultaneously handles five distinct human-centric tasks, namely, pose estimation, semantic part segmentation, pedestrian detection, ReID, and person attribute recogni-tion. 
Motivated by the extraordinary capacity and flexibil-ity of the vision transformers [43, 94], a simple yet unified encoder-decoder architecture with the plain vision trans-former is employed to handle the input diversity, which works in a simple feedforward and end-to-end manner, and can be shared across all human-centric tasks and datasets to extract general human-centric knowledge. To gener-ate the output for different tasks with the unified model, UniHCP defines Task-specific Queries, which are shared among all datasets with the same task definition and inter-preted into different output units through a Task-guided In-terpreter shared across different datasets and tasks. With task-specific queries and the versatile interpreter, UniHCP avoids the widely used task-specific output heads, which minimizes task-specific parameters for knowledge sharing and make backbone-encoded features reusable across tasks. Own to these designs, UniHCP is suitable and easy to perform multitask pretraining at scale. To this end, we pre-trained an UniHCP model on a massive collection of 33 labeled human-centric datasets. By harnessing the abun-dant supervision signals of each task, we show such a model can simultaneously handle these in-pretrain tasks well with competitive performance compared to strong baselines rely-ing on specialized architectures. When adapted to a specific task, both in-domain and downstream, our model achieves new SOTAs on several human-centric task benchmarks. In summary, the proposed model has the following properties: ■Unifying five distinct human-centric tasks and han-dling them simultaneously. ■Shared encoder-decoder network based on plain trans-former. ■Simple task-specific queries identifying the outputs. ■Maximum weight sharing (99.97% shared parameters) with a task-guided interpreter.■Trainable at scale and demonstrates competitive per-formance compared to task-specialized models.
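A very rough sketch of the task-specific-query idea follows: each task owns its own learnable queries, a shared transformer decoder lets them attend to image features, and a single shared projection stands in for the task-guided interpreter. This is a schematic approximation only; the real interpreter, query counts, and decoder configuration in UniHCP are more elaborate.

```python
import torch
import torch.nn as nn

class TaskQueryDecoder(nn.Module):
    """Schematic sketch: per-task learnable queries + a decoder and output projection
    shared by all tasks, so task-specific parameters are kept to a minimum."""
    def __init__(self, queries_per_task, dim=256):
        super().__init__()
        self.queries = nn.ParameterDict(
            {t: nn.Parameter(torch.randn(n, dim) * 0.02) for t, n in queries_per_task.items()})
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.interpreter = nn.Linear(dim, dim)    # shared stand-in for the task-guided interpreter

    def forward(self, image_tokens, task):        # image_tokens: (B, N, D)
        q = self.queries[task].unsqueeze(0).expand(image_tokens.shape[0], -1, -1)
        out = self.decoder(q, image_tokens)       # queries cross-attend to image features
        return self.interpreter(out)

# example (hypothetical query counts): one query per keypoint, per part class, per identity
decoder = TaskQueryDecoder({'pose': 17, 'parsing': 20, 'reid': 1})
```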
Chen_VoxelNeXt_Fully_Sparse_VoxelNet_for_3D_Object_Detection_and_Tracking_CVPR_2023
Abstract 3D object detectors usually rely on hand-crafted proxies, e.g., anchors or centers, and translate well-studied 2D frameworks to 3D. Thus, sparse voxel features need to be densified and processed by dense prediction heads, which inevitably costs extra computation. In this paper, we instead propose VoxelNeXt for fully sparse 3D object detection. Our core insight is to predict objects directly based on sparse voxel features, without relying on hand-crafted proxies. Our strong sparse convolutional network VoxelNeXt detects and tracks 3D objects entirely through voxel features. It is an elegant and efficient framework, with no need for sparse-to-dense conversion or NMS post-processing. Our method achieves a better speed-accuracy trade-off than other mainstream detectors on the nuScenes dataset. For the first time, we show that a fully sparse voxel-based representation works decently for LIDAR 3D object detection and tracking. Extensive experiments on the nuScenes, Waymo, and Argoverse2 benchmarks validate the effectiveness of our approach. Without bells and whistles, our model outperforms all existing LIDAR methods on the nuScenes tracking test benchmark. Code and models are available at github.com/dvlab-research/VoxelNeXt.
1. Introduction 3D perception is a fundamental component in au-tonomous driving systems. 3D detection networks take sparse point clouds or voxels as input, and localize and cat-egorize 3D objects. Recent 3D object detectors [40, 49, 57] usually apply sparse convolutional networks (Sparse CNNs) [53] for feature extraction owing to its efficiency. Inspired by 2D object detection frameworks [14, 38], an-chors [12, 53] or centers [57], i.e., dense point anchors in CenterPoint [57], are commonly utilized for prediction. Both of them are hand-crafted and taken as intermediate proxies for 3D objects. Anchors and centers are designed for regular and grid-structured image data in the first place, and do not consider sparsity and irregularity of 3D data. To employ these proxy representations, the main stream of detectors [12, 40, 57] CenterPoint Input V oxelNeXt01Figure 1. Visualization of input and heatmaps of CenterPoint in BEV for Car. Most values in the heatmaps are nearly zero, while the dense head computes over all BEV features, which is wasteful. convert 3D sparse features to 2D dense features, so as to build a dense detection head for the ordered anchors or centers. Albeit useful, this dense head tradition leads to other limitations, including inefficiency and complicated pipelines , as explained below. In Fig. 1, we visualize the heatmap in CenterPoint [57]. It is clear that a large portion of space has nearly zero pre-diction scores. Due to inherent sparsity and many back-ground points, only a small number of points have re-sponses, i.e., less than 1% for Car class on average of nuScenes validation set. However, the dense prediction head computes over all positions in the feature map, as re-quired by the dense convolution computation. They not only waste much computation, but also complicate detec-tion pipelines with redundant predictions. It requires to use non-maximum suppression (NMS) like post-processing to remove duplicate detections, preventing the detector from being elegant. These limitations motivate us to seek alter-native sparse detection solutions. In this paper, we instead propose VoxelNeXt . It is a simple, efficient, and post-processing-free 3D object detec-tor. The core of our design is a voxel-to-object scheme, which directly predicts 3D objects from voxel features, with a strong fully sparse convolutional network. The key ad-vantage is that our approach can get rid of anchor proxies, sparse-to-dense conversion, region proposal networks, and This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 21674 OutputInputRoIPooingAnchors/CentersdenseheadDensefeaturesInputFC / SpConvM+NMSMainstream 3D DetectorsVoxelNeXt MSparse Max-poolV oxelfeatures Sparse CNN Output Figure 2. Pipelines of mainstream 3D object detectors and V oxelNeXt. These 3D detectors [12,40,57] rely on sparse-to-dense conversion, anchors/centers, and dense heads with NMS. RoI pooling is an option for two-stage detectors [12, 40]. In contrast, V oxelNeXt is a fully sparse convolutional network, which predicts results directly upon voxel features, with either fully connected layers or sparse convolutions. other complicate components. We illustrates the pipelines of mainstream 3D detectors and ours in Fig. 2. High inference efficiency is due to our voxel-to-object scheme avoiding dense feature maps. 
It predicts only upon sparse and necessary locations, as listed in Tab. 1 with com-parison to CenterPoint [57]. This representation also makes VoxelNeXt easily extended to 3D tracking with an offline tracker. Previous work [57] only tracks for the predicted object centers, which might involve prediction bias to its positions. In V oxelNeXt, the query voxels ,i.e., the voxels for box prediction, can also be tracked for association. Recently, FSD [16] exploits the fully sparse framework. Motivated by V oteNet [36], it votes for object centers and resorts to iterative refinement. Since 3D sparse data is gen-erally scattered on object surfaces, this voting process in-evitably introduces bias or error. Consequently, refinement, such as iterative group correction, is needed to ensure final accuracy. The system is complicated by its heavy belief in object centers. FSD [16] is promising at the large-range Ar-goverse2, while its efficiency is inferior to ours, as in Fig. 3. To demonstrate the effectiveness of V oxelNeXt, we evaluate our models on three large-scale benchmarks of nuScenes [3], Waymo [44], Argoverse2 [52] datasets. V ox-elNeXt achieves leading performance with high efficiency on 3D object detection on both these benchmarks. It also yields state-of-the-art performance on 3D tracking. With-out bells and whistles, it ranks 1stamong all LIDAR-only entries on the nuScenes tracking test split [3].
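A minimal sketch of the voxel-to-object scheme is given below: every occupied voxel's feature directly predicts class scores and a box, and the top-scoring "query" voxels are kept without ever building a dense BEV map. The head widths, box parameterization, and plain top-k selection are assumptions for illustration, not VoxelNeXt's exact head; in particular, duplicate removal (e.g., the sparse max-pooling shown in the pipeline figure) is omitted here.

```python
import torch
import torch.nn as nn

class SparseVoxelHead(nn.Module):
    """Sketch of a fully sparse head: per-voxel classification and box regression,
    computed only on occupied voxels (no dense feature map, no NMS in this sketch)."""
    def __init__(self, dim=128, n_classes=10, box_dim=7):
        super().__init__()
        self.cls = nn.Linear(dim, n_classes)
        self.reg = nn.Linear(dim, box_dim)        # e.g. (dx, dy, dz, w, l, h, yaw)

    def forward(self, voxel_feats, voxel_coords, k=200):
        # voxel_feats: (N, C) features of occupied voxels; voxel_coords: (N, 3) voxel indices
        scores = self.cls(voxel_feats).sigmoid()  # (N, n_classes)
        boxes = self.reg(voxel_feats)             # (N, box_dim)
        best, labels = scores.max(dim=1)
        keep = best.topk(min(k, best.numel())).indices   # the "query" voxels
        return boxes[keep], labels[keep], best[keep], voxel_coords[keep]
```

Because the retained query voxels carry explicit coordinates, they can also be matched across frames for the tracking use case described above.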
Guo_Zero-Shot_Generative_Model_Adaptation_via_Image-Specific_Prompt_Learning_CVPR_2023
Abstract Recently, CLIP-guided image synthesis has shown appealing performance in adapting a pre-trained source-domain generator to an unseen target domain. It does not require any target-domain samples but only the textual domain labels. The training is highly efficient, e.g., a few minutes. However, existing methods still have some limitations in the quality of generated images and may suffer from the mode collapse issue. A key reason is that a fixed adaptation direction is applied for all cross-domain image pairs, which leads to identical supervision signals. To address this issue, we propose an Image-specific Prompt Learning (IPL) method, which learns specific prompt vectors for each source-domain image. This produces a more precise adaptation direction for every cross-domain image pair, endowing the target-domain generator with greatly enhanced flexibility. Qualitative and quantitative evaluations on various domains demonstrate that IPL effectively improves the quality and diversity of synthesized images and alleviates mode collapse. Moreover, IPL is independent of the structure of the generative model, such as generative adversarial networks or diffusion models. Code is available at https://github.com/Picsart-AI-Research/IPL-Zero-Shot-Generative-Model-Adaptation.
1. Introduction In recent years, image synthesis using generative adversarial networks (GANs) [11] has developed rapidly. State-of-the-art methods can generate images that are hard to distinguish from real data [14, 20, 21, 46, 50]. However, GAN-based methods rely heavily on vast quantities of training examples and adopt a cumbersome adversarial training scheme that generally costs many hours of training time. Unfortunately, in many real-world scenarios, data acquisition is difficult or expensive. For example, in artistic domains, it is impossible to have artists make thousands of creations. The high training cost is also unacceptable on some embedded devices, e.g., cellphones. To address these issues, researchers have begun to focus on generative model adaptation. The goal of this task is to adapt a pre-trained source-domain generator to a target domain with limited data. Many few-shot GAN-based methods have been proposed, such as TGAN [48], FreezeD [30], MinGAN [47], ADA [18], DiffAug [53], IDC [33], and RSSA [49]. However, these methods still require some training images of the target domain and follow the adversarial training scheme. As a pioneering work, StyleGAN-NADA [8] (NADA for short) proposes a zero-shot adaptation method, which only requires textual domain labels and discards the cumbersome adversarial training scheme by introducing a pre-trained CLIP model. Although efficient, it still has obvious deficiencies, i.e., limited quality and mode collapse of the generated images. As shown in Fig. 1, we adapt a pre-trained generator of the "Photo" domain to the "Disney", "Anime painting", "Wall painting", and "Ukiyo-e" domains. For the results of NADA [8], we notice that the generated images of the same target domain always show some homogeneous patterns that degrade image quality and diversity, such as deep nasolabial folds in "Disney", squinting eyes in "Anime painting", red cheeks in "Wall painting", and blue eyebrows in "Ukiyo-e" (yellow box areas). By exploring the factors behind this phenomenon, we find that the key factor is the fixed adaptation direction produced by manually designed prompts. Sharing the direction for all cross-domain image pairs leads to identical supervision signals for the model adaptation. Consider the example of adapting a generator of the "Human" domain to the "Tolkien elf" domain, as shown in Fig. 2. Previous works [8, 22] adopt manually designed prompts (e.g., "A photo of a") plus the domain label to produce a fixed adaptation direction, which is shared by all cross-domain image pairs (Fig. 2 (a)) in the adaptation process. We argue that this constraint is too restrictive and suppresses image-specific features, leading to homogeneous generated patterns. In this paper, we propose an Image-specific Prompt Learning (IPL) method to address the above issue. The motivation is to set more precise and diversified adaptation directions by customizing more image-specific prompts, for instance "Asian girl", "Curly hair lady", and "Elder glass man" (Fig. 2 (b)).
These adaptation directions endow the target-domain generator with high flexibility to synthesize more diversified images. Figure 2. An illustration of our motivation. Previous methods adopt manual prompts to compute a fixed adaptation direction for all cross-domain image pairs, while our method learns image-specific prompts to produce more precise and diversified adaptation directions. The proposed IPL is a two-stage method. In Stage 1, a latent mapper is trained to produce an image-specific set of prompt vectors conditioned on each source-domain image via a contrastive training scheme. The learned prompt vectors contain more specific and diversified features of the source-domain images than the fixed prompt vectors. We further propose a domain regularization loss to ensure that the learned prompt vectors are compatible with the target domain. In Stage 2, we compute a more precise and diversified adaptation direction for each cross-domain image pair, and train the target-domain generator with an adaptive directional CLIP loss, which can be viewed as an improved version of the Directional CLIP Loss [8]. As shown in Fig. 1, our method alleviates the mode collapse issue well. Extensive experiments across a wide range of domains demonstrate that the proposed IPL effectively improves the quality of synthesized images and overcomes the mode collapse issue. User studies and ablation studies are also conducted to validate the effectiveness of our method. It is worth noting that our proposed IPL method is independent of the structure of the generative model, and can be applied to recent diffusion models [13, 27, 31, 35, 41-43, 51]. Thus we also combine IPL with diffusion models and obtain a more robust and stronger generative capacity, especially on complex images, which shows the high effectiveness and adaptability of our approach.
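As a rough sketch of the loss that Stage 2 builds on, the snippet below implements a directional CLIP-style objective in PyTorch: the text-space direction is computed per image (in IPL, from the learned image-specific prompts) and the image-space direction of each source/adapted pair is pulled toward it. This is our own illustrative code with random stand-in embeddings, not the released IPL implementation.

```python
import torch
import torch.nn.functional as F

def directional_clip_loss(e_img_src, e_img_tgt, e_txt_src, e_txt_tgt):
    """1 - cosine similarity between the image-space and text-space adaptation directions.

    All inputs are (B, D) CLIP-style embeddings; in IPL the per-image text
    embeddings come from learned, image-specific prompt vectors rather than
    a single hand-written prompt shared by the whole batch.
    """
    d_img = F.normalize(e_img_tgt - e_img_src, dim=-1)   # how each generated image moved
    d_txt = F.normalize(e_txt_tgt - e_txt_src, dim=-1)   # where its own prompt says it should move
    return (1.0 - (d_img * d_txt).sum(dim=-1)).mean()

# Stand-in embeddings: with image-specific prompts, d_txt differs per sample;
# with a fixed manual prompt, every row of (e_txt_tgt - e_txt_src) would be identical.
B, D = 4, 512
g = torch.Generator().manual_seed(0)
loss = directional_clip_loss(torch.randn(B, D, generator=g), torch.randn(B, D, generator=g),
                             torch.randn(B, D, generator=g), torch.randn(B, D, generator=g))
print(loss.item())
```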
Iofinova_Bias_in_Pruned_Vision_Models_In-Depth_Analysis_and_Countermeasures_CVPR_2023
Abstract Pruning—that is, setting a significant subset of the parameters of a neural network to zero—is one of the most popular methods of model compression. Yet, several recent works have raised the issue that pruning may induce or exacerbate bias in the output of the compressed model. Despite existing evidence for this phenomenon, the relationship between neural network pruning and induced bias is not well-understood. In this work, we systematically investigate and characterize this phenomenon in Convolutional Neural Networks for computer vision. First, we show that it is in fact possible to obtain highly-sparse models, e.g. with less than 10% remaining weights, which do not decrease in accuracy nor substantially increase in bias when compared to dense models. At the same time, we also find that, at higher sparsities, pruned models exhibit higher uncertainty in their outputs, as well as increased correlations, which we directly link to increased bias. We propose easy-to-use criteria which, based only on the uncompressed model, establish whether bias will increase with pruning, and identify the samples most susceptible to biased predictions post-compression. Our code can be found at https://github.com/IST-DASLab/pruned-vision-model-bias.
1. Introduction The concept of "bias" in machine learning models spans a range of considerations in terms of statistical, performance, and social metrics. Different definitions can lead to different relationships between bias and accuracy. For instance, if bias is defined in terms of accuracy disparity between identity groups, then accuracy in the "stronger" group may have to be reduced in order to reduce model bias. Several sources of bias have been identified in this context. For example, bias in datasets commonly used to train machine learning models [4, 5, 53] can severely impact outputs, and may be difficult or even impossible to correct during training. The choice of model architecture, training methods, evaluation, and deployment can create or exacerbate bias [2, 42, 43]. One potential source of bias which is relatively less investigated is the fact that machine learning models, and in particular deep neural networks, are often compressed for efficiency before being deployed. Seminal work by Hooker et al. [29] and its follow-ups, e.g. [28, 38], provided examples where model compression, and in particular pruning, can exacerbate bias by leading models to perform poorly on "unusual" data, which can frequently coincide with marginalized groups. Given the recent popularity of compression methods in deployment settings [13, 18, 19, 27] and the fact that, for massive models, compression is often necessary to enable model deployment, these findings raise the question of whether the bias due to compression can be exactly characterized, and in particular whether bias is an inherent side-effect of the model compression process. In this paper, we perform an in-depth analysis of bias in compressed vision models, providing new insights on this phenomenon, as well as a set of practical, effective criteria for identifying samples susceptible to biased predictions, which can be used to significantly attenuate bias. Our work starts from a common setting to study bias and bias mitigation [28, 29, 40, 50]: we study properties of sparse residual convolutional neural networks [25], in particular ResNet18, applied for classification on the CelebA dataset [41]. Then, we validate our findings across other CNN architectures and other datasets. To study the impact of sparsity, we train highly accurate models with sparsity ranging from 80% to 99.5%, using the standard gradual magnitude pruning (GMP) approach [18, 21, 22, 55]. We consider bias in dense and sparse models from two perspectives: systematic bias, which refers to consistent errors in the model output, and category bias, which refers to violations of fairness metrics associated with protected groups. On the positive side, our analysis shows that the GMP approach can produce models that are highly sparse, i.e. 90-95% of pruned weights, without a significant increase in any bias-related metrics. Yet, this requires care: we show that shared, jointly-trained representations are significantly less susceptible to bias, and so careful choices of training procedure are needed for good results. On the other hand, at very high sparsities (95%-99.5%) we do observe a non-trivial increase in category bias for the sparse models, for specific protected attributes. We perform an in-depth study of this
phenomenon, correlating the increase in bias with the increased uncertainty in the model outputs induced by sparsity. Leveraging insights from our analysis, we provide a simple set of criteria and techniques based on threshold calibration and overriding decisions for sensitive samples, which we show to have a significant effect on bias reduction. The latter use only information found in the original dense model.
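For reference, the snippet below sketches the standard gradual magnitude pruning recipe mentioned above (a cubic sparsity schedule plus magnitude-based masking). It is our own illustrative code under those common assumptions, not the authors' training pipeline, and the function names are ours.

```python
import torch

def gmp_sparsity(step, begin, end, final_sparsity, initial_sparsity=0.0):
    """Cubic sparsity schedule commonly used for gradual magnitude pruning."""
    if step < begin:
        return initial_sparsity
    if step >= end:
        return final_sparsity
    frac = (step - begin) / (end - begin)
    return final_sparsity + (initial_sparsity - final_sparsity) * (1.0 - frac) ** 3

def magnitude_mask(weight, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    k = int(sparsity * weight.numel())
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

# Toy run: prune one weight matrix to 95% sparsity over 1000 steps.
w = torch.randn(256, 256)
for step in range(0, 1001, 250):
    s = gmp_sparsity(step, begin=0, end=1000, final_sparsity=0.95)
    mask = magnitude_mask(w, s)
    print(step, round(s, 3), round(1.0 - mask.mean().item(), 3))  # target vs. achieved sparsity
```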
He_Compositor_Bottom-Up_Clustering_and_Compositing_for_Robust_Part_and_Object_CVPR_2023
Abstract In this work, we present a robust approach for joint part and object segmentation. Specifically, we reformulate object and part segmentation as an optimization problem and build a hierarchical feature representation including pixel, part, and object-level embeddings to solve it in a bottom-up clustering manner. Pixels are grouped into several clusters where the part-level embeddings serve as cluster centers. Afterwards, object masks are obtained by compositing the part proposals. This bottom-up interaction is shown to be effective in integrating information from lower semantic levels to higher semantic levels. Based on that, our novel approach Compositor produces part and object segmentation masks simultaneously while improving the mask quality. Compositor achieves state-of-the-art performance on PartImageNet and Pascal-Part by outperforming previous methods by around 0.9% and 1.3% on PartImageNet, and 0.4% and 1.7% on Pascal-Part, in terms of part and object mIoU, and demonstrates better robustness against occlusion by around 4.4% and 7.1% on part and object, respectively.
1. Introduction Detecting objects and parsing them into semantic parts is a fundamental ability of the human visual system. When viewing images, humans not only detect, segment, and classify objects but also segment their semantic parts and identify them. This gives a hierarchical representation that enables a detailed and interpretable understanding of the object, which is useful for downstream tasks. For example, humans can estimate the pose of a tiger based on the spatial configuration of its parts and hence judge whether it is about to attack or if it is peacefully sleeping. It is conjectured by cognitive psychologists [3, 27] that these hierarchical representations are constructed in a bottom-up manner where humans first perceive parts and then group them together to form objects. By contrast, the computer vision literature on semantic segmentation mostly concentrates on the object level, neglecting intermediate part representations, although object and part segmentation have been shown to be mutually beneficial to each other [13, 43]. We emphasize that parts help many other tasks such as pose estimation [11, 46], detection [2, 7], and fine-grained recognition [49]. In addition, exploiting local features or part information can increase the robustness of object models against occlusion [1, 25, 40]. Recently, He et al. [20] proposed PartImageNet, where both part and object annotations are provided. Meanwhile, their studies showed that naively using part annotation as deep supervision can improve object segmentation. This motivates us to further design a better interaction pipeline between objects and parts for high-quality segmentation. In this work, we present a strategy for jointly segmenting parts and objects in a bottom-up process. Specifically, we consider a hierarchical representation of images in terms of pixels, parts, and objects. We learn feature embeddings which enable us to reformulate semantic segmentation as an optimization problem whose goal is to find feature centroids that represent parts and objects. As shown in Figure 1, our method uses a bottom-up strategy where pixels are grouped to form part embeddings which, in turn, are grouped to form object embeddings. We implement this in two steps. First, we cluster image pixels to make proposals for object parts. Here the feature embeddings are learned so that pixels belonging to the same part have similar features. Second, we use a similar approach to compose these part proposals to segment the whole object, which involves selecting some part proposals and rejecting others. Our complete algorithm, Compositor, for segmenting parts and objects consists of these clustering and compositing steps. This novel algorithm not only helps us to build a hierarchical segmentation model but also increases the robustness of the model against occlusion, since our parts are clustered based on the similarity of pixel features, which are less affected by occlusion compared to other context-based methods. Moreover, objects are constructed using parts, which helps minimize the influence of occlusion. We verify Compositor's effectiveness on both PartImageNet [20] and Pascal-Part [7], where the former focuses on
single-instance scenarios and the latter contains more multi-instance scenarios. Figure 1. Paradigm comparison among the traditional FCN-based method, the mask classification-based method, and our proposed Compositor for object segmentation; we show an example with a single object instance here for simplicity. We show that Compositor generates high-quality semantic parts from pixels, which further benefits object segmentation. Quantitatively, Compositor achieves 61.44% and 71.78% mIoU on part and object segmentation with ResNet-50 [22], outperforming the single-task specialized MaskFormer [9] by 1.1% and 1.6%, respectively. We obtain consistent improvements on Pascal-Part, surpassing MaskFormer by 0.4% and 1.7% in terms of part and object mIoU. We further show the robustness of Compositor against occlusion with Occluded-PartImageNet, which is obtained by adding artificial occluders to the original images in PartImageNet following the protocol of OccludedPASCAL3D+ [40]. As a result, Compositor outperforms MaskFormer by around 4.4% and 7.1% on part and object mIoU, respectively. Ablation studies are conducted to validate the effectiveness of our key designs. Qualitative visualization results on both clean and occluded images are presented. Error analysis is conducted to better understand the model and guide future work. In summary, we make the following contributions in this work: 1. We propose a bottom-up strategy for segmentation, where we first generate parts from pixels and then composite parts into objects. This strategy gives us a joint solution for part and object segmentation.
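The clustering-and-compositing procedure described above can be sketched as follows; this is a toy PyTorch illustration with stand-in tensors and names of our choosing, not the released Compositor implementation.

```python
import torch
import torch.nn.functional as F

def cluster(pixel_feats, centers, iters=3, tau=0.1):
    """Soft k-means style grouping: pixels -> part centers.

    pixel_feats: (HW, D), centers: (K, D). Returns assignments (HW, K) and updated part embeddings.
    """
    for _ in range(iters):
        logits = pixel_feats @ centers.t() / tau                   # similarity of each pixel to each part
        assign = logits.softmax(dim=-1)                            # soft pixel-to-part assignment
        centers = F.normalize(assign.t() @ pixel_feats, dim=-1)    # re-estimate part embeddings
    return assign, centers

def composite(part_embeds, object_embeds, part_masks, tau=0.1):
    """Compositing: group part proposals into objects by part-to-object affinity."""
    affinity = (part_embeds @ object_embeds.t() / tau).softmax(dim=-1)  # (K, M)
    return part_masks @ affinity                                        # (HW, M) object masks

# Random stand-in features: HW pixels, K part slots, M object slots.
HW, D, K, M = 64 * 64, 32, 8, 3
g = torch.Generator().manual_seed(0)
pixels = F.normalize(torch.randn(HW, D, generator=g), dim=-1)
part_masks, part_embeds = cluster(pixels, F.normalize(torch.randn(K, D, generator=g), dim=-1))
object_masks = composite(part_embeds, F.normalize(torch.randn(M, D, generator=g), dim=-1), part_masks)
print(part_masks.shape, object_masks.shape)
```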
Bao_All_Are_Worth_Words_A_ViT_Backbone_for_Diffusion_Models_CVPR_2023
Abstract Vision transformers (ViT) have shown promise in various vision tasks while the U-Net based on a convolutional neural network (CNN) remains dominant in diffusion models. We design a simple and general ViT-based architecture (named U-ViT) for image generation with diffusion models. U-ViT is characterized by treating all inputs including the time, condition and noisy image patches as tokens and employing long skip connections between shallow and deep layers. We evaluate U-ViT in unconditional and class-conditional image generation, as well as text-to-image generation tasks, where U-ViT is comparable if not superior to a CNN-based U-Net of a similar size. In particular, latent diffusion models with U-ViT achieve record-breaking FID scores of 2.29 in class-conditional image generation on ImageNet 256x256, and 5.48 in text-to-image generation on MS-COCO, among methods without accessing large external datasets during the training of generative models. Our results suggest that, for diffusion-based image modeling, the long skip connection is crucial while the down-sampling and up-sampling operators in CNN-based U-Net are not always necessary. We believe that U-ViT can provide insights for future research on backbones in diffusion models and benefit generative modeling on large scale cross-modality datasets.
1. Introduction Diffusion models [24, 56, 61] are powerful deep generative models that have emerged recently for high-quality image generation [12, 25, 49]. They are growing rapidly and find applications in text-to-image generation [47, 49, 51], image-to-image generation [10, 42, 74], video generation [23, 27], speech synthesis [6, 33], and 3D synthesis [46]. Figure 1. The U-ViT architecture for diffusion models, which is characterized by treating all inputs, including the time, condition and noisy image patches, as tokens and employing (#Blocks-1)/2 long skip connections between shallow and deep layers. Along with the development of algorithms [2, 3, 14, 24, 32, 40, 41, 45, 57, 58, 61, 65], the revolution of backbones plays a central role in diffusion models. A representative example is the U-Net based on a convolutional neural network (CNN) employed in prior work [24, 59]. The CNN-based U-Net is characterized by a group of down-sampling blocks, a group of up-sampling blocks, and long skip connections between the two groups, and it dominates diffusion models for image generation tasks [12, 47, 49, 51]. On the other hand, vision transformers (ViT) [15] have shown promise in various vision tasks, where ViT is comparable or even superior to CNN-based approaches [9, 20, 35, 62, 75]. Therefore, a very natural question arises: is the reliance on the CNN-based U-Net necessary in diffusion models? In this paper, we design a simple and general ViT-based architecture called U-ViT (Figure 1). Following the design methodology of transformers, U-ViT treats all inputs, including the time, condition and noisy image patches, as tokens. Crucially, U-ViT employs long skip connections between shallow and deep layers, inspired by U-Net. Intuitively, low-level features are important to the pixel-level prediction objective in diffusion models, and such connections can ease the training of the corresponding prediction network. Besides, U-ViT optionally adds an extra 3x3 convolutional block before the output for better visual quality. See a systematic ablation study of all elements in Figure 2. We evaluate U-ViT on three popular tasks: unconditional image generation, class-conditional image generation, and text-to-image generation. In all settings, U-ViT is comparable if not superior to a CNN-based U-Net of a similar size. In particular, latent diffusion models with U-ViT achieve record-breaking FID scores of 2.29 in class-conditional image generation on ImageNet 256x256, and 5.48 in text-to-image generation on MS-COCO, among methods without accessing large external datasets during the training of generative models. Our results suggest that the long skip connection is crucial while the down/up-sampling operators in the CNN-based U-Net are not always necessary for image diffusion models. We believe that U-ViT can provide insights for future research on diffusion model backbones and benefit generative modeling on large-scale cross-modality datasets.
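A minimal PyTorch sketch of the two ingredients highlighted above (all inputs as tokens, plus long skip connections fused by concatenation and a linear layer) is given below. It is our own toy illustration, not the official U-ViT code, and it omits positional embeddings and the optional 3x3 convolution.

```python
import torch
import torch.nn as nn

class TinyUViT(nn.Module):
    """Toy U-ViT: all inputs as tokens + long skip connections (illustrative, not the official model)."""
    def __init__(self, dim=64, depth=5, num_classes=10):
        super().__init__()
        assert depth % 2 == 1
        self.patch_embed = nn.Linear(48, dim)              # e.g. flattened 4x4x3 patches
        self.time_embed = nn.Linear(1, dim)
        self.cond_embed = nn.Embedding(num_classes, dim)
        block = lambda: nn.TransformerEncoderLayer(dim, nhead=4, dim_feedforward=4 * dim,
                                                   batch_first=True)
        self.in_blocks = nn.ModuleList(block() for _ in range(depth // 2))
        self.mid_block = block()
        self.out_blocks = nn.ModuleList(block() for _ in range(depth // 2))
        self.skip_fuse = nn.ModuleList(nn.Linear(2 * dim, dim) for _ in range(depth // 2))
        self.head = nn.Linear(dim, 48)                     # predict noise per patch

    def forward(self, patches, t, y):
        tokens = torch.cat([self.time_embed(t[:, None]).unsqueeze(1),   # time as a token
                            self.cond_embed(y).unsqueeze(1),            # condition as a token
                            self.patch_embed(patches)], dim=1)          # noisy patches as tokens
        skips = []
        for blk in self.in_blocks:
            tokens = blk(tokens)
            skips.append(tokens)                           # remember shallow features
        tokens = self.mid_block(tokens)
        for blk, fuse in zip(self.out_blocks, self.skip_fuse):
            tokens = fuse(torch.cat([tokens, skips.pop()], dim=-1))  # long skip: concat + linear
            tokens = blk(tokens)
        return self.head(tokens[:, 2:])                    # drop the time/condition tokens

net = TinyUViT()
out = net(torch.randn(2, 16, 48), torch.rand(2), torch.randint(0, 10, (2,)))
print(out.shape)  # (2, 16, 48)
```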
An_ZBS_Zero-Shot_Background_Subtraction_via_Instance-Level_Background_Modeling_and_Foreground_CVPR_2023
Abstract Background subtraction (BGS) aims to extract all moving objects in the video frames to obtain binary foreground segmentation masks. Deep learning has been widely used in this field. Compared with supervised BGS methods, unsupervised methods have better generalization. However, previous unsupervised deep learning BGS algorithms perform poorly in sophisticated scenarios such as shadows or night lights, and they cannot detect objects outside the pre-defined categories. In this work, we propose an unsupervised BGS algorithm based on zero-shot object detection called Zero-shot Background Subtraction (ZBS). The proposed method fully utilizes the advantages of zero-shot object detection to build an open-vocabulary instance-level background model. Based on it, the foreground can be effectively extracted by comparing the detection results of new frames with the background model. ZBS performs well in sophisticated scenarios, and it has rich and extensible categories. Furthermore, our method can easily generalize to other tasks, such as abandoned object detection in unseen environments. We experimentally show that ZBS surpasses state-of-the-art unsupervised BGS methods by 4.70% F-Measure on the CDnet 2014 dataset. The code is released at https://github.com/CASIA-IVA-Lab/ZBS.
1. Introduction Background subtraction (BGS) is a fundamental task in computer vision applications [7], such as autonomous navigation, visual surveillance, human activity recognition, etc. [15]. BGS aims to extract all moving objects as foreground in each video frame and output binary segmentations. Figure 1. The performance of different BGS methods. Previous BGS methods based on pixel-level background models may misjudge noisy background as foreground objects, such as camera jitter, PTZ, and shadow. Our method, based on an instance-level background model, can obtain precise foreground edges, effectively reducing the confusion of background pixels as foreground objects. The most straightforward BGS algorithm is to directly compare the current frame with a "stationary" background image [7]. However, this strategy cannot handle complex scenarios, such as dynamic background, illumination changes, and shadows. Therefore, more sophisticated BGS techniques [7, 20, 24, 47] have been proposed in the past decades. The traditional methods improve performance in two aspects. The first is to design more robust feature representations, including color features [43], edge features [20], motion features [47], and texture features [12]. The second is to design more suitable background models, such as Gaussian mixture models [35], kernel density estimation models [14], CodeBook [21], ViBe [4], SuBSENSE [33], and PAWCS [34]. The traditional methods have relatively adequate generalization capacity since they are not optimized on specific scenarios or categories of objects. However, these methods only utilize hand-crafted features to determine whether each pixel belongs to the foreground. We call these methods pixel-level BGS since they use pixel-based or local-pixel-based background models. They are sensitive to natural variations such as lighting and weather. Over the years, deep learning-based BGS algorithms have been proposed, including supervised BGS and unsupervised BGS. Supervised BGS algorithms have achieved satisfactory performance on the CDnet 2014 benchmark [11, 24, 30, 40, 45]. However, these methods usually have to be trained on the first several frames of the test videos, which limits their application to unseen scenarios. Unsupervised algorithms overcome this shortcoming. Most of them combine semantic segmentation models with traditional BGS algorithms. These algorithms pre-select 12 categories as foreground from the 150 categories of semantic segmentation models [9]. Existing state-of-the-art unsupervised methods still detect night light and heavy shadows as foreground objects. As shown in Figure 1, it is difficult for pixel-level background models to accurately distinguish the edges of foreground objects. To tackle the above problems, we propose a novel background subtraction framework based on zero-shot object detection (ZBS). Zero-shot object detection, also named open-vocabulary object detection, aims to detect unseen objects outside of the pre-defined categories [48].
Figure 2 shows the framework of our method. The method includes all-instance detection, instance-level background modeling, and foreground instance selection. In the all-instance detection stage, any zero-shot detector can be used. We use a zero-shot object detection model named Detic [48] as the all-instance detector to transform the raw image pixels into structured instance representations, including categories, boxes, and masks. In the background modeling stage, our method builds an instance-level background model based on the motion information of instances. If an object is stationary, our algorithm adds it to the background model. In the foreground instance selection stage, the proposed algorithm selects from the output of the all-instance detector when a new frame arrives. If an instance complies with Rule 2 in Figure 2 (c), it is foreground in the final binary mask. Benefiting from the full use of instance information, our instance-level BGS method performs better in complex scenarios, such as shadows, camera jitter, night scenes, etc. ZBS rarely misjudges noisy background as foreground objects. Due to the characteristics of the detector, the proposed method can detect most categories in the real world and can detect unseen foreground categories outside the pre-defined categories. ZBS achieves a remarkable 4.70% F-Measure improvement over state-of-the-art unsupervised methods. Our main contributions are listed as follows: • We propose a novel background subtraction framework that has an instance-level background model; • The proposed framework uses a zero-shot object detection model to obtain a more general and generalizable deep learning-based unsupervised BGS algorithm; • Our method achieves state-of-the-art performance among all unsupervised BGS methods on the CDnet 2014 dataset.
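The foreground instance selection step can be pictured with the following toy Python sketch. It is our own simplification (a category-and-IoU match against stored stationary instances), not the exact rules or released code of ZBS.

```python
def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def select_foreground(detections, background_model, iou_thr=0.7):
    """Toy stand-in for ZBS's foreground instance selection (our sketch, not the paper's exact rules).

    detections      : list of dicts {"category": str, "box": [x1, y1, x2, y2]}
    background_model: list of dicts with the same fields for instances judged stationary earlier.
    A detection is foreground unless a background instance of the same category overlaps it strongly.
    """
    foreground = []
    for det in detections:
        matched = any(bg["category"] == det["category"] and iou(bg["box"], det["box"]) > iou_thr
                      for bg in background_model)
        if not matched:
            foreground.append(det)
    return foreground

bg = [{"category": "car", "box": [10, 10, 50, 40]}]
dets = [{"category": "car", "box": [11, 10, 51, 41]},     # parked car already in the model -> background
        {"category": "person", "box": [60, 20, 80, 70]}]  # new instance -> foreground
print([d["category"] for d in select_foreground(dets, bg)])  # ['person']
```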
Bhunia_Sketch2Saliency_Learning_To_Detect_Salient_Objects_From_Human_Drawings_CVPR_2023
Abstract Human sketch has already proved its worth in various visual understanding tasks (e.g., retrieval, segmentation, image-captioning, etc.). In this paper, we reveal a new trait of sketches – that they are also salient. This is intuitive, as sketching is a natural attentive process at its core. More specifically, we aim to study how sketches can be used as a weak label to detect salient objects present in an image. To this end, we propose a novel method that emphasises how a "salient object" could be explained by hand-drawn sketches. To accomplish this, we introduce a photo-to-sketch generation model that aims to generate sequential sketch coordinates corresponding to a given visual photo through a 2D attention mechanism. Attention maps accumulated across the time steps give rise to salient regions in the process. Extensive quantitative and qualitative experiments prove our hypothesis and delineate how our sketch-based saliency detection model gives a competitive performance compared to the state-of-the-art.
1. Introduction As any reasonable drawing lesson would have taught you – sketching is an attentive process [24]. This paper sets out to prove just that, but in the context of computer vision. In particular, we show that the attention information inherently embedded in sketches can be cultivated to learn image saliency [30, 66, 82, 87]. Sketch research has flourished in the past decade, particularly with the proliferation of touchscreen devices. Much of the utilisation of sketch has anchored around its human-centric nature, in that it naturally carries personal style [6, 54], subjective abstraction [7, 53], and human creativity [20], to name a few. Here we study a sketch-specific trait that has been ignored to date – sketch is also salient. The human visual system has evolved over millions of years to develop the ability to attend [22, 79]. This attentive process is ubiquitously reflected in language (i.e., how we describe visual concepts) and art (i.e., how artists attend to different visual aspects). Figure 1. Sequential photo-to-sketch generation with 2D attention to leverage sketch as a weak label for salient object detection. Aggregated 2D attention maps up to a particular instant are shown. The vision community has also invested significant effort to model this attentive process, in the form of saliency detection [47, 67, 76, 80, 82]. The paradox facing the saliency community is however that the attention information has never been present in photos to start with – photos are mere collections of static pixels. It then comes as no surprise that most prior research has resorted to a large amount of pixel-level annotation. Although fully-supervised frameworks [10, 34, 38, 87] have been shown to produce near-perfect saliency maps, their widespread adoption is largely bottlenecked by this need for annotation. To deal with this issue, a plethora of semi/weakly-supervised methods have been introduced, which attempt to use captions [82], class-labels [44], foreground masks [66], class activation maps (CAM) [69], bounding boxes [41], and scribbles [85] as weak labels. We follow this push to utilise labels, but importantly introduce sketch to the mix, and show it is a competitive label modality because of the inherently embedded attentive information it possesses. Utilising sketch as a weak label for saliency detection is nonetheless non-trivial. Sketch, being primarily abstract and sequential [22] in nature, exhibits a significant modality gap with photos. Therefore, we seek to build a framework that can connect the sketch and photo domains via some auxiliary task. For that, we take inspiration from the actual artistic sketching process, where artists [88] attend to certain regions on an object, then render down the strokes on paper. We thus propose photo-to-sketch generation, where given a photo we aim to generate a sketch stroke-by-stroke, as an auxiliary task to bridge the two domains. However effective in bridging the domain gap, this generation process by default does not generate pixel-wise importance values depicting a saliency map. To circumvent this problem, we make clever use of a cross-modal 2D
attention module inside the sketch decoding process, which naturally predicts a local saliency map at each stroke – exactly akin to how artists refer back to the object before rendering the next stroke. More specifically, the proposed photo-to-sketch generator is an encoder-decoder model that takes an RGB photo as input and produces a sequential sketch. The model is augmented with a 2D attention mechanism that importantly allows the model to focus on visually salient regions of the photo associated with each stroke during sketch generation. In doing so, the attention maps accumulated over the time steps of sequential sketch generation indicate the regions of the photo that were of utmost importance. See Fig. 1 for an illustration. To further address the domain gap in supervision between pixel-wise annotation and sketch labels, we propose an additional equivariance loss to gain robustness towards perspective deformations [28], thus improving overall performance. In our experiments we first report the performance of saliency maps directly predicted by the network, without the ad hoc post-processing that is commonplace in the literature [82]. This is to spell out the true effect of using sketch for saliency detection, which is our main contribution. To further evaluate its competitiveness against other state-of-the-art methods, we also plug our photo-to-sketch decoder in place of the image-captioning branch of a multi-source weak supervision framework that uses class-labels and text descriptions for saliency learning [82]. We train our network on the Sketchy dataset [56], consisting of photo-sketch pairs. It is worth noting that the training data was not intentionally collected with saliency detection in mind; rather, the annotators were asked to draw a sketch that depicts the photo shown to them. This again strengthens our argument that sketch implicitly encodes saliency information, which encouraged us to use it as a weak label in the first place. In summary, our major contributions are: (a) We for the first time demonstrate the success of using sketch as a weak label in salient object detection. (b) To this end, we make clever use of a sequential photo-to-sketch generation framework involving an auto-regressive decoder with 2D attention for saliency detection. (c) Comprehensive quantitative and ablative experiments show that our method offers a significant performance gain over weakly-supervised state-of-the-art methods.
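The accumulation of per-stroke attention into a saliency map reduces to a simple sum over time steps; the snippet below illustrates this with random stand-in attention maps rather than the trained decoder.

```python
import numpy as np

def accumulate_attention(attn_maps, normalize=True):
    """Sum per-time-step 2D attention maps into one saliency map.

    attn_maps: (T, H, W) attention over photo features, one map per decoded stroke point.
    """
    saliency = attn_maps.sum(axis=0)
    if normalize:
        saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-9)
    return saliency

# Random stand-in for the attention produced while generating T sketch points.
rng = np.random.default_rng(0)
T, H, W = 50, 14, 14
attn = rng.random((T, H, W))
attn /= attn.sum(axis=(1, 2), keepdims=True)  # each step's attention sums to 1
saliency = accumulate_attention(attn)
print(saliency.shape, float(saliency.min()), float(saliency.max()))
```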
Gao_ULIP_Learning_a_Unified_Representation_of_Language_Images_and_Point_CVPR_2023
Abstract The recognition capabilities of current state-of-the-art 3D models are limited by datasets with a small amount of annotated data and a pre-defined set of categories. In the 2D counterpart, recent advances have shown that similar problems can be significantly alleviated by employing knowledge from other modalities, such as language. Inspired by this, leveraging multimodal information for the 3D modality could be promising for improving 3D understanding under the restricted data regime, but this line of research is not well studied. Therefore, we introduce ULIP to learn a unified representation of image, text, and 3D point cloud by pre-training with object triplets from the three modalities. To overcome the shortage of training triplets, ULIP leverages a pre-trained vision-language model that has already learned a common visual and textual space by training with massive image-text pairs. Then, ULIP learns a 3D representation space aligned with the common image-text space, using a small number of automatically synthesized triplets. ULIP is agnostic to 3D backbone networks and can easily be integrated into any 3D architecture. Experiments show that ULIP effectively improves the performance of multiple recent 3D backbones by simply pre-training them on ShapeNet55 using our framework, achieving state-of-the-art performance in both standard 3D classification and zero-shot 3D classification on ModelNet40 and ScanObjectNN. ULIP also improves the performance of PointMLP by around 3% in 3D classification on ScanObjectNN, and outperforms PointCLIP by 28.8% on top-1 accuracy for zero-shot 3D classification on ModelNet40. Our code and pre-trained models will be released. 1. Introduction Figure 1. Illustration of ULIP. ULIP improves 3D understanding by aligning features from image, text and point cloud in the same space. To reduce the demand for 3D data, ULIP leverages image and text encoders that are pre-trained with large-scale image-text pairs, and aligns the 3D representation to the pre-aligned image-text feature space using a small scale of training triplets. 3D visual understanding research [6, 12, 16, 17, 24, 25] is drawing significant attention in recent years due to the increasing demand of real-world applications such as augmented/virtual reality [1, 27, 31, 46], autonomous driving [23, 57] and robotics [2, 48]. However, compared to their 2D counterpart, 3D visual recognition research is still limited by datasets with a small number of samples and a small set of pre-determined categories [45, 51]. For example, ShapeNet55 [3], one of the largest publicly available 3D datasets, only contains around 52.5k samples of 3D objects with 55 category labels. That is in contrast to the 2D domain, where ImageNet [7] contains millions of images that cover thousands of categories. This scale limit of 3D data, caused by the high cost of 3D data collection and annotation [3, 11, 51, 58], has been hindering the generalization of 3D recognition models and their real-world applications.
To tackle the shortage of annotated data, existing work in other domains shows that employing knowledge from different modalities can significantly help concept understanding in the original modality [39, 53]. Among such work, CLIP [39] pioneered alignment between visual and textual features by pre-training on large-scale image-text pairs. It improves state-of-the-art visual concept recognition and enables zero-shot classification of unseen objects. However, multimodal learning that involves the 3D modality, and whether it can help 3D recognition tasks, are still not well studied. In this paper, we propose Learning a Unified Representation of Language, Images, and Point Clouds (ULIP). An illustration of our framework is shown in Figure 1. Obtaining a unified representation space of all three modalities requires large-scale triplets of image, text, and point cloud as training data. However, such triplets remain hard to collect compared to the large-scale image-text pairs available. To circumvent the lack of triplet data, we take advantage of a vision-language model pretrained on massive image-text pairs, and align the feature space of a 3D point cloud encoder to the pre-aligned vision/language feature space. When training the 3D encoder for space alignment, we use a small number of automatically synthesized triplets from ShapeNet55 [3] without requiring manual annotations. Making use of a pretrained vision-language model lets us leverage the abundant semantics captured in the image-text feature space for 3D understanding. Our framework uses CLIP as the vision and language model because of its excellent generalization performance. During pre-training, we keep the CLIP model frozen and train the 3D encoder by aligning the 3D feature of an object with its corresponding textual and visual features from CLIP using contrastive learning. The pre-trained 3D backbone model can be further fine-tuned for different downstream tasks. ULIP has three major advantages. First, ULIP can substantially improve the recognition ability of 3D backbone models. Second, ULIP is agnostic to the architecture of 3D models; therefore, we can easily plug in any 3D backbone and improve it with ULIP. Third, aligning three modalities in the same feature space can potentially enable more cross-domain downstream tasks, including zero-shot 3D classification and image-to-3D retrieval. We quantitatively evaluate ULIP on two fundamental 3D tasks: standard 3D classification and zero-shot 3D classification. We experiment with recent 3D networks including PointNet++ [36], PointMLP [29] and PointBERT [58]. Experimental results show that ULIP achieves state-of-the-art (SOTA) performance for both standard 3D classification and zero-shot 3D classification on ModelNet40 and ScanObjectNN. Specifically, ULIP surpasses PointMLP by around 3% in standard 3D classification on ScanObjectNN [45]. ULIP also outperforms PointCLIP [59] (the previous SOTA) by around 28.8% top-1 accuracy in zero-shot 3D classification on ModelNet40. Moreover, we showcase the potential of applying ULIP to the image-to-point-cloud retrieval task. Qualitative evaluations demonstrate its promising potential for cross-modal applications. 2. Related Work Multi-modal Representation Learning. Most existing multimodal approaches concern the image and text modalities. Among these methods, one line of research focuses on learning the interaction between image regions and caption words [4, 18, 19, 21, 28, 43] using transformer-based architectures.
These methods show great predictive capability while being costly to train. The other line of research, such as CLIP [39], uses image and text encoders to output a single image/text representation for each image-text pair, and then aligns the representations from both modalities. This simple architecture makes training with massive noisy web data efficient, facilitating its zero-shot generalization capability. The success of CLIP has promoted many image-text related research directions, including text-based image manipulation [34], open-vocabulary object detection [10, 13] and language grounding [20]. Some recent works explore how multi-modal information can help 3D understanding and show promising results [5, 56]. The method most related to our work is PointCLIP [59]. It first converts the 3D point cloud into a set of depth maps and then leverages CLIP directly for zero-shot 3D classification. Unlike PointCLIP, which reshapes the task of point cloud and text matching into image and text alignment, our method learns a unified representation among image, text, and point cloud that substantially improves 3D understanding. 3D Point Cloud Understanding. There are mainly two lines of research for point cloud modeling. One is projecting a point cloud into 3D voxels [30, 42] and then using 2D/3D convolutions for feature extraction. PointNet [35] explores ingesting 3D point clouds directly. It extracts permutation-invariant features from the point cloud, which has significantly influenced point-based 3D networks. PointNet++ [36] proposes a hierarchical neural network that extracts local features with increasing contextual scales. Recently, PointMLP [29] proposes a pure residual MLP network and achieves competitive results without integrating sophisticated local geometric extractors. Moreover, self-supervised learning for 3D point clouds has also shown promising performance in the 3D understanding field. PointBERT [58] adapts masked language modeling from BERT [8] to the 3D field, where it tokenizes 3D patches using an external model, randomly masks out 3D tokens, and predicts them back during pre-training. A more recent work, PointMAE [33], operates directly on the point cloud by masking out 3D patches and predicting them back using an L2 loss. Our method is orthogonal to the above 3D encoders. Their performance on 3D recognition can potentially be improved by ULIP with no or minor modification. 3. Learning Unified Representation of Language, Image and Point Cloud ULIP learns a unified representation space of language, images, and 3D point clouds via pre-training on triplets from these three modalities. In this section, we first introduce how we create such triplets for pre-training. Then, we present our pre-training framework. 3.1. Creating Training Triplets for ULIP We build our dataset of triplets from ShapeNet55 [3], which is one of the most extensive public 3D CAD datasets. ShapeNet55 is the publicly-available subset of ShapeNet. It contains around 52.5K CAD models, each of which is associated with metadata that textually describes the semantic information of the CAD model. For each CAD model $i$ in the dataset, we create a triplet $T_i: (I_i, S_i, P_i)$ of image $I_i$, text description $S_i$ and point cloud $P_i$. ULIP then uses these triplets for pre-training. Point Cloud Generation. We directly use the generated point cloud of each CAD model in ShapeNet55. We uniformly sample $N_p$ points from the original point cloud.
During pre-training, standard 3D point cloud data augmentation techniques are applied, including random point dropout, random scaling, random shifting, and rotation perturbation. Then a 3D encoder takes
the augmented point cloud $P_i$ as input and outputs its 3D representation $h_i^P$ via $h_i^P = f_P(P_i)$ (1), where $f_P(\cdot)$ represents the 3D backbone encoder. Multi-view Image Rendering. ShapeNet55 CAD models do not come with images. To obtain images that semantically align well with each CAD model, we synthesize multi-view images of each CAD model by placing virtual cameras around each object and rendering the corresponding RGB images and depth maps from each viewpoint (we utilize https://github.com/panmari/stanford-shapenet-renderer with its default settings in practice). Specifically, we render an RGB image with a depth map every 12 degrees. Therefore, we get 30 RGB images and 30 depth maps for each object, 60 image candidates in total. During each iteration of pre-training, we randomly select one image or depth map from each CAD model's 60 rendered candidates as $I_i$ and take $I_i$ as input of the image encoder $f_I(\cdot)$ to extract the image feature $h_i^I$, $h_i^I = f_I(I_i)$ (2). Text Generation. We leverage the metadata that comes with each CAD model as the corresponding text description. The metadata includes a synset of taxonomy as a textual description of each CAD model. For each word in the metadata, we adopt simple prompts to construct meaningful sentences that will be utilized during pre-training. We follow prior works [10, 13] that use 63 prompts such as "a picture of [WORD]" in image-text pre-training tasks and additionally add a dedicated prompt "a point cloud model of [WORD]" to accommodate the 3D modality. In each training iteration, we randomly choose a word from the metadata and apply the 64 templates to the word to build a set of text descriptions $S_i$. Then we input $S_i$ into our text encoder $f_S(\cdot)$ and obtain a set of representations. Finally, we conduct average pooling over the set of outputs as the text-domain representation $h_i^S$ of object $i$, $h_i^S = \mathrm{Avg}(f_S(S_i))$ (3). 3.2. Aligning Representations of Three Modalities With the created triplets of image, text, and point cloud, ULIP conducts pre-training to align representations of all three modalities into the same feature space. Specifically, we take advantage of pre-trained vision-language models, i.e., CLIP, and train a 3D encoder by aligning the 3D feature with the features of the image and text encoders ($f_I(\cdot)$ and $f_S(\cdot)$) of CLIP. By doing so, we hope that the abundant semantics already captured and aligned by CLIP's encoders can be employed for better 3D understanding. The resulting unified feature space enables numerous cross-modal applications among these three modalities and potentially improves the 3D recognition performance of the underlying 3D backbone encoder $f_P(\cdot)$. Figure 2. Illustration of our method. The inputs of multimodal pre-training (Left) are a batch of objects represented as triplets (image, text, point cloud). Image and text features are extracted from a pre-trained (frozen) vision and language model such as CLIP, and 3D features are extracted from a 3D encoder. Contrastive losses are applied to align the 3D feature of an object to its image and text features during pre-training. The pre-trained 3D encoders are further fine-tuned in downstream tasks, including standard 3D classification (Top Right) and zero-shot 3D classification (Bottom Right). Cross-modal Contrastive Learning. As shown in Figure 2, for an object $i$, features $h_i^I$, $h_i^S$ and $h_i^P$ are extracted from the image, text, and 3D point cloud encoders. Then the contrastive loss between each pair of modalities is computed as follows, $\mathcal{L}_{(M_1,M_2)} = \sum_{(i,j)} -\frac{1}{2}\log\frac{\exp(h_i^{M_1} h_j^{M_2}/\tau)}{\sum_k \exp(h_i^{M_1} h_k^{M_2}/\tau)} - \frac{1}{2}\log\frac{\exp(h_i^{M_1} h_j^{M_2}/\tau)}{\sum_k \exp(h_k^{M_1} h_j^{M_2}/\tau)}$, (4)
where $M_1$ and $M_2$ represent two modalities and $(i, j)$ indicates a positive pair in each training batch. We also use a learnable temperature parameter $\tau$, similar to CLIP [39]. Finally, we minimize $\mathcal{L}_{(M_1,M_2)}$ for all modality pairs with different coefficients, $\mathcal{L}_{final} = \alpha \mathcal{L}_{(I,S)} + \beta \mathcal{L}_{(I,P)} + \theta \mathcal{L}_{(P,S)}$ (5). By default, $\alpha$ is set to the constant 0, and $\beta$ and $\theta$ are both set to 1, because during pre-training we find that if we update CLIP's image and text encoders, catastrophic forgetting emerges due to our limited data size. This leads to a significant performance drop when applying ULIP to downstream tasks. Therefore we freeze the weights of $f_S(\cdot)$ and $f_I(\cdot)$ during the entire pre-training and only update $f_P(\cdot)$ with $\mathcal{L}_{final}$.
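As a concrete reading of Eqs. (4)-(5), the following PyTorch sketch (written for illustration, not taken from the released ULIP code) expresses the pairwise term as a symmetric cross-entropy over the in-batch similarity matrix and combines the three modality pairs with the coefficients used above.

```python
import torch
import torch.nn.functional as F

def pairwise_contrastive(h1, h2, tau=0.07):
    """Eq. (4)-style symmetric objective: positives are the matching (i, i) pairs in the batch."""
    logits = h1 @ h2.t() / tau                       # (B, B) similarity matrix between two modalities
    targets = torch.arange(h1.size(0), device=h1.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

def ulip_loss(h_img, h_txt, h_pc, alpha=0.0, beta=1.0, theta=1.0):
    """Eq. (5): weighted sum over the three modality pairs (alpha=0 matches the paper's default,
    since the frozen CLIP image/text encoders receive no gradient anyway)."""
    return (alpha * pairwise_contrastive(h_img, h_txt)
            + beta * pairwise_contrastive(h_img, h_pc)
            + theta * pairwise_contrastive(h_pc, h_txt))

# Random stand-ins for one pre-training batch of triplets.
B, D = 8, 512
g = torch.Generator().manual_seed(0)
h_img = F.normalize(torch.randn(B, D, generator=g), dim=-1)  # frozen CLIP image features
h_txt = F.normalize(torch.randn(B, D, generator=g), dim=-1)  # frozen CLIP text features (prompt-averaged)
h_pc = F.normalize(torch.randn(B, D, generator=g), dim=-1)   # trainable 3D encoder features
print(ulip_loss(h_img, h_txt, h_pc).item())
```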
4. Experiments To demonstrate the benefits of pre-training 3D backbone networks using ULIP, we conduct experiments on two 3D tasks: a standard 3D classification task that involves a single modality and a zero-shot 3D classification task that involves multimodal inputs. In this section, we first present the experimental settings, including the 3D backbones we experiment with, the downstream datasets, and the implementation details. Then we present the quantitative results of standard 3D classification and zero-shot 3D classification, respectively. Lastly, we include analyses of our model and show results on cross-modal retrieval. 4.1. 3D Backbone Networks We experiment with the following 3D backbone networks under our framework. PointNet++ [36] is an advanced version of PointNet [35]. It uses a hierarchical structure to better capture the local geometry of the point cloud, and has become the cornerstone of many point cloud applications. PointBERT [58] utilizes a transformer architecture for point cloud feature extraction. It improves its recognition ability by conducting self-supervised pre-training on ShapeNet55. PointMLP [29] is the SOTA method on the standard 3D classification task. It uses a residual MLP network with a lightweight geometric affine module to better capture local geometric features. 4.2. Downstream Datasets We use the following two datasets for both standard and zero-shot 3D classification. ModelNet40 is a synthetic dataset of 3D CAD models. It contains 9,843 training samples and 2,468 testing samples, covering 40 categories (for each CAD model, we utilize the preprocessed point cloud from [29]).

Table 1. 3D classification results on ScanObjectNN. ULIP significantly improves our baselines; our best result outperforms the SOTA by around 3% Overall Acc. † indicates a model using 2K sampled points; all others use 1K sampled points.

| Model | Overall Acc | Class-mean Acc |
|---|---|---|
| PointNet [35] | 68.2 | 63.4 |
| PointNet++ [36] | 77.9 | 75.4 |
| DGCNN [49] | 78.1 | 73.6 |
| MVTN [15] | 82.8 | – |
| PointBERT [58] | 83.1 | – |
| RepSurf-U [40] | 84.6 | – |
| PointMAE [33] | 85.2 | – |
| RepSurf-U (2x) [40] | 86.0 | – |
| PointBERT [58] | 83.1 | – |
| PointBERT + ULIP | 86.4 (↑3.3) | – |
| PointMLP [29] | 85.7 | 84.4 |
| PointMLP + ULIP | 88.8 (↑3.1) | 87.8 (↑3.4) |
| PointMLP† | 86.5 | 85.1 |
| PointMLP† + ULIP | 89.4 (↑2.9) | 88.5 (↑3.4) |

ScanObjectNN is a dataset of scanned 3D objects from the real world. It contains 2,902 objects that are categorized into 15 categories. It has three variants: OBJ_ONLY includes ground-truth segmented objects extracted from the scene mesh datasets; OBJ_BG has objects with attached background noise; and Hardest introduces perturbations such as translation, rotation, and scaling to the dataset [45] (we use the variants provided by [58] in our experiments). 4.3. Implementation Details Pre-training. For the 3D input, we uniformly sample $N_p$ = 1024, 2048, or 8192 points to accommodate the requirements of different backbones. The inputs of the image and text modalities are generated as described in Section 3.1. During pre-training, we utilize an advanced version of CLIP, namely SLIP [32], which shows superior performance, as our image-text encoders. As mentioned in Section 3.2, we freeze the image and text encoders and only update the 3D encoder's parameters during pre-training. ULIP is trained for 250 epochs. We use a batch size of 64, a learning rate of $10^{-3}$, and the AdamW optimizer. Standard 3D Classification. On ModelNet40, we use a learning rate of 0.00015 and fine-tune our model for 200 epochs with a batch size of 24 for PointNet++. For PointMLP, we set the learning rate to 0.1 and fine-tune the model for 300 epochs with a batch size of 32. On ScanObjectNN, we use a learning rate of 0.03 and fine-tune for 350 epochs with a batch size of 32 for PointMLP. For PointBERT, we use a learning rate of 0.0002 and fine-tune for 300 epochs with a batch size of 32. Zero-Shot 3D Classification. Following [59], zero-shot 3D classification is conducted by measuring the distances between the 3D features of an object and the text features of the category candidates. The category with the smallest distance is selected as the predicted category, as shown in Figure 2. We use our pre-trained models as they are when performing zero-shot classification; there is no fine-tuning stage involved. We keep using the same prompt strategy as during pre-training when constructing text features for each category candidate in this task. All of our experiments are conducted using PyTorch. Pre-training and fine-tuning experiments use 8 and 1 A100 GPUs, respectively. 4.4. Standard 3D Classification We demonstrate the effectiveness of ULIP by improving different 3D classification baselines. We follow the original settings of the baselines in our experiments. When applying ULIP, the only difference is that we pre-train the 3D networks under our framework before fine-tuning them with the labeled point clouds. Since the structure of the 3D backbone is unchanged, our framework does not introduce extra latency during inference. For all experiments, we follow the community practice of using OA (Overall Accuracy) and mAcc (Class Average Accuracy) as our evaluation metrics.
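The zero-shot protocol described in the implementation details above amounts to a nearest-text-feature lookup; the sketch below shows that reduction with random stand-in features in place of the real encoder outputs.

```python
import torch
import torch.nn.functional as F

def zero_shot_classify(point_feats, class_text_feats):
    """Pick, for each 3D feature, the category whose prompt-averaged text feature is closest.

    point_feats     : (N, D) features from the pre-trained 3D encoder
    class_text_feats: (C, D) per-category text features (averaged over the prompt templates)
    """
    sims = F.normalize(point_feats, dim=-1) @ F.normalize(class_text_feats, dim=-1).t()
    return sims.argmax(dim=-1)   # smallest distance == largest cosine similarity

# Random stand-ins; real features would come from the frozen text encoder and the 3D encoder.
N, C, D = 5, 40, 512
g = torch.Generator().manual_seed(0)
print(zero_shot_classify(torch.randn(N, D, generator=g), torch.randn(C, D, generator=g)))
```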
Experimental Results. We present the standard 3D classification performance of our baselines and our methods on ScanObjectNN in Table 1. As shown, the performance of our baselines is significantly improved by ULIP. Specifically, our framework improves PointBERT and PointMLP by around 3%. When we apply ULIP to the strongest backbone, PointMLP, ULIP + PointMLP† achieves a new SOTA performance and outperforms the previous SOTA by around 3% in overall accuracy.
Bai_Sliced_Optimal_Partial_Transport_CVPR_2023
Abstract

Optimal transport (OT) has become exceedingly popular in machine learning, data science, and computer vision. The core assumption in the OT problem is the equal total amount of mass in source and target measures, which limits its application. Optimal Partial Transport (OPT) is a recently proposed solution to this limitation. Similar to the OT problem, the computation of OPT relies on solving a linear programming problem (often in high dimensions), which can become computationally prohibitive. In this paper, we propose an efficient algorithm for calculating the OPT problem between two non-negative measures in one dimension. Next, following the idea of sliced OT distances, we utilize slicing to define the Sliced OPT distance. Finally, we demonstrate the computational and accuracy benefits of the Sliced OPT-based method in various numerical experiments. In particular, we show an application of our proposed Sliced OPT problem in noisy point cloud registration and color adaptation. Our code is available at Github Link.
1. Introduction

The Optimal Transport (OT) problem studies how to find the most cost-efficient way to transport one probability measure to another, and it gives rise to popular probability metrics like the Wasserstein distance. OT has attracted abundant attention in data science, statistics, machine learning, signal processing, and computer vision [1, 12, 13, 21, 24, 31, 37, 39, 47, 49].

A core assumption in the OT problem is the equal total amount of mass in the source and target measures (e.g., probability measures). Many practical problems, however, deal with comparing non-negative measures with varying total amounts of mass, e.g., shape analysis [9, 46], domain adaptation [17], and color transfer [10]. In addition, OT distances are often not robust to outliers and noise, as transporting outliers could be prohibitively expensive and might compromise the distance estimation. To address these issues, many variants of the OT problem have been recently proposed, for example, the optimal partial transport (OPT) problem [6, 18, 19], the Hellinger–Kantorovich distance [9, 36], unnormalized optimal transport [22], and the Kantorovich–Rubinstein norm [25, 35]. These variants were subsequently unified under the name "unbalanced optimal transport" [11, 36].

The computational complexity of linear programming for balanced and partial OT problems is often a bottleneck for solving large-scale problems. Different approaches have been developed to address this issue. For instance, with entropic regularization, the problem becomes strictly convex and can be solved with the celebrated Sinkhorn–Knopp algorithm [14, 45], which has been extended to the unbalanced setting [10]. This approach can still be computationally expensive for small regularization levels. Other strategies exploit specific properties of ground costs. For example, if the ground cost is determined by the unique path on a tree, the problem can be efficiently solved in the balanced [33, 41] and the unbalanced setting [44]. In particular, balanced 1-dimensional transport problems with convex ground costs can be solved by the northwest corner rule, which essentially amounts to sorting the support points of the two input measures. Based on this, another popular method is the sliced OT approach [5, 30, 43], which assumes the ground cost is consistent with the Euclidean distance (in 1-dimensional space). Furthermore, it has been shown [30, 32, 44] that the OT distance in Euclidean space can be approximated by the OT distance in 1-dimensional Euclidean space.

Inspired by these works, in this paper we propose the sliced version of OPT and an efficient computational algorithm for empirical distributions with uniform weights, i.e., measures of the form Σ_{i=1}^{n} δ_{x_i}, where δ_x is the Dirac measure. Our contributions in this paper can be summarized as follows:

• We propose a primal-dual algorithm for 1-dimensional OPT with a quadratic worst-case time complexity and linear or quadratic complexity in practice.
• In d-dimensional space, we propose the Sliced-OPT (SOPT) distance. Similar to the sliced OT distance, we prove that it satisfies the metric axioms and propose a computational method based on our 1-dimensional OPT problem solver.
• We demonstrate an application of SOPT in point cloud registration by proposing a SOPT variant of the iterative closest point (ICP) algorithm. Our approach is robust against noise. Also, we apply SOPT to a color adaptation problem (see the supplementary material).
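The slicing construction in the contributions above reduces the d-dimensional problem to many 1-D problems on random projections. A minimal sketch of that outer loop is shown here; `solve_1d` stands in for the paper's primal-dual 1-D OPT solver, and the default shown (plain sorted, balanced 1-D OT on equally sized inputs) is only a placeholder assumption to keep the snippet runnable.

```python
import numpy as np

def sliced_transport_cost(x, y, n_projections=64, solve_1d=None, seed=0):
    """Monte-Carlo sliced transport cost between point clouds x (n, d) and y (m, d).

    Each random unit direction theta yields 1-D projections of x and y; a 1-D
    solver computes the transport cost on that slice, and the results are
    averaged over all directions.
    """
    if solve_1d is None:
        def solve_1d(a, b):
            # Balanced 1-D OT via sorting; a stand-in for the paper's 1-D OPT solver.
            assert len(a) == len(b), "default stand-in handles equal sizes only"
            return np.abs(np.sort(a) - np.sort(b)).sum()

    rng = np.random.default_rng(seed)
    d = x.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)            # random direction on the unit sphere
        total += solve_1d(x @ theta, y @ theta)   # 1-D problem on this slice
    return total / n_projections
```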
Huang_Siamese_DETR_CVPR_2023
Abstract

Recent self-supervised methods are mainly designed for representation learning with the base model, e.g., ResNets or ViTs. They cannot be easily transferred to DETR, with its task-specific Transformer modules. In this work, we present Siamese DETR, a Siamese self-supervised pretraining approach for the Transformer architecture in DETR. We consider learning view-invariant and detection-oriented representations simultaneously through two complementary tasks, i.e., localization and discrimination, in a novel multi-view learning framework. Two self-supervised pretext tasks are designed: (i) Multi-View Region Detection aims at learning to localize regions-of-interest between augmented views of the input, and (ii) Multi-View Semantic Discrimination attempts to improve object-level discrimination for each region. The proposed Siamese DETR achieves state-of-the-art transfer performance on COCO and PASCAL VOC detection using different DETR variants in all setups. Code is available at https://github.com/Zx55/SiameseDETR.
1. Introduction

Object detection with Transformers (DETR) [3] combines convolutional neural networks (CNNs) and Transformer-based encoder-decoders, viewing object detection as an end-to-end set prediction problem. Despite its impressive performance, DETR and its variants still rely on large-scale, high-quality training data. It generally requires huge cost and effort to collect such massive well-annotated datasets, which can be prohibitive in some privacy-sensitive applications such as medical imaging and video surveillance.

Figure 1. Comparison between single-view and multi-view detection pretraining for DETR. (a) The single-view framework, e.g., UP-DETR [9] and DETReg [1], performs self-supervised representation learning using unsupervised objectives generated on the single view, e.g., random patches (UP-DETR) or pseudo labels (DETReg), leading to a small information gain during pretraining. (b) The proposed multi-view Siamese DETR for DETR pretraining. Here, b̂ and p̂ denote box and semantic predictions; q, k and v denote query, key and value in DETR, respectively.

Recent progress in multi-view self-supervised representation learning [4, 6–8, 14, 15, 21] can potentially alleviate the appetite for labeled data in training DETR for object detection. However, these self-supervised learning approaches mainly focus on learning generalizable representations with base models, such as ResNets [17] and ViTs [11]. It is unclear how these approaches can be effectively extended to DETR with task-specific Transformer modules that are tailored for end-to-end object detection.

Designing self-supervised pretext tasks for pretraining the Transformers in DETR is a challenging and practical problem, demanding representations that could benefit object detection beyond just learning generic representations. Several attempts have been made to address this issue. For example, UP-DETR [9] introduces an unsupervised pretext task based on random query patch detection, predicting bounding boxes of randomly cropped query patches in the given image. The recent DETReg [1] employs a pre-trained SwAV [5] and offline Selective Search proposals [28] to provide pseudo labels for DETR pretraining. In general, both UP-DETR and DETReg follow a single-view pretraining paradigm (see Figure 1 (a)), without exploring the ability to learn view-invariant representations demonstrated in existing multi-view self-supervised approaches.

In this work, we are interested in investigating the effectiveness of multi-view self-supervised learning for DETR pretraining. Different from the conventional multi-view framework [5, 6, 15], we combine the Siamese network with the cross-attention mechanism in DETR, presenting a Siamese self-supervised pretraining approach, named Siamese DETR, with two proposed self-supervised pretext tasks dedicated to view-invariant detection pretraining. Specifically, given each unlabeled image, we follow [1, 31] to obtain the offline object proposals and generate two augmented views guided by Intersection over Union (IoU) thresholds.
As illustrated in Figure 1 (b), by directly locating the query regions between augmented views and maximizing the discriminative information at both global and regional levels, Siamese DETR can learn view-invariant representations with localization and discrimination that are aligned with downstream object detection tasks during pre-training. Our contributions can be summarized as below:

• We propose a novel Siamese self-supervised approach for the Transformers in DETR, which jointly learns view-invariant representations with discrimination and localization. In particular, we contribute two new designs of self-supervised pretext tasks specialized for multi-view detection pretraining.
• Without bells and whistles, Siamese DETR outperforms UP-DETR [9] and DETReg [1] with multiple DETR variants, such as Conditional [26] and Deformable [38] DETR, on the COCO and PASCAL VOC benchmarks, demonstrating the effectiveness and versatility of our designs.
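A small sketch of the IoU-guided two-view generation step described above is given below. The helper names, the crop-scale range, and the 0.5 threshold are illustrative assumptions and not values taken from the paper; the point is only that the two sampled views are forced to overlap enough for cross-view region detection.

```python
import random

def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-8)

def sample_two_views(width, height, iou_thresh=0.5, scale=(0.5, 1.0), max_tries=100):
    """Sample two crop rectangles with IoU >= iou_thresh (hypothetical helper).

    The guaranteed overlap means proposals visible in one view can be
    localized in the other, which multi-view region detection relies on.
    """
    def random_crop():
        s = random.uniform(*scale)
        w, h = int(width * s), int(height * s)
        x = random.randint(0, width - w)
        y = random.randint(0, height - h)
        return (x, y, x + w, y + h)

    for _ in range(max_tries):
        v1, v2 = random_crop(), random_crop()
        if box_iou(v1, v2) >= iou_thresh:
            return v1, v2
    return v1, v2   # fall back to the last pair if the threshold is never met
```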
Bao_SINE_Semantic-Driven_Image-Based_NeRF_Editing_With_Prior-Guided_Editing_Field_CVPR_2023
Abstract

Despite the great success of 2D editing using user-friendly tools, such as Photoshop, semantic strokes, or even text prompts, similar capabilities in 3D areas are still limited, either relying on 3D modeling skills or allowing editing within only a few categories. In this paper, we present a novel semantic-driven NeRF editing approach, which enables users to edit a neural radiance field with a single image, and faithfully delivers edited novel views with high fidelity and multi-view consistency. To achieve this goal, we propose a prior-guided editing field to encode fine-grained geometric and texture editing in 3D space, and develop a series of techniques to aid the editing process, including cyclic constraints with a proxy mesh to facilitate geometric supervision, a color compositing mechanism to stabilize semantic-driven texture editing, and a feature-cluster-based regularization to preserve the irrelevant content unchanged. Extensive experiments and editing examples on both real-world and synthetic data demonstrate that our method achieves photo-realistic 3D editing using only a single edited image, pushing the bound of semantic-driven editing in 3D real-world scenes.
1. Introduction

Semantic-driven editing approaches, such as stroke-based scene editing [34, 39, 66], text-driven image synthesis and editing [1, 50, 53], and attribute-based face editing [27, 60], have greatly improved the ease of artistic creation. However, despite the great success of 2D image editing and neural rendering techniques [14, 42], similar editing abilities in the 3D area are still limited: (1) they require laborious annotation such as image masks [27, 71] and mesh vertices [69, 74] to achieve the desired manipulation; (2) they conduct global style transfer [12, 13, 16, 20, 75] while ignoring the semantic meaning of each object part (e.g., windows and tires of a vehicle should be textured differently); (3) they can edit on categories by learning a textured 3D latent representation (e.g., 3D-aware GANs with faces and cars, etc.) [6, 8, 9, 17, 45, 56, 59, 60], or at a coarse level [35, 64] with basic color assignment or object-level disentanglement [30], but struggle to conduct texture editing on objects with photo-realistic textures or out-of-distribution characteristics.

Based on this observation, we believe that, on the way toward semantic-driven 3D editing, the following properties should be ensured. First, the operation of editing should be effortless, i.e., users can edit 3D scenes on a single 2D image in convenient ways, e.g., using off-the-shelf tools such as GAN-based editing [28, 34], text-driven editing [1, 53], Photoshop, or even a downloaded Internet image without pixel-wise alignment, rather than steering 3D modeling software with specific knowledge [69], or repeatedly editing from multi-view images. Second, the editing method should be applicable to real-world scenes or objects and preserve vivid appearances, which is beyond existing 3D-aware generative models [8, 9] due to the limited categories and insufficient data diversity on real-world objects.

To fulfill this goal, we propose a novel Semantic-driven Image-based Editing approach for Neural radiance fields in real-world scenes, named SINE. Specifically, our method allows users to edit a neural radiance field with a single image, i.e., either by changing a rendered image using off-the-shelf image editing tools or by providing an image for texture transferring (see Sec. 4.4), and then delivers edited novel views with consistent semantic meaning. Unlike previous works that directly fine-tune the existing NeRF model [30, 35, 64], SINE learns a prior-guided editing field to encode geometric and texture changes over the original 3D scene (see Fig. 2), thus enabling fine-grained editing ability. By leveraging guidance from existing neural priors (shape prior models [15], Vision Transformer models [7], etc.), SINE can directly perform semantic-driven editing on photo-realistic scenes without pre-training a category-level latent space. For example, in Fig. 1, users can stretch a car's back or change all four tires to cookies by only editing a single image, and can even cooperate with text-prompt editing [1] to modify a specific object of a scene with vivid appearances. However, even when guided with neural priors, editing NeRF from a single image with multi-view consistency and accuracy is still challenging.
(1) The generic NeRF does not necessarily provide an explicit surface or signed distance field, such that it cannot directly work with shape priors [15]. Therefore, we propose to use cyclic constraints with a proxy mesh to represent the edited NeRF's geometry, which facilitates guided editing using a coarse shape prior. (2) Learning a coordinate-based 3D editing field using a single edited view is not sufficient to capture fine-grained details, and applying semantic supervision [7, 52] directly to the editing field leads to sub-optimal convergence (see Sec. 4.5). To tackle these challenges, we propose a color compositing mechanism that first renders the template NeRF color and the modification color individually, and then deferred-blends them to yield the edited view, which significantly improves semantic-driven texture editing. (3) Ideally, a user's editing should only affect the desired regions while maintaining other parts untouched. However, in semantic-driven editing, the prior losses require taking the full shape or image as input, which leads to appearance or shape drifting in the undesired areas. To precisely control the editing while excluding irrelevant parts from being affected, we generate feature clusters of the editing area using the ViT-based feature field [7, 30], and use these clusters to distinguish whether a location is allowed to be edited or should remain unchanged.

In summary, the contributions of our paper are as follows. (1) We propose a novel semantic-driven image-based NeRF editing approach, called SINE, which allows users to edit a neural radiance field simply on just a single view of the rendering. SINE leverages a prior-guided editing field to encode fine-grained geometry and texture changes over the given pre-trained NeRF, thus delivering multi-view consistent edited views with high fidelity. (2) To achieve semantic editing functionality, we develop a series of techniques, including cyclic constraints with a proxy mesh for geometric editing, the color compositing mechanism to enhance texture editing, and the feature-cluster-based regularization to control the affected editing area and maintain irrelevant parts unchanged. (3) Experiments and editing examples on both real-world/synthetic and object-centric/unbounded 360° scene data demonstrate superior editing capabilities and quality with effortless operations.
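As a rough illustration of the color compositing mechanism in point (2) above, the sketch below blends a color rendered from the template NeRF with a modification color rendered from the editing field. The convex per-pixel blend and the learned blending weight are assumptions made for this sketch, since the excerpt does not spell out the exact blending function SINE uses.

```python
import torch

def deferred_color_compositing(template_rgb, edit_rgb, blend_logits):
    """Blend per-pixel colors rendered from the template NeRF and the editing field.

    template_rgb: (H, W, 3) color rendered from the original (template) NeRF.
    edit_rgb:     (H, W, 3) modification color rendered from the editing field.
    blend_logits: (H, W, 1) raw per-pixel blending weights predicted with edit_rgb.

    "Render separately, then blend" is realized here as a convex combination;
    this is one straightforward instantiation, not the paper's exact formula.
    """
    m = torch.sigmoid(blend_logits)                   # blend weight in (0, 1)
    return (1.0 - m) * template_rgb + m * edit_rgb    # deferred composite of the edited view
```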
Feng_NVTC_Nonlinear_Vector_Transform_Coding_CVPR_2023
Abstract

In theory, vector quantization (VQ) is always better than scalar quantization (SQ) in terms of rate-distortion (R-D) performance [33]. Recent state-of-the-art methods for neural image compression are mainly based on nonlinear transform coding (NTC) with uniform scalar quantization, overlooking the benefits of VQ due to its exponentially increased complexity. In this paper, we first investigate some toy sources, demonstrating that even if modern neural networks considerably enhance the compression performance of SQ with nonlinear transform, there is still an insurmountable chasm between SQ and VQ. Therefore, revolving around VQ, we propose a novel framework for neural image compression named Nonlinear Vector Transform Coding (NVTC). NVTC solves the critical complexity issue of VQ through (1) a multi-stage quantization strategy and (2) nonlinear vector transforms. In addition, we apply entropy-constrained VQ in latent space to adaptively determine the quantization boundaries for joint rate-distortion optimization, which improves the performance both theoretically and experimentally. Compared to previous NTC approaches, NVTC demonstrates superior rate-distortion performance, faster decoding speed, and smaller model size. Our code is available at https://github.com/USTC-IMCL/NVTC.
1. Introduction

Recent works based on nonlinear transform coding (NTC) [5] have achieved remarkable success in neural image compression [12, 34]. Unlike traditional image codecs that employ linear transforms such as the discrete cosine transform (DCT), NTC is constructed with nonlinear transform layers and optimized with data-driven techniques, where modern neural networks present excellent capability in both encoding/decoding transforms and entropy estimation [12, 20, 35]. Most NTC methods apply scalar quantization (SQ) to discretize the latent variables and use additive uniform noise to approximate the quantization error during training [6]. However, it has already been known since the 1990s that vector quantization, in spite of its exponentially increased complexity, is always better than SQ in terms of rate-distortion (RD) performance [33]. This inspires us to design a novel neural image compression model that fully leverages vector quantization.

Figure 1. BD-rate vs. decoding time vs. model size on the CLIC2021 validation set [1].

Vector quantization (VQ) [18] is designed to map a continuous source distribution to a set of discrete vectors. The discrete nature of VQ has been successfully applied in generative models to avoid the "posterior collapse" issue, including well-known image synthesis models such as VQVAE [40] and VQGAN [16], and text-to-image synthesis models such as DALL-E [36] and latent diffusion [38]. However, if we go back to the basic requirement of quantization, we will find that VQ offers unique advantages in terms of rate-distortion performance, particularly the space-filling advantage and the memory advantage [33].

Given a source distribution, the goal of quantization (whether SQ or VQ) is to determine the quantization centers and boundaries, and then assign indices to denote these separated quantization regions/cells. Combining these regions fills the whole space of the source distribution. The space-filling advantage of VQ against SQ is related to the sphere packing problem in geometry [14, 21, 41]. If we compare the quantization results of SQ and VQ, as shown in Figure 2, we will find that even for a simple isotropic Gaussian distribution, SQ with nonlinear transform cannot learn to approximate hexagon-like quantization cells, where the hexagon is the polytope with the best space-filling properties in 2-d space.

Figure 2. Quantization results for NTC (SQ with nonlinear transform) and ECVQ (entropy-constrained VQ) on 2-d distributions ("Isotropic Gaussian", "Banana", and "Boomerang"). Blue lines represent quantization boundaries and orange points represent quantization centers (codewords). "Isotropic Gaussian" refers to a 2-d isotropic Gaussian distribution; "Banana" and "Boomerang" are two other 2-d distributions. It is observed that NTC cannot achieve the space-filling advantage (i.e., learning hexagon-like quantization cells for 2-d sources) even on an isotropic Gaussian distribution. Moreover, NTC's decorrelation capability is insufficient as source correlation becomes more complex. For example, quantization boundaries collide in the red circle of "Boomerang", leading to a performance drop. The corresponding BD-PSNR results are shown in Table 2.
Under the high-rate assumption, the gain of the space-filling advantage is about 1.53 dB as the dimension approaches infinity [15, 33]. Following this conclusion, we experimentally provide BD-PSNR results comparing SQ with nonlinear transform to VQ on isotropic Gaussian distributions in Table 1. In addition, to reduce the redundancies of data distributions, existing NTC methods (SQ with nonlinear transform) rely on highly expensive nonlinear transforms [12, 44, 45] and context-based auto-regressive entropy models [20, 35]. However, different from NTC methods, VQ has superior decorrelation ability, which is known as the memory advantage of vector quantizers. This advantage is more obvious when quantizing complex source distributions, such as the Boomerang distribution in Figure 2 (especially in the red circle area).

In this paper, we build a novel framework that applies modern neural networks to leverage the space-filling advantages and memory advantages of VQ for image compression. We propose nonlinear vector transform coding (NVTC), which achieves encouraging rate-distortion performance with relatively low coding complexity. Specifically, as shown in Figure 3, we introduce three key points to design a practical VQ, including 1) a multi-stage product VQ rather than a single-stage VQ to reduce the exponentially increased complexity, 2) nonlinear vector transforms rather than scalar transforms to remove redundancy between sub-vectors with fewer parameters, and 3) entropy-constrained VQ rather than unconstrained VQ to achieve superior R-D optimality and joint optimization of latent-space VQ models.

For the first point, many well-known VQ variants have been proposed in recent decades, such as product VQ [22, 39], multi-stage VQ [23], tree-structured VQ [11] and lattice VQ [17]. Although tree-structured and lattice VQ offer fast encoding speeds, they do not reduce the storage complexity of codebooks or entropy-coding frequency tables.

Figure 3. Three key points to design a practical vector quantizer. For VQ complexity (left), we suggest a hybrid VQ structure called multi-stage product VQ, which reduces the VQ complexity from O(2^{KR}) to O(ML·2^{KR/(ML)}), where K is the data dimension, R the target rate, L the number of stages, and M the number of subvectors. For transform complexity (middle), we use vector transforms instead of scalar transforms to remove inter-vector redundancy. For RD optimality (right), we find that ECVQ [13] is essential for joint rate-distortion optimization, which is neglected in previous works [2, 31, 40, 43].

In this paper, we suggest a hybrid VQ structure that incorporates both product VQ and multi-stage VQ, as shown in the left column of Figure 3. The quantization procedure comprises multiple stages, and each stage employs multiple independent low-dimension quantizers to compress the sub-vectors of the input vector. As the number of stages and subvectors increases, the proposed multi-stage product VQ exhibits a significant decrease in complexity. While the intra-vector redundancy (i.e., the redundancy inside each subvector) can be removed by vector quantization, the inter-vector redundancy (i.e., the redundancy between subvectors) is still overlooked.
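Before turning to the second point, the following schematic sketch shows how a multi-stage product quantizer of the kind described above encodes a latent vector: the vector is split into M sub-vectors, and each of the L stages quantizes the remaining residual of each sub-vector. The nearest-neighbor search here deliberately ignores the rate term; the entropy-constrained assignment that NVTC actually relies on is sketched further below. The function and variable names are illustrative.

```python
import torch

def multistage_product_vq_encode(x, codebooks):
    """Encode x with a multi-stage product VQ (schematic sketch).

    x:         (B, D) latent vectors.
    codebooks: nested list where codebooks[l][m] is a (K, D // M) codebook
               for stage l and sub-vector m; D must be divisible by M.
    Returns per-stage, per-subvector indices and the reconstruction.
    """
    num_stages = len(codebooks)          # L
    num_sub = len(codebooks[0])          # M
    sub_dim = x.shape[1] // num_sub
    residual = x.clone()
    recon = torch.zeros_like(x)
    indices = []
    for l in range(num_stages):
        stage_idx = []
        for m in range(num_sub):
            seg = residual[:, m * sub_dim:(m + 1) * sub_dim]
            cb = codebooks[l][m]
            idx = torch.cdist(seg, cb).argmin(dim=1)   # nearest codeword per sub-vector
            quantized = cb[idx]
            recon[:, m * sub_dim:(m + 1) * sub_dim] += quantized
            residual[:, m * sub_dim:(m + 1) * sub_dim] -= quantized
            stage_idx.append(idx)
        indices.append(stage_idx)
    return indices, recon
```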
Therefore, our second point focuses on efficiently eliminating inter-vector redundancy. Transform VQ [25] introduces a linear transform for decorrelation and performs product quantization on the transformed coefficients. Similar coding structures are observed in recent learning-based VQ methods [2, 43], which are improved by learnable nonlinear transforms with superior decorrelation capabilities. However, the transform used in these works is designed to decorrelate scalar components, which is computationally inefficient for vector decorrelation. The intra-vector redundancy, which is intended to be removed by VQ, might be partially reduced in advance by the scalar transform. Therefore, certain parts of the scalar transform could be eliminated to improve computational efficiency. Motivated by the linear vector transform [28, 30, 31], we propose a new VT variant that decouples a fully-connected scalar transform into two light-weight parts: an intra-transform and an inter-transform. In the middle of Figure 3, we provide a simple comparison between the scalar transform and the proposed vector transform. We further stack the single-layer VT to build a powerful nonlinear vector transform. The differences between our VT and the linear VT are discussed in Section 4.1.

Regarding the third point, we emphasize that the quantization process (either SQ or VQ) used in most previous methods [2, 5, 31, 40, 43] (including VQVAE) is not entropy-constrained, which is theoretically suboptimal for rate-distortion performance. In the right of Figure 3, we provide a quantization illustration of unconstrained VQ and entropy-constrained VQ (ECVQ [13]), where unconstrained VQ determines the quantization boundaries (blue lines) using the nearest neighbor search. ECVQ introduces an additional rate bias −log(p_i)/λ in the quantization process, which shifts the quantization boundaries from the high-probability region to the low-probability region. In other words, ECVQ searches for the codewords with the best RD performance, instead of just the neighboring codewords. ECVQ provides an optimal VQ encoding process, described in Section 2. With the help of latent-space ECVQ, we design a training strategy for joint RD optimization. Instead of manually controlling the RD trade-off by varying the codebook size [43], our model can learn layer-adaptive bit allocation.

Our contributions can be summarized as 1) investigating VQ's advantages over SQ with nonlinear transform based on empirical results on some toy sources, 2) presenting a VQ-based coding scheme named nonlinear vector transform coding (NVTC) with three technical contributions that effectively leverage VQ while keeping complexity low, and 3) demonstrating that NVTC offers superior rate-distortion performance, faster decoding speed and smaller model size, compared with previous neural image codecs.
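A minimal sketch of the entropy-constrained assignment described above: each codeword's squared distortion is offset by the rate bias −log(p_i)/λ before taking the argmin, so low-probability (expensive-to-code) codewords are penalized. Tensor shapes and the exact weighting convention of the rate term are assumptions for illustration.

```python
import torch

def ecvq_encode(x, codebook, log_probs, lam):
    """Entropy-constrained VQ assignment for a batch of vectors.

    x:         (B, D) input vectors.
    codebook:  (K, D) codewords.
    log_probs: (K,) log-probability of each codeword (from the entropy model).
    lam:       rate-distortion trade-off parameter.
    """
    distortion = torch.cdist(x, codebook).pow(2)   # (B, K) squared distances
    rate = -log_probs.unsqueeze(0) / lam           # (1, K) rate bias per codeword
    cost = distortion + rate                       # RD cost; unconstrained VQ would omit `rate`
    indices = cost.argmin(dim=1)                   # best-RD codeword, not just the nearest one
    return indices, codebook[indices]
```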
Akula_MetaCLUE_Towards_Comprehensive_Visual_Metaphors_Research_CVPR_2023
Abstract

Creativity is an indispensable part of human cognition and also an inherent part of how we make sense of the world. Metaphorical abstraction is fundamental in communicating creative ideas through nuanced relationships between abstract concepts such as feelings. While computer vision benchmarks and approaches predominantly focus on understanding and generating literal interpretations of images, metaphorical comprehension of images remains relatively unexplored. Towards this goal, we introduce MetaCLUE, a set of vision tasks on visual metaphor. We also collect high-quality and rich metaphor annotations (abstract objects, concepts, relationships along with their corresponding object boxes) as there do not exist any datasets that facilitate the evaluation of these tasks. We perform a comprehensive analysis of state-of-the-art models in vision and language based on our annotations, highlighting strengths and weaknesses of current approaches in visual metaphor classification, localization, understanding (retrieval, question answering, captioning) and generation (text-to-image synthesis) tasks. We hope this work provides a concrete step towards developing AI systems with human-like creative capabilities. Project page: https://metaclue.github.io
1. Introduction

"Metaphor is pervasive in everyday life ... Our ordinary conceptual system, in terms of which we both think and act, is fundamentally metaphorical in nature." — Lakoff & Johnson [25]

Creativity is a process of generating a new perspective on a problem or a situation. Metaphorical thinking has been recognized as a key and powerful mechanism of creativity [24, 29, 50]. Humans engage metaphors in their creative thinking process as strategies to link or blend concepts, or to view a concept from a target domain in terms of another, apparently dissimilar concept from a source domain [25]. Metaphors also provide a sophisticated tool for nuanced human communication. Let us take a closer look at the structure of metaphors, and especially visual metaphors.

Metaphors are a cognitive construct in which a concept is compared to a seemingly unrelated concept via some shared attribute. (Grammarians distinguish a metaphor "A is B" from a simile "A is like B"; in our work we use "metaphor" to encompass both variants.) Take as an example "This car is a cheetah", where "This car" is compared to "a cheetah" in terms of speed. Metaphors have a simple syntactic structure of "A is B", where A is referred to as the primary concept and B as the secondary concept. The implied analogy in a metaphor is of the form "(primary concept) is as (relationship) as (secondary concept)" and often involves an attribute transfer from the secondary to the primary concept. (We use the word "relationship" to denote the shared property of the primary and secondary concepts, usually adjectives or adjectival phrases.) Some examples include "This phone is as fast as a rocket" and "Cigarettes are as harmful as bullets". The primary and secondary concepts are usually unrelated at a glance, resulting in an element of surprise and creativity in metaphorical expressions. Despite following such a simple structure, metaphors are quite powerful in conveying creative ideas. Metaphors are pervasive in all forms of communication, such as speech, text, and visuals.

Visual metaphors are images where the primary and secondary concepts are visually depicted in an image, conveying the metaphorical message to the viewers. Visual metaphors are widely used in mass media communications like advertising and journalism [15, 43, 45]. In this work, we work with Ad images, as metaphors tend to be prevalent in ads. There are numerous ways a metaphor can be represented visually. Following the classification in [15], there are at least four different types of visual metaphors. Fig. 2 shows sample images that belong to these types along with our annotations of primary and secondary concepts and their relationship. In contextual metaphors, either the primary or secondary concept is not explicitly visible, but is inferred from the context (e.g., the apple in the left-most image). In hybrid metaphors, the primary and secondary concepts are visually conflated. Juxtaposition forms one of the simplest visual metaphor types, where the two concepts are simply presented next to each other. Multimodal metaphors represent one of the concepts with another modality, such as text or a logo. In practice, visual metaphors use several of these strategies to convey a metaphor in an effective manner. In many cases, the implied metaphorical meaning is somewhat open-ended. Interpretation of visual metaphors depends on several external factors, such as familiarity with the brands and cultural context.
These visual variations and nuances make automatic cognition or generation of visual metaphors highly challenging. While the last decade has seen rapid progress in many areas of understanding and generation tasks, prior works in computer vision focus heavily on literal interpretation of images and overlook the importance of metaphorical reasoning in understanding the image message [1, 48]. We believe that developing AI systems with metaphorical comprehension and generation capabilities can greatly assist humans in creative endeavors involving conveying concepts in new and exciting ways. Such systems provide an important step towards conferring human-like creativity to AI models.

To this end, we introduce multiple interesting tasks and construct metaphor annotations that enable comprehensive research on visual metaphors. As metaphors are more common in visual Ads, we start with the Pitt's Ads dataset images [20] and then perform a rigorous multi-stage annotation process with expert annotators to filter metaphorical images, add metaphor annotations, and perform additional validation steps to clean the annotations. (Note that while the Ads are useful for the purposes of our paper, some images may perpetuate harmful stereotypes according to characteristics such as gender.) While there is recent work making advances in understanding non-literal interpretations in natural language research [7, 12], this work proposes the first step towards metaphor analysis in images.

As illustrated in Fig. 1, we perform comprehensive evaluations with state-of-the-art techniques on four sets of tasks, which we call MetaCLUE:
1. Classification: a binary classification task of estimating whether a given image contains a metaphor or not. In other words, are visual features indicative of whether there exists a metaphor in a given image?
2. Localization: the task of localizing the image regions that invoke the primary and secondary concepts in the viewer. This is similar to a standard object detection task, but is more complicated in the case of visual metaphors, as the primary/secondary concepts may not be explicitly present in an image.
3. Understanding: can our models understand the metaphorical message in a given image? We pose this understanding problem as three tasks where we can quantitatively measure performance: retrieval, captioning, and visual question answering.
4. Generation: can we generate an image that conveys the metaphor, given the metaphorical message as a text prompt?

Figure 2. Sample Visual Metaphors with their Annotations (panels: Contextual, Hybrid, Juxtaposition, Multimodal; example annotations include "This beer is as tasty as a real apple." and "This chocolate bar is as rich as gold."). There are different types of visual metaphors; the type depends on how the primary and secondary concepts are visually depicted. Here are sample Ad images from [20] where we annotated the primary concept, secondary concept and their relationship.

We comprehensively evaluate existing state-of-the-art techniques for each of these tasks on our collected metaphor annotations. We evaluate the models both in a zero-shot manner as well as with finetuning on our annotations.
Even though finetuning resulted in some improvements, most models struggle to produce satisfactory results in many cases, demonstrating the difficulty of these tasks. Our experiments highlight several strengths and weaknesses of the existing techniques in comprehending and generating visual metaphors, providing a concrete first step towards further AI research on this fascinating topic.
Fang_You_Can_Ground_Earlier_Than_See_An_Effective_and_Efficient_CVPR_2023
Abstract

Given an untrimmed video, temporal sentence grounding (TSG) aims to locate a target moment semantically according to a sentence query. Although previous respectable works have achieved decent success, they only focus on high-level visual features extracted from the consecutive decoded frames and fail to handle compressed videos for query modelling, suffering from insufficient representation capability and significant computational complexity during training and testing. In this paper, we pose a new setting, compressed-domain TSG, which directly utilizes compressed videos rather than fully-decompressed frames as the visual input. To handle the raw video bit-stream input, we propose a novel Three-branch Compressed-domain Spatial-temporal Fusion (TCSF) framework, which extracts and aggregates three kinds of low-level visual features (I-frame, motion vector and residual features) for effective and efficient grounding. Particularly, instead of encoding the whole decoded frames like previous works, we capture the appearance representation by only learning the I-frame feature to reduce delay or latency. Besides, we explore the motion information not only by learning the motion vector feature, but also by exploring the relations of neighboring frames via the residual feature. In this way, a three-branch spatial-temporal attention layer with an adaptive motion-appearance fusion module is further designed to extract and aggregate both appearance and motion information for the final grounding. Experiments on three challenging datasets show that our TCSF achieves better performance than other state-of-the-art methods with lower complexity.

1. Introduction

As a significant yet challenging computer vision task, temporal sentence grounding (TSG) has drawn increasing attention due to its various applications, such as video understanding [12–17, 25, 44, 76, 77, 79–81, 84] and temporal action localization [63, 71].

Figure 1. (a) Example of the temporal sentence grounding (TSG) task. Query: "The man mixes up various ingredients and begins laying plaster on the floor." Ground truth (s): 41.85–119.10. (b) Comparison between previous supervised TSG models and our compressed-domain TSG model (MV: motion vector, Res: residual). Previous models first decode the video into consecutive frames and then feed them into their networks, while our compressed-domain model directly leverages the compressed video as the visual input.

Given a long untrimmed video, the TSG task aims to locate the specific start and end timestamps of a video segment with an activity that semantically corresponds to a given sentence query. As shown in Figure 1(a), most video content is query-irrelevant, and only a short video segment matches the query. It is substantially more challenging since a well-designed method needs to not only model the complex multi-modal interaction between video and query, but also capture complicated context information for cross-modal semantics alignment. By treating a video as a sequence of independent frames, most TSG methods [3, 17, 29, 34, 36–41, 43, 46, 65, 90, 93, 98] adopt the fully-supervised setting, where each frame is first fully decompressed from a video bit-stream and then manually annotated as query-relevant or query-irrelevant.
Despite the decent progress on grounding performance, these data-hungry methods severely rely on full decompression and numerous annotations, which are significantly labor-intensive and time-consuming to obtain in real-world applications. To alleviate this dense reliance to a certain extent, some weakly-supervised works [8, 11, 16, 33, 35, 49, 51, 62, 64, 96, 97] have been proposed to only leverage coarse-grained video-query annotations instead of fine-grained frame-query annotations. Unfortunately, this weak supervision still requires the fully-decompressed video for visual feature extraction.

Based on the above observation, in this paper we make the first attempt to explore whether an effective and efficient TSG model can be learned without the limitation of a fully decompressed video input. Considering that real-world videos are always stored and transmitted in a compressed data format, we explore a more practical but challenging task: compressed-domain TSG, which directly leverages the compressed video instead of obtaining consecutive decoded frames as visual input for grounding. As shown in Figure 1(b), a compressed video is generally parsed as a stream of Groups of successive Pictures (GOPs), and each GOP starts with one intra-frame (I-frame) followed by a variable number of predictive frames (P-frames) [30, 74]. Specifically, the I-frame contains the complete RGB information of a video frame, while each P-frame contains a motion vector and a residual. The motion vectors store 2D displacements between the I-frame and its neighbor frames, and the residuals store the RGB differences between the I-frame and its reconstructed frame calculated by the motion vectors in the P-frames after motion compensation. The I-frame can be decoded by itself, while the P-frames only store the changes from the previous I-frame via motion vectors and residuals.

Given the compressed video, our main challenge is how to effectively and efficiently extract contextual visual features from the above three kinds of low-level visual information for query alignment. Existing TSG works [3, 29, 34, 39, 65, 90, 93, 98] cannot be applied directly to the compressed video because their video features (e.g., C3D and I3D) can only be extracted if all complete video frames are available after decompression. Moreover, decompressing all the frames significantly increases the computational complexity of feature extraction, leading to extra latency and extensive storage.

To address this challenging task, we propose the first approach for compressed-domain TSG, called Three-branch Compressed-domain Spatial-temporal Fusion (TCSF). Given a group of successive pictures (GOP) in a compressed video, we first extract the visual features from each I-frame to represent the appearance at its timestamp, and then extract the features of its P-frames to capture the motion information near the I-frame. In this way, we can model the activity content with the above simple I-frames and P-frames instead of using their corresponding consecutive decoded frames. Specifically, we design a spatial attention and a temporal attention to integrate the appearance and motion features for activity modelling.
To adaptively handle different fast-motion (P-frame guided) or slow-motion (I-frame guided) cases, we further design an adaptive appearance and motion fusion module that integrates the appearance and motion information by learning a balanced weight through a residual module. Finally, a query-guided multi-modal fusion is exploited to integrate the visual and textual features for final grounding.

Our contributions are summarized as follows:
• We propose a brand-new and challenging task: compressed-domain TSG, which aims to directly leverage the compressed video for TSG. To the best of our knowledge, we make the first attempt to locate the target segment in the compressed video.
• We present a novel pipeline for compressed-domain TSG, which can efficiently and effectively integrate both appearance and motion information from the low-level visual information in the compressed video.
• Extensive experiments on three challenging datasets (ActivityNet Captions, Charades-STA and TACoS) validate the effectiveness and efficiency of our TCSF.

2. Related Works

Temporal sentence grounding. Most existing TSG methods operate under the fully-supervised setting, where all video-query pairs and precise segment boundaries are manually annotated based on the fully-decompressed video. These methods can be divided into two categories: 1) Proposal-based methods [1, 5, 45, 87, 94, 95]: they first pre-define multiple segment proposals and then align these proposals with the query for cross-modal semantic matching based on similarity. Finally, the proposal with the highest similarity score is selected as the predicted segment. Although achieving decent results, these proposal-based methods severely rely on the quality of the segment proposals and are time-consuming. 2) Proposal-free methods [7, 42, 53, 88, 92]: they directly regress the start and end boundary frames of the target segment or predict boundary probabilities frame-wisely. Compared with proposal-based methods, proposal-free methods are more efficient. To alleviate the reliance on annotations to a certain extent, some state-of-the-art works turn to the weakly-supervised setting [8, 11, 33, 49, 51, 62, 64, 96, 97], where only video-query pairs are annotated without precise segment boundaries in the fully-decompressed video.

In real-world computer vision tasks, we always collect compressed video rather than decompressed consecutive frames. In this paper, we present a brand-new practical yet challenging setting for the
TSG task, called compressed-domain TSG, with merely compressed video rather than a decompressed frame sequence.

Video compression. As a fundamental computer vision task, video compression [26, 27, 32, 48, 57, 72, 75] divides a video into groups of pictures (GOPs), where each frame is coded as an I-, P-, or B-frame. An I-frame is the first frame of the GOP and maintains full RGB pixels as an anchor. The subsequent P- and B-frames are then coded using block-based motion vectors with temporal prediction. The prediction is conducted by searching for the closest matching block in a previously coded frame, used as a reference frame. The vector from the current block to the reference block is the motion vector. Since the current block and the matching block are often different, the transformed residual is used to denote the difference.

Compared with other deep features (e.g., optical flow [24]) widely used in the TSG task, the compressed-domain features (MVs and residuals) have the following advantages: 1) Lower computational costs. The compressed-domain features can be obtained during decoding, while other deep features require decompressing the compressed video and encoding it with a pretrained heavy-weight model (C3D [66] or I3D [4]). The compressed-domain features only require partial-frame reconstruction by entropy decoding [100], inverse transform and quantization [28], and motion compensation [10]; the most time-consuming process, motion compensation [58], can be skipped, so the computational complexity is much smaller than that of extracting other deep features. 2) No delay or dependency. The compressed-domain features can be obtained instantly. When we use large-scale datasets, these advantages are even more obvious.

3. Proposed Method

3.1. Overview

Problem statement. Given a video bit-stream V with T frames, the temporal sentence grounding (TSG) task aims to localize the precise boundary (τ_s, τ_e) of a specific segment semantically corresponding to a given query Q = {q_j}_{j=1}^{M}, where q_j denotes the j-th word, M denotes the word number, and τ_s and τ_e denote the start and end timestamps of the specific segment. In our compressed-domain TSG setting, we do not feed decompressed video frames as input. Instead, we partially decode the video bit-stream at a low cost to extract the compressed video, which includes N groups of pictures (GOPs). Each GOP G_i contains one reference I-frame I_i ∈ R^{H×W×3} followed by L P-frames {P_i^l}_{l=1}^{L}. Each P_i^l consists of a motion vector M_i^l ∈ R^{H×W×2} and a residual R_i^l ∈ R^{H×W×3}, which can be extracted nearly cost-free from V. For convenience, we assume that all GOPs contain the same number of P-frames. Thus, T = N × (L + 1). The video bit-stream can be represented as V = {I_i, P_i^1, P_i^2, ..., P_i^L}_{i=1}^{N}, where i denotes the i-th GOP. Here, the I-frame contains the complete RGB information of a video frame and can be decoded by itself, while the P-frames only store the changes from the previous I-frame via motion vectors and residuals. The motion vectors store 2D displacements of the most similar patches between the I-frame and the target frame, and the residuals store pixel-wise differences to correct motion compensation errors. We use the above three kinds of low-level information contained in compressed videos as our visual input.

Pipeline. Our pipeline is summarized in Figure 2. Given a video bit-stream, we first utilize the entropy decoding approach [68, 73] to generate groups of successive pictures (GOPs), which consist of I-frames with their related P-frames.
Then, we extract the visual appearance features from I-frames by a pre-trained ResNet-50 network, while a light-weight ResNet-18 network is used to extract the motion vector and residual features from P-frames. After that, we enrich this partial appearance and motion information with pseudo features to obtain a complete comprehension of the full video. A spatial-temporal attention module is further introduced to better model the activity content based on the motion-appearance contexts. Next, we design an adaptive appearance and motion fusion module to selectively integrate the attentive appearance and motion information guided by the residual information. Finally, we design a query-guided multi-modal fusion module to integrate the visual and textual features for final grounding.

Figure 2. Overview of the proposed architecture. Firstly, we leverage the entropy decoding approach to obtain the compressed video, i.e., I-frames and P-frames (containing motion vectors and residuals). Then, we enrich their information with pseudo features, and develop a three-branch spatial-temporal attention to model the query-related activity content. After that, we fuse the appearance and motion contexts, and integrate them with the query features for learning the joint multi-modal representations. At last, we feed the multi-modal features into the grounding head to predict the segment.

3.2. Multi-Modal Encoding

Query encoder. Following [19], we first employ the GloVe network [55] to embed each word into a dense vector. Then, a Bi-GRU network [9] and a multi-head self-attention module [67] are used to further integrate the sequential textual representations. Thus, the final word-level features are denoted as Q = {q_j}_{j=1}^{M} ∈ R^{M×d}, where d is the feature dimension. By concatenating the outputs of the last hidden unit in the Bi-GRU with a further linear projection, we can obtain the sentence-level feature q_global ∈ R^d.

I-frame encoder. Following [31, 50], if the t-th frame (t = 1, ..., T) is an I-frame, we use a pretrained ResNet-50 model [22] to extract its appearance feature a_t ∈ R^{H×W×C}, where H, W and C denote the dimensions of height, width, and channel.

P-frame encoder. Following [59, 73], if the t-th frame is a P-frame containing a motion vector M_t and a residual R_t, we utilize a ResNet-18 network [22, 78, 79, 82, 83, 85] to extract the motion vector feature m_t ∈ R^{H×W×C} and the residual feature r_t ∈ R^{H×W×C}.

Pseudo feature generation. Since our compressed-domain TSG needs to locate the specific start and end frames of the target segment, we need to obtain precise motion, compensation and appearance information for each frame for more accurate grounding. However, in the compressed video, we only have N I-frames of appearance and N×L P-frames of motion and compensation, lacking full-frame (i.e., T-frame) knowledge of the complete appearance-motion information. Thus, we generate complementary pseudo features for the unseen frames of the video. For example, to warp the appearance feature from the current I-frame, we can use M_t to estimate the pseudo appearance feature a_{t+1} of its adjacent (next) frame. This pseudo feature generation approach exempts us from reconstructing each adjacent frame for feature extraction individually. We assume that the t-th frame is an I-frame. To construct the pseudo appearance features of its n-th adjacent P-frame, we utilize a block-based motion estimation as:

a_{n+t}(s) = a_{n+t-1}(λ · M_{n+t-1}(s) + s),   (1)

where a_{n+t} denotes the appearance feature of the (n+t)-th P-frame, s is a spatial coordinate of the features, and λ is used as a scaling factor. By Eq. (1), we can obtain the appearance information of each P-frame based on off-the-shelf I-frames.

Similarly, we generate the motion information of each I-frame based on P-frames. Following [18], we combine the temporal movement information of appearance features in the adjacent frames. Along the channel axis, we concatenate n consecutive frames [a_t; ...; a_{n+t}] as V_t ∈ R^{H×W×C×n}. Setting V_t^* = conv_{1×1}(V_t), we can get

m_t = ReLU(V_t^*),   (2)

where m_t is the motion feature of the t-th frame, ReLU is the ReLU function, and conv_{1×1} denotes a 1×1 convolution layer with stride 1 that reduces the channel dimension from C×n to C. Thus, for the t-th frame, its appearance and motion features are a_t and m_t, respectively.

3.3. Three-branch Spatial-temporal Attention

In the TSG task, most regions within a frame are query-irrelevant, and only a few regions are query-relevant. To automatically learn the discriminative regions relevant to the query, we need to obtain the fine-grained local spatial context. Besides, the temporal context is also important, since we can correlate the region-attentive spatial information in time series for precisely modelling the activity. Therefore, we exploit the previously encoded three low-level features (appearance
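A rough sketch of the block-based warping in Eq. (1): appearance features are resampled at locations displaced by the (scaled) motion vectors. The use of bilinear grid sampling and the tensor layout are implementation assumptions made for this sketch, not details from the paper.

```python
import torch
import torch.nn.functional as F

def warp_appearance(prev_feat, motion_vec, scale=1.0):
    """Propagate an appearance feature map to the next frame with motion vectors.

    prev_feat:  (1, C, H, W) appearance feature of the previous frame.
    motion_vec: (1, 2, H, W) block motion vectors at feature resolution.
    scale:      scaling factor applied to the motion vectors (lambda in Eq. (1)).

    Following Eq. (1), the feature at location s is taken from location
    s + scale * M(s) in the previous frame's feature map.
    """
    _, _, h, w = prev_feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid_x = xs + scale * motion_vec[:, 0]           # sampling coordinates in pixels
    grid_y = ys + scale * motion_vec[:, 1]
    # Normalize to [-1, 1] as expected by grid_sample (x first, then y).
    grid = torch.stack(
        (2.0 * grid_x / (w - 1) - 1.0, 2.0 * grid_y / (h - 1) - 1.0), dim=-1)
    return F.grid_sample(prev_feat, grid, align_corners=True)
```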
Jeon_Genie_Show_Me_the_Data_for_Quantization_CVPR_2023
Abstract
Zero-shot quantization is a promising approach for developing lightweight deep neural networks when data is inaccessible owing to various reasons, including cost and issues related to privacy. By exploiting the learned parameters (µ and σ) of batch normalization layers in an FP32-pre-trained model, zero-shot quantization schemes focus on generating synthetic data. Subsequently, they distill knowledge from the pre-trained model (teacher) to the quantized model (student) such that the quantized model can be optimized with the synthetic dataset. However, thus far, zero-shot quantization has primarily been discussed in the context of quantization-aware training methods, which require task-specific losses and long-term optimization as much as retraining. We thus introduce a post-training quantization scheme for zero-shot quantization that produces high-quality quantized networks within a few hours. Furthermore, we propose a framework called GENIE that generates data suited for quantization. With the data synthesized by GENIE, we can produce robust quantized models without real datasets, which is comparable to few-shot quantization. We also propose a post-training quantization algorithm to enhance the performance of quantized models. By combining them, we can bridge the gap between zero-shot and few-shot quantization while significantly improving the quantization performance compared to that of existing approaches. In other words, we can obtain a unique state-of-the-art zero-shot quantization approach. The code is available at https://github.com/SamsungLabs/Genie.
*Equal contribution. Correspondence to: [email protected]
1. Introduction
Quantization is an indispensable procedure for deploying models in resource-constrained devices such as mobile phones. By representing tensors using a lower bit width while maintaining a dense tensor format, quantization reduces a computing unit to a significantly smaller size compared to that achieved by other approaches (such as pruning and low-rank approximations) and facilitates massive data parallelism with vector processing units. Most early studies utilized quantization-aware training (QAT) schemes [8, 23] to compress models, which requires the entire training dataset and takes as much time as training FP32 models. However, access to the entire dataset for quantizing models may not be possible in the real world or industry owing to a variety of reasons, including issues related to privacy preservation. Thus, recent studies have emphasized post-training quantization (PTQ) [12, 14, 17, 21] because it serves as a convenient method of producing high-quality quantized networks with only a small amount of unlabeled datasets or even in the absence of a dataset (including synthetic datasets). Because PTQ can compress models within a few hours yet shows performance comparable to QAT, PTQ is preferred over QAT in practical situations.
Figure 2. Conceptual illustration of GENIE, which consists of two sub-modules: synthesizing data and quantizing models.
Zero-shot quantization (ZSQ) [4, 7, 19] is another research regime that synthesizes data to compress models without employing real datasets. Starting from DFQ [22], schemes for ZSQ gradually pay more attention to generating elaborate replicas such that the distribution of intermediate feature maps matches the statistics of the corresponding batch normalization layers. Although many studies have achieved significant advancement in regards to quantization in the absence of real data, most of them have relied on QAT schemes that require a task-specific loss, such as cross-entropy (CE) loss or Kullback-Leibler (KL) divergence [16], which requires more than 10 hours to complete the quantization of ResNet-18 [10] on an Nvidia V100.
Excluding the data used, ZSQ and few-shot quantization¹ (FSQ) commonly utilize FP32-pre-trained models (teacher) to optimize quantized models (student) by distilling knowledge. It is possible that ZSQ and FSQ share the quantization algorithm regardless of whether the data are real or synthetic. We thus adopt an up-to-date PTQ scheme for ZSQ, breaking away from the quantization scheme conventionally used in ZSQ and completing quantization within a few hours. Based on the existing method, we propose a framework called GENIE² that distills data suited for model quantization. We also suggest a novel quantization scheme, which is a sub-module of GENIE and available for both FSQ and ZSQ. As in Figure 2, GENIE consists of two sub-modules: synthesizing data (GENIE-D) and quantizing models (GENIE-M). By combining them, we bridge the gap between ZSQ and FSQ while taking an ultra-step forward from existing approaches. In other words, we achieve a state-of-the-art result that is unique among ZSQ approaches.
¹This refers to post-training quantization with few real data. ²Data generation scheme suited for quantization.
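As a concrete illustration of the batch-normalization statistic matching that ZSQ methods build on, the sketch below optimizes a batch of synthetic images so that per-layer activation statistics match the BN running statistics of the FP32 teacher. This is the generic recipe used by prior ZSQ work rather than GENIE's exact generator; the loss weighting and optimizer settings are assumptions.

```python
import torch

def bn_matching_loss(teacher, x):
    """Distance between the batch statistics of the activations entering each
    BatchNorm2d layer and that layer's stored running statistics."""
    stats = []

    def hook(module, inputs, output):
        act = inputs[0]
        mean = act.mean(dim=(0, 2, 3))
        var = act.var(dim=(0, 2, 3), unbiased=False)
        stats.append((mean, var, module.running_mean, module.running_var))

    handles = [m.register_forward_hook(hook)
               for m in teacher.modules() if isinstance(m, torch.nn.BatchNorm2d)]
    teacher(x)
    for h in handles:
        h.remove()
    return sum(torch.norm(m - rm) + torch.norm(v - rv)
               for m, v, rm, rv in stats)

# Usage sketch: distill synthetic images from random noise.
# teacher = ...  # FP32-pretrained model in eval mode
# x = torch.randn(32, 3, 224, 224, requires_grad=True)
# opt = torch.optim.Adam([x], lr=0.1)
# for _ in range(1000):
#     opt.zero_grad(); bn_matching_loss(teacher, x).backward(); opt.step()
```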
Our contributions are summarized as follows:
• First, we propose a scheme for synthesizing datasets by combining the approaches related to generation and distillation to take advantage of both approaches.
• Second, we suggest a method to substitute convolution of stride n (n > 1) by swing convolution. By applying randomness, various spatial information can be utilized when distilling datasets (see the sketch after this list).
• Finally, we propose a new quantization scheme as a sub-module of GENIE (available for both FSQ and ZSQ), which is a simple but effective method that jointly optimizes quantization parameters.
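The second contribution can be read as randomizing which spatial positions a strided convolution samples. The sketch below is one plausible interpretation, shifting the input by a random offset in [0, n) before the stride-n convolution; it is illustrative only, and GENIE's actual swing convolution may be implemented differently.

```python
import random
import torch

class SwingConv2d(torch.nn.Module):
    """Wrap an existing stride-n convolution so that each forward pass samples
    a randomly shifted grid (a circular shift is used here for simplicity)."""
    def __init__(self, conv: torch.nn.Conv2d):
        super().__init__()
        assert conv.stride[0] > 1, "swing only replaces stride-n convolutions"
        self.conv = conv

    def forward(self, x):
        n = self.conv.stride[0]
        dy, dx = random.randrange(n), random.randrange(n)     # random offset
        x = torch.roll(x, shifts=(dy, dx), dims=(-2, -1))     # shift the sampling grid
        return self.conv(x)
```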
Fang_EVA_Exploring_the_Limits_of_Masked_Visual_Representation_Learning_at_CVPR_2023
Abstract We launch EVA , a vision-centric foundation model to Explore the limits of Visual representation at sc Ale using only publicly accessible data. EVA is a vanilla ViT pre-trained to reconstruct the masked out image-text aligned vision features conditioned on visible image patches. Via this pretext task, we can efficiently scale up EVA to one billion parameters, and sets new records on a broad range of representative vision downstream tasks, such as image recognition, video action recognition, object detection, in-stance segmentation and semantic segmentation without heavy supervised training. Moreover, we observe quanti-tative changes in scaling EVA result in qualitative changes in transfer learning performance that are not present in other models. For instance, EVA takes a great leap in the challeng-ing large vocabulary instance segmentation task: our model achieves almost the same state-of-the-art performance on LVIS dataset with over a thousand categories and COCO dataset with only eighty categories. Beyond a pure vision en-coder, EVA can also serve as a vision-centric, multi-modal pivot to connect images and text. We find initializing the vision tower of a giant CLIP from EVA can greatly stabi-lize the training and outperform the training from scratch counterpart with much fewer samples and less compute, pro-viding a new direction for scaling up and accelerating the costly training of multi-modal foundation models.
1. Introduction
Scaling up pre-trained language models (PLMs) [9, 64, 76] has revolutionized natural language processing (NLP) in the past few years. The key to this success lies in the simple and scalable self-supervised learning task of masked signal prediction [29, 74], with which Transformer models [99] could be scaled up to billions of parameters using nearly unlimited unlabelled data, and generalize well to a wide range of downstream tasks with little tuning. With further scaling on compute, data, and model sizes, PLMs have led to not only continuous performance improvements [50, 75, 76], but also a surprising emergence of in-context learning capability [9, 25, 104, 105].
†Interns at Beijing Academy of Artificial Intelligence (BAAI). ‡Corresponding authors: Yue Cao ([email protected]), Xinlong Wang ([email protected]) and Xinggang Wang ([email protected]).
Motivated by the success of model scaling in NLP, it is appealing that we can also translate this success from language to vision, i.e., to scale up a vision-centric foundation model that is beneficial for both vision & multi-modal downstream tasks. Recently, masked image modeling (MIM) [5, 39, 113] has boomed as a viable approach for vision model pre-training and scaling. However, the most competitive billion-sized vision pre-trained models [31, 65, 71, 119] still heavily rely on supervised or weakly-supervised training with hundreds of millions of (often publicly inaccessible) labeled data. MIM is somewhat only adopted as an initialization stage before the heavily supervised pre-training [65], or a pure MIM pre-trained model could not achieve favorable performance at billion-scale model sizes [114]. We regard this gap as stemming from the fact that natural images are raw and information-sparse. Meanwhile, an ideal vision pretext task needs the abstraction of not only the low-level geometry & structure information, but also high-level semantics, which is hardly captured by pixel-level recovery tasks [112].
In this work, we seek a suitable MIM pretext task for large scale vision representation learning and explore its limits at the scale of one billion parameters with tens of millions of unlabeled data. Recently, there are a few trials leveraging the semantic information from image-image or image-text contrastive learning [13, 22, 73] for MIM pre-training [43, 106, 124], which perform fairly well in vision downstream tasks. However, there remains a debate that (i) tokenized semantic features could provide better supervision signal for masked modeling in vision [5, 70, 101], and (ii) good performances could be also achieved via a simple post-distillation process without masked prediction tasks [107].
[Table 1 reports per-benchmark results for Florence, SwinV2-G, the previous best methods, and EVA across image and video classification (IN-1K ft/lin/zs, averaged zero-shot, K400/K600/K700), object detection and instance segmentation (COCO det/seg on test/val, LVIS seg), and semantic segmentation (COCO-Stuff, ADE20K); EVA tops the previous best on all benchmarks except ADE20K, e.g., IN-1K ft 89.7 (+0.1), K400 89.7 (+1.9), LVIS seg 55.0 (+5.8).]
Table 1. Summary of EVA performance on various mainstream vision benchmarks. EVA is performant compared with previous best / leading approaches. "/github": methods / results that only exploit publicly accessible data / academic resources. "ft": end-to-end fine-tuning. "lin": linear probing. "zs": zero-shot classification. "avg. zs": averaged zero-shot classification performance on 8 image and 4 video datasets with contrastive language-image pre-training. (timestamp: Nov 10, 2022) Methods / results references: a: BEiT-3 [101], b: iBOT [124], c: Open CLIP-H [47], d: Text4Vis [109], e: MaskFeat [103], f: Group DETRv2 [19], g: FocalNet [116], h: FD-SwinV2-G [107], i: Mask DINO [57], j: LVIS 2021 competition 1st [35], k: ViT-Adapter [23].
Through a pilot empirical study, we find that simply using image-text aligned (i.e., CLIP [73]) vision features as the prediction targets in MIM scales up well and achieves satisfactory performances on a broad range of downstream benchmarks. This pre-training task draws the benefits from both the high-level semantic abstraction of image-text contrastive learning as well as the good capture of geometry & structure in masked image modeling, which typically covers the information needed for most visual perception tasks.
(a) tokenize? [70] | pt epochs | IN-1K top-1 acc. | ADE20K mIoU_ss
    ✗ | – | 85.0 | 52.6
    ✓ | 300 | 85.0 | 52.7
    ✓ | 1600 | 85.5 | 53.1
    ✗ | 800 | 85.5 | 53.3
(a) (Additional) semantic feature tokenization is not required for achieving good downstream performance.
(b) distill.? [107] | pt epochs | IN-1K top-1 acc. | ADE20K mIoU_ss
    ✗ | – | 85.0 | 52.6
    ✓ | 300 | 85.1 | 52.5
    ✓ | 800 | 85.1 | 52.7
    ✗ | 800 | 85.5 | 53.3
(b) Feature distillation fails to achieve consistent performance gain as the pre-training becomes longer.
Table 2. Pilot experiment. We evaluate different pre-training approaches using ViT-B and report their performance on ImageNet-1K image classification (top-1 accuracy) and ADE20K semantic segmentation (single-scale mIoU). Numbers in grey refer to the results of directly fine-tuning the CLIP vision encoder on corresponding downstream tasks. Default settings for EVA pre-training are marked in purple, i.e., directly regressing the masked out CLIP vision features conditioned on visible image patches.
Via this MIM pretext task, we can efficiently scale up a vanilla ViT encoder [31], dubbed EVA, to one billion parameters with strong visual representations that transfer well to a wide range of downstream tasks. Using 29.6 million publicly accessible unlabeled images for pre-training, EVA sets new records on several representative vision benchmarks, such as image classification on ImageNet-1K [28] (89.7% top-1 accuracy), object detection and instance segmentation on LVIS [38] (62.2 AP^box & 55.0 AP^mask on val) and COCO [62] (64.5 AP^box & 55.0 AP^mask on val, 64.7 AP^box & 55.5 AP^mask on test-dev), semantic segmentation on COCO-Stuff [11] (53.4 mIoU_ss) and ADE20K [123] (62.3 mIoU_ms), and video action recognition on Kinetics-400 [51] (89.7% top-1 accuracy), Kinetics-600 [14] (89.8% top-1 accuracy), and Kinetics-700 [15] (82.9% top-1 accuracy). Notably, different from other state-of-the-art billion-scale vision foundation models that demand tens of millions of or even billions of labeled images, such as SwinV2-G using ImageNet-21K-ext-70M [65] and ViT-g/G using JFT-3B [119], EVA does not need a costly supervised training stage and only leverages images from open-sourced datasets for academic reproducibility.
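The pretext task, regressing the masked-out CLIP vision features conditioned on visible image patches, can be summarized with the following sketch. The encoder, CLIP feature extractor, and regression head are placeholders, and the negative-cosine objective is an assumption made for illustration.

```python
import torch
import torch.nn.functional as F

def mim_clip_loss(encoder, clip_vision, regress_head, images, mask):
    """mask: (B, N) boolean, True where an image patch is masked out."""
    with torch.no_grad():
        target = clip_vision(images)                 # (B, N, D) per-patch CLIP features
    pred = regress_head(encoder(images, mask))       # predictions for all N patches
    pred_m, tgt_m = pred[mask], target[mask]         # supervise masked positions only
    return 1.0 - F.cosine_similarity(pred_m, tgt_m, dim=-1).mean()
```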
Moreover, we observe that quantitative changes in scaling EVA result in qualitative changes in transfer learning performance that are not observed in other smaller-scale models, e.g., EVA makes a significant breakthrough in the challenging large vocabulary object-level recognition task: our model achieves almost the same performance on LVIS [38], an instance segmentation benchmark with more than 1,200 categories, as on COCO [62], which almost shares the same image set as LVIS but with only 80 categories annotated. This emergent ability well matches the expectation of model scaling [105], that larger model capacity results in not only predictable performance improvements on standard benchmarks, but also unpredictable phenomena and capabilities for resolving more challenging tasks.
Going beyond a pure vision encoder, EVA can also serve as a vision-centric, multi-modal pivot that builds a bridge between vision and language. We show that initializing the image encoder via pre-trained EVA in a 1.1-billion-parameter CLIP model can outperform the training-from-scratch counterpart on a broad range of zero-shot image / video classification benchmarks with much fewer samples and less compute. Moreover, EVA can greatly stabilize the giant CLIP's training & optimization process. Since large CLIP models usually suffer from training instability and inefficiency issues [2, 47], we hope our solution opens up a new direction for scaling up and accelerating the costly training of multi-modal foundation models.
By scaling up vision-centric foundation models with MIM pre-training to achieve strong performance on broad downstream tasks, we hope EVA would bridge the gap between vision and language with masked signal modeling, and contribute to the big convergence across different modalities.
(a) EVA architecture configurations: patch size 14×14, 40 layers, hidden dim 1408, MLP dim 6144, 16 attention heads, 1011M parameters.
(b) Datasets for pre-training EVA: ImageNet-21K, CC12M, CC3M, Object365, COCO, ADE (29.6M images in total).
(c) Some pre-training settings and hyper-parameters: image size 224², batch size 4096, AdamW optimizer, peak lr 1e-3, (β1, β2) = (0.9, 0.98), 150 pre-training epochs.
(d) Basic statistics of EVA pre-training: fp16 precision, ZeRO stage-1, 128 GPUs, ∼3150 samples/sec, ∼26.5 GB max memory, ∼14.5 pre-training days.
Table 3. A brief summary of pre-training settings and configurations for EVA.
Ai_HRDFuse_Monocular_360deg_Depth_Estimation_by_Collaboratively_Learning_Holistic-With-Regional_Depth_CVPR_2023
Abstract Depth estimation from a monocular 360◦image is a bur-geoning problem owing to its holistic sensing of a scene. Recently, some methods, e.g., OmniFusion, have applied the tangent projection (TP) to represent a 360◦image and predicted depth values via patch-wise regressions, which are merged to get a depth map with equirectangular pro-jection (ERP) format. However, these methods suffer from 1) non-trivial process of merging plenty of patches; 2) cap-turing less holistic-with-regional contextual information by directly regressing the depth value of each pixel. In this paper, we propose a novel framework, HRDFuse , that sub-tly combines the potential of convolutional neural networks (CNNs) and transformers by collaboratively learning the holistic contextual information from the ERP and the re-gional structural information from the TP . Firstly, we pro-pose a spatial feature alignment ( SFA) module that learns feature similarities between the TP and ERP to aggregate the TP features into a complete ERP feature map in a pixel-wise manner. Secondly, we propose a collaborative depth distribution classification ( CDDC ) module that learns the holistic-with-regional histograms capturing the ERP and TP depth distributions. As such, the final depth values can be predicted as a linear combination of histogram bin cen-ters. Lastly, we adaptively combine the depth predictions from ERP and TP to obtain the final depth map. Extensive experiments show that our method predicts more smooth and accurate depth results while achieving favorably bet-terresults than the SOTA methods. Multimedia Material For videos, code, demo and more information, you can visit https://VLIS2022.github.io/HRDFuse/
1. Introduction
The 360° camera is becoming increasingly popular as a 360° image provides holistic sensing of a scene with a wide field of view (FoV) [1, 4, 19, 44, 48, 52]. Therefore, the ability to infer the 3D structure of a 360° camera's surroundings has sparked the research for monocular 360° depth estimation [23, 36, 43, 45]. Generally, raw 360° images are transmitted into 2D planar representations while preserving the omnidirectional information [12, 50]. Equirectangular projection (ERP) is the most commonly used projection format [38, 49] and can provide a complete view of a scene. Cubemap projection (CP) [9] projects 360° contents into six discontinuous faces of a cube to reduce the distortion; thus, pre-trained 2D convolutional neural networks (CNNs) can be applied. However, ERP images suffer from severe distortions in the polar regions, while CP patches are hampered by geometric discontinuity and limited FoV.
*Corresponding author (e-mail: [email protected])
Figure 1. (a) Our HRDFuse employs the SFA module to align the regional information in discrete TP patches and holistic information in a complete ERP image. The CDDC module is proposed to estimate ERP format depth outputs from both the ERP image and TP patches based on holistic-with-regional depth histograms. (b) Compared with OmniFusion [30], our depth predictions are more smooth and more accurate.
For this reason, some works [53, 54] have proposed distortion-aware convolution filters to tackle the ERP distortion problem for depth estimation. BiFuse [43] and UniFuse [23] explore the complementary information from the ERP image and CP patches to predict the depth map.
Recently, research has shown that it is promising to use tangent projection (TP) because TP patches have less distortion, and many pre-trained CNN models designed for perspective images can be directly applied [16]. However, there exist unavoidable overlapping areas between two neighbouring TP patches, as can be justified by the geometric relationship in Fig. 2. Therefore, directly re-projecting the results from TP patches into the ERP format is computationally complex. Accordingly, 360MonoDepth [34] predicts the patch-wise depth maps from a set of TP patches using state-of-the-art (SOTA) perspective depth estimators, which are aligned and merged to obtain an ERP format depth map. OmniFusion [30] proposes a framework leveraging CNNs and transformers to predict depth maps from the TP inputs and merges these patch-wise predictions to the ERP space based on geometric prior information to get the final depth output in ERP format. However, these methods suffer from two critical limitations: 1) geometrically merging a large number of patches is computationally heavy; 2) they ignore the holistic contextual information contained only in the ERP image and directly regress the depth value of each pixel, leading to less smooth and accurate depth estimation results.
To tackle these issues, we propose a novel framework, called HRDFuse, that subtly combines the potential of convolutional neural networks (CNNs) and transformers by collaboratively exploring the holistic contextual information from the ERP and regional structural information from the TP (see Fig. 1(a) and Fig. 3).
Compared with previous methods, our method achieves more smooth and more accurate depth estimation results while maintaining high efficiency, with three key components. Firstly, for each projection, we employ a CNN-based feature extractor to extract spatially consistent feature maps and a transformer encoder to learn the depth distribution with long-range feature dependencies. In particular, to efficiently aggregate the individual TP information into an ERP space, we propose a spatial feature alignment (SFA) module to learn a spatially aligned index map based on feature similarities between ERP and TP. With this index map, we can efficiently measure the spatial location of each TP patch in the ERP space and achieve pixel-level fusion of TP information to obtain a smooth output in ERP format. Secondly, we propose a collaborative depth distribution classification (CDDC) module to learn the holistic depth distribution histogram from the ERP image and regional depth distribution histograms from the collection of TP patches. Consequently, the pixel-wise depth values can be predicted as a linear combination of histogram bin centers. Lastly, the final result is adaptively fused from the two ERP-format depth predictions from ERP and TP.
Figure 2. Geometric relationship between TP and ERP. Two TP patches are projected from the red area and yellow area.
We conduct extensive experiments on three benchmark datasets: Stanford2D3D [2], Matterport3D [7], and 3D60 [54]. The results show that our method can achieve more smooth and more accurate depth results while favorably surpassing the existing methods by a significant margin on the 3D60 and Stanford2D3D datasets (see Fig. 1 and Tab. 1).
In summary, our main contributions are four-fold: (I) We propose HRDFuse that combines the holistic contextual information from the ERP and regional structural information from the TP. (II) We introduce the SFA module to efficiently aggregate the TP features into the ERP format, relieving the need for expensive re-projection operations. (III) We propose the CDDC module to learn the holistic-with-regional depth distributions and estimate the depth value based on the histogram bin centers.
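The final step of the CDDC module, predicting each pixel's depth as a linear combination of histogram bin centers, can be sketched as follows. Bin ranges, head shapes, and the adaptive-bin parameterization are assumptions, and the fusion of holistic and regional histograms is omitted.

```python
import torch
import torch.nn.functional as F

def depth_from_bins(bin_logits, pixel_logits, d_min=0.1, d_max=10.0):
    """bin_logits: (B, K) scores defining K adaptive bin widths;
    pixel_logits: (B, K, H, W) per-pixel scores over the K bins."""
    widths = F.softmax(bin_logits, dim=1) * (d_max - d_min)       # (B, K)
    edges = d_min + torch.cumsum(widths, dim=1)
    centers = edges - 0.5 * widths                                # bin centers c_k
    probs = F.softmax(pixel_logits, dim=1)                        # per-pixel histogram
    depth = (probs * centers[:, :, None, None]).sum(dim=1, keepdim=True)
    return depth                                                  # (B, 1, H, W)
```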
Christen_Learning_Human-to-Robot_Handovers_From_Point_Clouds_CVPR_2023
Abstract We propose the first framework to learn control policies for vision-based human-to-robot handovers, a critical task for human-robot interaction. While research in Embodied AI has made significant progress in training robot agents in simulated environments, interacting with humans remains challenging due to the difficulties of simulating humans. Fortunately, recent research has developed realistic simu-lated environments for human-to-robot handovers. Lever-aging this result, we introduce a method that is trained with a human-in-the-loop via a two-stage teacher-student frame-work that uses motion and grasp planning, reinforcement learning, and self-supervision. We show significant per-formance gains over baselines on a simulation benchmark, sim-to-sim transfer and sim-to-real transfer. Video and code are available at https://handover-sim2real.github.io .
1. Introduction
Handing over objects between humans and robots is an important task for human-robot interaction (HRI) [35]. It allows robots to assist humans in daily collaborative activities, such as helping to prepare a meal, or to exchange tools and parts with human collaborators in manufacturing settings. To complete these tasks successfully and safely, intricate coordination between human and robot is required. This is challenging, because the robot has to react to human behavior, while only having access to sparse sensory inputs such as a single camera with limited field of view. Therefore, a need for methods that solve interactive tasks such as handovers purely from vision input arises.
*This work was done during an internship at NVIDIA.
Bootstrapping robot training in the real world can be unsafe and time-consuming. Therefore, recent trends in Embodied AI have focused on training agents to act and interact in simulated (sim) environments [11, 12, 19, 43, 45, 46, 51]. With advances in rendering and physics simulation, models have been trained to map raw sensory input to action output, and can even be directly transferred from simulation to the real world [2, 42]. Many successes have been achieved particularly around the suite of tasks of robot navigation, manipulation, or a combination of both. In contrast to these areas, little progress has been made around tasks pertaining to HRI. This is largely hindered by the challenges of embedding realistic human agents in these environments, since modeling and simulating realistic humans is challenging.
Despite the challenges, an increasing number of works have attempted to embed realistic human agents in simulated environments [6, 9, 16, 36–38, 48]. Notably, a recent work has introduced a simulation environment ("HandoverSim") for human-to-robot handover (H2R) [6]. To ensure a realistic human handover motion, they use a large motion capture dataset [7] to drive the movements of a virtual human in simulation. However, despite the great potential for training robots, the work of [6] only evaluates off-the-shelf models from prior work, and has not explored any policy training with humans in the loop in their environment.
We aim to close this gap by introducing a vision-based learning framework for H2R handovers that is trained with a human-in-the-loop (see Fig. 1). In particular, we propose a novel mixed imitation learning (IL) and reinforcement learning (RL) based approach, trained by interacting with the humans in HandoverSim. Our approach draws inspiration from a recent method for learning policies for grasping static objects from point clouds [50], but proposes several key changes to address the challenges in H2R handovers. In contrast to static object grasping, where the policy only requires object information, we additionally encode human hand information in the policy's input. Also, compared to static grasping without a human, we explicitly take human collisions into account in the supervision of training. Finally, the key distinction between static object grasping and handovers is the dynamic nature of the hand and object during handover. To excel on the task, the robot needs to react to dynamic human behavior.
Prior work typically relies on open-loop motion planners [49] to generate expert demon-strations, which may result in suboptimal supervision for dynamic cases. To this end, we propose a two-stage training framework. In the first stage, we fix the humans to be sta-tionary and train an RL policy that is partially guided by ex-pert demonstrations obtained from a motion and grasp plan-ner. In the second stage, we finetune the RL policy in the original dynamic setting where the human and robot move simultaneously. Instead of relying on a planner, we propose a self-supervision scheme, where the pre-trained RL policy serves as a teacher to the downstream policy. We evaluate our method in three “worlds” (see Fig. 1). First, we evaluate on the “native” test scenes in Handover-Sim [6], which use the same backend physics simulator (Bullet [10]) as training but unseen handover motions from the simulated humans. Next, we perform sim-to-sim evalua-tion on the test scenes implemented with a different physics simulator (Isaac Gym [29]). Lastly, we investigate sim-to-real transfer by evaluating polices on a real robotic system and demonstrate the benefits of our method. We contribute: i) the first framework to train human-to-robot handover tasks from vision input with a human-in-the-loop, ii) a novel teacher-student method to train in thesetting of a jointly moving human and robot, iii) an em-pirical evaluation showing that our approach outperforms baselines on the HandoverSim benchmark, iv) transfer ex-periments indicating that our method leads to more robust sim-to-sim and sim-to-real transfer compared to baselines.
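The second-stage self-supervision described above, in which the policy pre-trained on stationary humans acts as a teacher for the policy finetuned in the dynamic setting, amounts to an imitation-style update. The sketch below is schematic; the networks, the action parameterization, and the regression loss are placeholders rather than the authors' API.

```python
import torch
import torch.nn.functional as F

def student_update(student, teacher, optimizer, point_cloud_batch):
    """One distillation step: regress the frozen teacher's actions on the
    states visited by the student in the dynamic handover setting."""
    with torch.no_grad():
        target_action = teacher(point_cloud_batch)
    pred_action = student(point_cloud_batch)
    loss = F.mse_loss(pred_action, target_action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```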
Choudhuri_Context-Aware_Relative_Object_Queries_To_Unify_Video_Instance_and_Panoptic_CVPR_2023
Abstract Object queries have emerged as a powerful abstraction to generically represent object proposals. However, their use for temporal tasks like video segmentation poses two questions: 1) How to process frames sequentially and prop-agate object queries seamlessly across frames. Using inde-pendent object queries per frame doesn’t permit tracking, and requires post-processing. 2) How to produce tempo-rally consistent, yet expressive object queries that model both appearance and position changes. Using the entire video at once doesn’t capture position changes and doesn’t scale to long videos. As one answer to both questions we propose ‘context-aware relative object queries’, which are continuously propagated frame-by-frame. They seamlessly track objects and deal with occlusion and re-appearance of objects, without post-processing. Further, we find context-aware relative object queries better capture position changes of objects in motion. We evaluate the proposed approach across three challenging tasks: video instance segmentation, multi-object tracking and segmentation, and video panoptic segmentation. Using the same approach and architecture, we match or surpass state-of-the art results on the diverse and challenging OVIS, Youtube-VIS, Cityscapes-VPS, MOTS 2020 and KITTI-MOTS data.
1. Introduction
Video instance segmentation (VIS) [56] and multi-object tracking and segmentation (MOTS) combine segmentation and tracking of object instances across frames of a given video, whereas video panoptic segmentation (VPS) additionally requires categorizing every pixel of the video semantically. These are challenging tasks because objects are occasionally partly or entirely occluded, because the appearance and position of objects change over time, and because objects may leave the camera's field of view only to re-appear at a later time. Addressing these challenges to obtain an accurate method for the aforementioned tasks that works online is important in fields like video editing, autonomous systems, and augmented as well as virtual reality, among others.
Classically, VIS or MOTS treat every frame or clip in a video independently and associate the predictions temporally via a post-processing step [1, 3, 4, 6, 12, 19, 35, 41, 50, 56, 57]. Many of these approaches are based on the object proposal generation used in classical detection methods [16, 43]. For image detection and segmentation, recently, query-vectors have been shown to encode accurate object proposals [7, 9, 10]. These query-vector-based object proposals are more flexible than classical object proposals because they are not axis-aligned but rather feature-vector based. Using these accurate query vectors for images, recent methods on VIS [22, 52] adopt the classical method of operating frame-by-frame independently, followed by a post-processing step for associating the query vectors temporally based on their similarity. It remains unclear how the query-vector-based object proposals can be seamlessly extended to the temporal domain. Some recent transformer-based works [8, 25, 49, 51] use global object queries to process entire videos at once offline, but these methods fail to scale to long videos. However, intuitively, offline approaches should be more accurate than online methods since they operate with a much larger temporal context. Surprisingly, this is not the case. The best methods on VIS [18, 22, 52] produce query vectors frame-by-frame independently, raising the question why global query vectors fail to accurately represent objects spatio-temporally. We study this carefully and observe that the query vectors are often too reliant on the static spatial positions of objects in a few frames. They hence fail to encode the position changes well. This over-reliance of query vectors on spatial positions has not been observed before in the context of video segmentation. How to address this remains an open question. It also remains unclear how the query-vector-based object proposals can be extended to the temporal domain, while keeping the processing of frames sequential.
In a first attempt to sequentially propagate object queries, the problem of multi-object tracking was studied [38, 44, 58]. These works use separate, distinct queries to represent existing object tracks and new objects. New object queries are initialized each frame. However, it remains unclear how to seamlessly unify 1) the new object queries, and 2) track queries, while avoiding heuristic post-processing.
Figure 1. An example from the KITTI-MOTS dataset showing the need for context-aware relative object queries. Object queries from Mask2Former-VIS [8] (top row) heavily rely on the spatial positions of objects, hence can't reason about the position-changes of the cars in the scene. The green car in the first frame is mistaken as the red car when the original red car leaves the scene and the green car takes its spatial position. Similarly, the yellow car is first mistaken as the green car and later as the red car. Cyan boxes in the top row indicate the identity switches. Our method (bottom row) is able to retain the identities of the cars despite their significant motion.
Different from prior work, we develop a simple approach which propagates object queries frame-by-frame while simultaneously refining queries via a transformer decoder. Intuitively, the query-vectors in the proposed approach represent all objects of interest in a video without the need to introduce new object queries every frame. Instead, queries are activated if the objects they represent appear in a frame. A continuous refinement of the query-vectors permits to adjust to gradual appearance changes. Their propagation across frames helps them carry long-term temporal information, so that they can seamlessly handle long-term occlusions or absence from the camera field-of-view. While studying why global object queries are sub-optimal at encoding position changes of objects, we observed that the use of absolute position encodings during self- and cross-attention causes the object queries to heavily rely on the object positions in a few frames, as illustrated in the top row of Fig. 1. To address this, we use relative positional encodings (inspired from [13]) instead of absolute encodings. The 'relative object queries' (queries with relative positional encodings) better encode the position changes of objects (bottom row of Fig. 1). Moreover, we use spatio-temporal context (image features from previous frames and the current frame) to modulate the object queries in the transformer decoder, making them 'context-aware.' This permits to more holistically reason about the current frame without losing spatio-temporal details.
We evaluate the proposed approach on the challenging VIS, VPS and MOTS tasks. We outperform methods that reason about an entire video at once by 5% and 11% on the challenging OVIS data using the ResNet-50 and Swin-L backbones. We perform similarly to image- or clip-based online methods which rely heavily on post-processing. We also outperform or perform close to the state-of-the-art on the Youtube-VIS, Cityscapes-VPS, MOTS 2020, and KITTI-MOTS data, demonstrating the generalizability of the approach to video segmentation tasks.
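The core idea, a fixed set of object queries that is refined by a transformer decoder and carried from frame to frame, can be sketched as below. The relative positional encodings and the multi-frame context features that make the queries 'context-aware' are omitted here, and the module sizes are assumptions.

```python
import torch

class QueryPropagation(torch.nn.Module):
    def __init__(self, num_queries=100, dim=256, num_layers=6):
        super().__init__()
        self.queries = torch.nn.Embedding(num_queries, dim)       # learned initialization
        layer = torch.nn.TransformerDecoderLayer(d_model=dim, nhead=8,
                                                 batch_first=True)
        self.decoder = torch.nn.TransformerDecoder(layer, num_layers)

    def forward(self, frame_features):
        """frame_features: list of per-frame feature maps flattened to (B, HW, dim).
        The same queries are refined and re-used across frames."""
        B = frame_features[0].shape[0]
        q = self.queries.weight.unsqueeze(0).expand(B, -1, -1)    # (B, Q, dim)
        outputs = []
        for feats in frame_features:        # online, frame-by-frame processing
            q = self.decoder(q, feats)      # refine queries and carry them forward
            outputs.append(q)               # class/mask heads would read q here
        return outputs
```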
Chen_Elastic_Aggregation_for_Federated_Optimization_CVPR_2023
Abstract Federated learning enables the privacy-preserving train-ing of neural network models using real-world data across distributed clients. FedAvg has become the preferred opti-mizer for federated learning because of its simplicity and effectiveness. FedAvg uses naïve aggregation to update the server model, interpolating client models based on the num-ber of instances used in their training. However, naïve ag-gregation suffers from client drift when the data is heteroge-nous (non-IID), leading to unstable and slow convergence. In this work, we propose a novel aggregation approach, elastic aggregation, to overcome these issues. Elastic ag-gregation interpolates client models adaptively according to parameter sensitivity, which is measured by computing how much the overall prediction function output changes when each parameter is changed. This measurement is performed in an unsupervised and online manner. Elastic aggregation reduces the magnitudes of updates to the more sensitive pa-rameters so as to prevent the server model from drifting to any one client distribution, and conversely boosts updates to the less sensitive parameters to better explore different client distributions. Empirical results on real and synthetic data as well as analytical results show that elastic aggregation leads to efficient training in both convex and non-convex settings while being fully agnostic to client heterogeneity and robust to large numbers of clients, partial participation, and imbalanced data. Finally, elastic aggregation works well with other federated optimizers and achieves significant improvements across the board.
1. Introduction
Unlike traditional centralized learning in which models are trained using large datasets stored in a central server [15], federated learning, first proposed in [40], leverages data spread across many clients to learn classification tasks distributively without explicitly sharing data [22, 26, 27, 42], thereby ensuring a basic level of privacy. Federated learning is characterized by four key features:
•Unreliable links: The links connecting the server and clients can be unreliable, and only a small subset of clients may be active at any given time.
•Massive distribution: The number of clients is typically high, but the amount of data per client is relatively small.
•Substantial heterogeneity: Client data is heterogeneous and non-IID [26], meaning that data across different clients can be sampled from varying regions of the sampling space.
•Imbalanced data: There can be significant imbalances in the amount of data available per client.
The most popular algorithm for federated learning is FedAvg [40], which tackles the communication bottleneck by performing multiple local updates on the available clients before communicating the overall change to the server. FedAvg uses naïve aggregation to interpolate client models and has shown success in certain applications. However, its performance on heterogeneous data is still an active area of research [16, 25, 33]. According to [25], training models on local data that minimize local empirical loss appears to be meaningful, but yet, doing so is fundamentally inconsistent with minimizing the global empirical loss. Client updates drive the server model away from the ideal distribution, a phenomenon known as 'client drift'. Naïve aggregation [40] is efficient in aggregating client models but does not account for distribution inconsistencies across client data or the consequent objective inconsistency. In other words, with naïve aggregation, the server model risks converging to a stationary point of a mismatched objective function which can be arbitrarily different from the true objective [53]. Prior works [24, 40, 48] attempt to overcome this issue by running fewer epochs or iterations of SGD on the devices or by stabilizing server-side updates so that the resulting models correspond to inexact minimizations and keep globally desirable properties.
*Equal contribution. †Corresponding author. This work is supported in part by NSFC Grants (62072449).
Figure 1. Illustration of naïve aggregation and elastic aggregation. The local updates of client A and client B drive the server model θ towards their individual minima (black dots in plot). Naïve aggregation simply averages the models received from clients A and B, yielding θ′ as the new server model. Although θ′ minimizes the local empirical loss of clients A and B, θ′ drifts from the ideal distribution for the server model. Elastic aggregation adjusts the gradient with respect to parameter sensitivity. Parameter θ_x is more sensitive (has a larger gradient norm), and is restricted with ζ_x < 1 to reduce the magnitude of its update. Parameter θ_y is less sensitive (has a smaller gradient norm), and is correspondingly boosted with ζ_y > 1 to better explore the parameter space. This minimizes the loss for clients A and B, while not causing the server model to drift from its ideal distribution. Hence, elastic aggregation results in a better update θ′′.
In this work, we propose a novel aggregation approach, elastic aggregation, to overcome client drift. We measure parameter sensitivity using unlabeled samples of client data, by computing the changes to the overall function output for a given change to the parameter in question, without relying on the loss. This allows our method to not only avoid requiring labeled data but importantly also pre-empt complications that could otherwise arise from the loss being at a local minimum with gradients close to zero. During the aggregation of client models, updates to the more sensitive parameters can then be reduced in magnitude, preventing the server model from drifting to any client distribution. Conversely, updates to the less sensitive parameters can be boosted to better explore different client distributions.
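A minimal sketch of this mechanism is given below: sensitivity is accumulated from how strongly the model output reacts to each parameter on unlabeled client data, and the server rescales the aggregated update by a factor ζ that is small for sensitive parameters and larger for insensitive ones. The exact forms of ζ and of the sensitivity measure in the paper may differ; this only illustrates the idea.

```python
import torch

def parameter_sensitivity(model, unlabeled_loader, device="cpu"):
    """Unsupervised sensitivity: gradient magnitude of the output norm."""
    sens = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x in unlabeled_loader:
        model.zero_grad()
        out = model(x.to(device))
        out.norm().backward()                       # no labels, no loss needed
        for n, p in model.named_parameters():
            if p.grad is not None:
                sens[n] += p.grad.detach().abs()
    return sens

def elastic_aggregate(server_params, client_params_list, weights, sens, tau=0.5):
    """Interpolate client models, scaling each parameter's update by zeta."""
    new_params = {}
    for name, theta in server_params.items():
        delta = sum(w * (c[name] - theta)
                    for w, c in zip(weights, client_params_list))
        s = sens[name]
        zeta = 1.0 + tau - s / (s.max() + 1e-12)    # <1 for sensitive, >1 otherwise
        new_params[name] = theta + zeta * delta
    return new_params
```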
Contributions. Elastic aggregation tackles distribution inconsistency across client data using the concept of parameter sensitivity and is simple to implement, requiring little hyperparameter tuning. Furthermore, parameter sensitivity is computed in an online and unsupervised manner, and thus better utilizes the unlabeled data generated by the client during runtime. Elastic aggregation is easily integrated into different federated optimizers, achieving substantial improvements over naïve aggregation. The empirical results on real and synthetic data and analytical results show that elastic aggregation leads to efficient training in both convex and non-convex settings, across all four federated learning scenarios (unreliable links, massive distribution, substantial heterogeneity, and imbalanced data).
2. Related work
Federated learning is a fast-evolving topic. The general setup involves server and client updates; each of these updates is associated with minimizing some local loss function. The server model then benefits from all client data and achieves superior performance, for tasks such as next word prediction [17, 56], emoji prediction [45], decoder models [8], vocabulary estimation [7], low latency vehicle-to-vehicle communication [49] and predictive models in health [6]. Nevertheless, federated learning raises several issues and has been the topic of much research effort, focusing on the issues of generalization and fairness [33, 42], the design of more efficient communication strategies [4, 24, 26, 27, 51, 52], the study of lower bounds [55], differential privacy guarantees [2], security [5], etc. [22]. We focus here on relevant work that specifically addresses the four federated learning characteristics noted above: massive distribution, heterogeneity, unreliable links, and imbalanced data.
Much of earlier work in this context proposes optimizing for the local risk objective with SGD [51] over mini-batches of client data, analogous to the centralized scenario, with the server then averaging the received models. FedAvg [40] is a generalization of local SGD, proposing a larger number of local SGD steps per round.
In the case of identical clients, it reduces to parallel SGD for which asymptotic convergence has been proven [51, 62]; more recently, [25, 43] analyzed the same method under the name of local SGD, also for identical functions. FedAvg inexactly solves client-side optimization, requiring the tuning of the number of epochs and the learning rate hyper-parameters in order to achieve a good accuracy-communication trade-off [30, 40]. Despite the strong empirical performance of FedAvg in IID settings, performance degrades in non-
Eisenberger_G-MSM_Unsupervised_Multi-Shape_Matching_With_Graph-Based_Affinity_Priors_CVPR_2023
Abstract We present G-MSM ( Graph-based Multi-Shape Matching), a novel unsupervised learning approach for non-rigid shape correspondence. Rather than treating a collection of input poses as an unordered set of samples, we explicitly model the underlying shape data manifold. To this end, we propose an adaptive multi-shape matching architecture that constructs an affinity graph on a given set of training shapes in a self-supervised manner. The key idea is to combine putative, pairwise correspondences by propagating maps along shortest paths in the underlying shape graph. During training, we enforce cycle-consistency between such optimal paths and the pairwise matches which enables our model to learn topology-aware shape priors. We explore different classes of shape graphs and recover specific settings, like template-based matching (star graph) or learnable ranking/sorting (TSP graph), as special cases in our framework. Finally, we demonstrate state-of-the-art performance on several recent shape correspondence benchmarks, including real-world 3D scan meshes with topological noise and challenging inter-class pairs.1
1. Introduction
Shape matching of non-rigid object categories is a central problem in 3D computer vision and graphics that has been studied extensively over the last few years. Especially in recent times, there is a growing demand for such algorithms as 3D reconstruction techniques and affordable scanning devices become increasingly powerful and broadly available. Classical shape correspondence approaches devise axiomatic algorithms that make specific assumptions about the resulting maps, such as near-isometry, area preservation, approximate rigidity, bounded distortion, or commutativity with the intrinsic Laplacian. In contrast, real-world scan meshes are often subject to various types of noise, including topological changes [16, 33], partial views [2], general non-isometric deformations [17, 65], objects in clutter [12], and varying data representations [57]. In this work, we address several of the aforementioned challenges and demonstrate that our proposed method achieves improved stability for a number of 3D scan mesh datasets.
†Currently at NVIDIA. ¹Our implementation is available under the following link: https://github.com/marvin-eisenberger/gmsm-matching
Figure 1. For a given collection of 3D meshes {X(i) | 1 ≤ i ≤ N}, (i) our method constructs, in a fully unsupervised manner, a shape graph G which approximates the underlying shape data manifold. (ii) Its edge weights (affinity scores) are derived from a putative pairwise correspondence loss signal. (iii) During training, we enforce cycle-consistency by propagating maps along shortest paths in the graph G. As shown for the sample pair above (X(1), X(2)), the resulting multi-matching Π(1,3) ◦ Π(3,2) is significantly more accurate than the pairwise map Π(1,2).
The majority of existing deep learning methods for shape matching [2, 15, 20, 21, 24, 39, 51, 56] treat a given set of meshes as an unstructured collection of poses. During training, random pairs of shapes are sampled for which a neural network is queried and a pairwise matching loss is minimized. While this approach is straightforward, it often fails to recognize commonalities and context-dependent patterns
Concrete approaches often assume a specific mesh resolution and nearly-isometric poses [10], or require an additional fine-tuning optimization at test time [26, Sec. 5]. Rather than interpreting a given training set as a random, unstructured collection of shapes, our approach explicitly models the underlying shape manifold. To this end, we define an affinity graph G on the set of input shapes whose edge weights (i.e., affinity scores) are informed by the outputs of a pairwise matching module. We then devise a novel adaptive multi-matching architecture that propagates matches along shortest paths in the underlying shape graph G. The resulting maps are topology-aware, i.e., informed by geometries from the whole shape collection. An example is shown in Figure 1, where the multi-matching Π(1,3) ◦ Π(3,2) obtained by our approach is significantly more accurate than the naive, pairwise map Π(1,2). During training, we promote cycle-consistency of shortest paths in the shape graph. In summary, our contributions are as follows:
1. Introduce the notion of an edge-weighted, undirected shape graph G to approximate the underlying data manifold for an unordered collection of 3D meshes.
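The multi-matching along shortest paths can be pictured with a small sketch: given the shape graph and pairwise soft correspondence matrices, the map between two shapes is obtained by composing the pairwise maps along the lowest-cost path. Edge weights, matrix conventions, and the cycle-consistency comparison below are illustrative assumptions, not the paper's implementation.

```python
import networkx as nx
import numpy as np

def multi_match(G, Pi, src, dst):
    """Pi[a][b] is an (n_a x n_b) soft correspondence matrix from shape a to
    shape b; G stores pairwise affinity costs as the edge attribute 'weight'."""
    path = nx.shortest_path(G, source=src, target=dst, weight="weight")
    composed = None
    for a, b in zip(path[:-1], path[1:]):
        composed = Pi[a][b] if composed is None else composed @ Pi[a][b]
    return composed

# During training, a cycle-consistency term can penalize the discrepancy
# between the direct map and the path-composed map, e.g.
#   np.linalg.norm(Pi[i][j] - multi_match(G, Pi, i, j))
```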
Gumeli_ObjectMatch_Robust_Registration_Using_Canonical_Object_Correspondences_CVPR_2023
Abstract
We present ObjectMatch¹, a semantic and object-centric camera pose estimator for RGB-D SLAM pipelines. Modern camera pose estimators rely on direct correspondences of overlapping regions between frames; however, they cannot align camera frames with little or no overlap. In this work, we propose to leverage indirect correspondences obtained via semantic object identification. For instance, when an object is seen from the front in one frame and from the back in another frame, we can provide additional pose constraints through canonical object correspondences. We first propose a neural network to predict such correspondences on a per-pixel level, which we then combine in our energy formulation with state-of-the-art keypoint matching solved with a joint Gauss-Newton optimization. In a pairwise setting, our method improves registration recall of state-of-the-art feature matching, including from 24% to 45% in pairs with 10% or less inter-frame overlap. In registering RGB-D sequences, our method outperforms cutting-edge SLAM baselines in challenging, low-frame-rate scenarios, achieving more than 35% reduction in trajectory error in multiple scenes.
¹https://cangumeli.github.io/ObjectMatch/
1. Introduction
RGB-D registration and 3D SLAM have been fundamental tasks in computer vision, with significant study enabling many applications in mixed reality, robotics, and content creation. Central to both state-of-the-art traditional and learning-based camera pose estimation is establishing correspondences between points in input frames. However, correspondence estimation remains quite challenging when there is little or no overlap between frames.
In contrast, humans can easily localize across these challenging scenarios by leveraging additional semantic knowledge, in particular, by further localizing at the level of objects and identifying matching objects between views. For instance, when observing a chair from the back and the side (e.g., in Figure 1), view overlap is minimal (or there is even no view overlap), resulting in failed registration from keypoint matching. However, the semantic knowledge of the chair and its object pose nonetheless enables humans to estimate the poses from which the front and side views were taken. Thus, we propose to take a new perspective on camera pose estimation and imbue camera registration with awareness of such semantic correspondences between objects for robust performance in these challenging scenarios.
To this end, we propose ObjectMatch, a new paradigm for camera pose estimation leveraging canonical object correspondences in tandem with local keypoint correspondences between views. This enables significantly more robust registration under a variety of challenging scenarios, including low view overlap. For a sequence of input frames, ObjectMatch learns to semantically identify objects across frames, enabling a compact, global parameterization of 9-DoF object poses. Object correspondences are established through predicting normalized object coordinates [43], dense correspondences from object pixels to a canonically oriented space for each object.
We then for-mulate a joint camera and object pose optimization that constrains object correspondences indirectly, operating ir-respective of the shared visibility of image regions. Our approach is complementary to state-of-the-art SLAM meth-ods, and we leverage our energy formulation to comple-ment state-of-the-art keypoint matching [10, 11, 36] in a joint Gauss-Newton optimization. Our method outperforms strong baselines in both pair-wise registration and registration of RGB-D frame se-quences. In pairwise registration of challenging Scan-Net [9] image pairs, we improve pose recall from 24% to 45% when the overlap is below 10%. On sequence registra-tion of room-scale RGB-D scenes, our method outperforms various strong baselines in difficult, low-frame-rate settings in several TUM-RGBD [39] and ScanNet [9] scenes, re-ducing the trajectory error by more than 35% in multiple challenging scenes. To sum up, our main contributions include: • An object-centric camera pose estimator that can han-dle low-overlap frame sets via indirect, canonical ob-ject correspondences established with predicted dense, per-pixel normalized object coordinates. • A joint energy formulation that leverages semantic ob-ject identification and dense, normalized object coor-dinates corresponding to canonical object geometries. • Our semantic grounding of object correspondences enables significantly more robust registration in low-overlap and low-frame-rate cases. ObjectMatch im-proves over state of the art from 24% to 45% registra-tion recall of ≤10% overlap frame pairs and achieves over 35% trajectory error reduction in several chal-lenging sequences. 2. Related Work RGB-D Registration and SLAM. In recent years, there have been many advances in indoor RGB-D reconstruction. Earlier RGB-D fusion approaches focus on frame-to-model camera tracking [20, 29]. To handle the loop closures bet-ter, more recent SLAM systems introduce explicit strategies or global optimization methods for handling loop closures through global optimization [5,10,28,37,44] to fix tracking errors. More recently, deep learning techniques have been applied to registration and SLAM scenarios, with methods ranging from geometric point cloud registration [6, 19, 31] as well as neural field based SLAM techniques [18, 40, 48].Despite all the successes in RGB-D SLAM and registra-tion, the task is still challenging since incomplete loop clo-sures observed via low-overlap frames cannot be handled, and most SLAM methods require a very high overlap be-tween consecutive frames to track cameras accurately. Feature Matching. Modern RGB(-D) camera pose es-timators rely on a feature-matching backbone. Classical global registration techniques [5,46] use FPFH features [34] over point cloud fragments. On the other hand, many global RGB-D SLAM techniques rely on sparse color fea-tures [10, 28]. While being successful in many scenarios, conventional feature matching often fails when the inter-frame overlap is low. Therefore, deep learning techniques have been utilized for predicting overlapping regions based on geometry or color. On the geometric side, Deep Global Registration [6] predicts overlapping point features using nearest neighbor search over learned geometric features [7]. Methods such as PREDATOR [19] and Geometric Trans-former [31] use attention mechanisms to target overlapping regions for registration. In the domain of color features, Su-perPoint and SuperGlue [11,36] build a formative approach in GNN-based keypoint feature matching. 
Methods such as LoFTR [42] introduce more dense and accurate sub-pixel level matching. Despite being very successful in handling wide-baseline scenarios, learned feature matching still requires a significant amount of shared visibility and geometric overlap.

Camera Pose Estimation with Semantic Cues. Several methods have been developed to incorporate semantic priors to improve low-overlap registration. PlaneMatch [38] proposed coplanarity priors for handling loop closures and low-overlap cases. Our method instead leverages object-centric constraints, exploiting the power of semantic object recognition. Another related direction is feature hallucination by leveraging image and object semantics. NeurHal [13] focuses on correspondence hallucination using image inpainting and outpainting, formulating a PnP optimization over hallucinated matches. Virtual Correspondence (VC) [24] introduces a human-centric approach that leverages hallucinated object volumes to form virtual epipolar constraints between frames. In contrast, we use indirect instead of direct correspondences that do not require hallucinated object volumes or image regions. Furthermore, our method works on a diverse set of furniture categories while VC focuses on humans. Pioneered by SLAM++ [35], there is also a rich literature of object-centric SLAM solutions, e.g., [26, 41]. Such SLAM methods leverage local, per-frame poses of objects to establish constraints; instead, we develop a global object pose optimization that is more robust against occluded viewpoints.

Figure 2. Overview of our approach to incorporate object correspondence grounding in global pose estimation. From a set of input RGB-D frames, ObjectMatch predicts object instances for each frame with dense normalized object correspondences. The predicted object instances are used to identify objects across frames, forming indirect object correspondences. We combine object correspondences with SuperGlue [11, 36] keypoint matches in a joint energy optimization that yields both camera and object poses in a global registration.

Object Pose Estimation using Normalized Object Coordinates. 3D object pose estimation has been widely studied from RGB and RGB-D inputs. Normalized Object Coordinate Space (NOCS) [43] was proposed to form dense correspondences from input RGB-D frames to canonical object geometries, enabling better generalization than direct regression of object poses. End2End SOCs [2] formulated a NOC-based approach for CAD retrieval and alignment to 3D scans, using a differentiable Procrustes algorithm. To enable CAD alignment to single RGB images, ROCA [14] leveraged NOC space in combination with predicted depths, formulating a robust differentiable Procrustes optimization [14]. Seeing Behind Objects [27] further leveraged NOC correspondences both to obtain local object poses and object completion for re-identification in RGB-D multi-object tracking. Wide Disparity Re-localization [26] uses the NOC predictions from [43] to construct an object-level map for re-localization in SLAM. In contrast to these approaches that focus on individual object poses, we use NOC correspondences directly in a multi-frame, global camera and object pose optimization.
3. Method

3.1. Problem Setup Given $K$ RGB-D frames $\{(I^c_1, I^d_1), \ldots, (I^c_K, I^d_K)\}$, we aim to optimize their 6-DoF camera poses $T_c = \{T_2, \ldots, T_K\}$, assuming the first frame is the reference, i.e., $T_1 = I$. A 6-DoF camera pose $T_i$ is represented by Euler angles $\gamma$ and translations $t$, $T_i = (\gamma_x, \gamma_y, \gamma_z, t_x, t_y, t_z)$. We also parameterize global, 9-DoF object poses, $\bar{T}_o = (\gamma_x, \gamma_y, \gamma_z, t_x, t_y, t_z, s_x, s_y, s_z)$, comprising 6-DoF angles and translations, and 3-DoF anisotropic scales $s$. We formulate a joint energy optimization of the form:

$T^*, \bar{T}^* = \arg\min_{T, \bar{T}} \big( E_c(T, M) + E_o(T, \bar{T}, N) \big)$  (1)

where $M$ are inter-frame feature matches and $N$ are intra-frame canonical object correspondences established with normalized object coordinates (NOCs) that densely map to the canonical space of an object in $[-0.5, 0.5]^3$. $E_c$ is the feature-matching energy function, and $E_o$ is our object-centric energy function. Since robust feature matching and optimization are readily available off the shelf [10, 11, 31, 36], our method focuses on building the object-centric energy $E_o$. To this end, we need two function approximators, realized via deep neural networks: (1) a learned model for object recognition and NOC prediction, and (2) a learned model for object identification. The realization of these networks is described in Sections 3.2 and 3.3, respectively, and the energy function of Eq. 1 in Section 3.4. An overview of our approach is visualized in Figure 2.

Figure 3. Multi-modal object recognition and NOC prediction. Our ResNet-FPN [16, 21] backbone takes color, reversed jet colored depth [12], and 2x downsampled colored 3D depth normals, and produces multi-scale features by averaging different input encodings. From the obtained features, our method recognizes objects and predicts NOCs for each object, based on a Mask-RCNN [15]-style prediction.

3.2. Predicting Object Correspondences To obtain object constraints in our final optimization, we recognize objects via object detection and instance segmentation, and predict object correspondences as dense NOCs [43] for each object, as shown in Figure 3. We build on a Mask-RCNN [15] with ResNet50-FPN [16, 21] backbone, pre-trained on ImageNet [33] and COCO [22]. To input the depth of the RGB-D frames, we propose a modified, multi-modal ResNet50-FPN [12, 16, 21] backbone. Our backbone takes 480x640 color, 480x640 reverse jet-colored depth, and 240x320 colored depth normals as input. We average the resulting FPN features to a single feature pyramid:

$G = \dfrac{\mathrm{FPN}_c(I^c) + \mathrm{FPN}_d(I^d) + U(\mathrm{FPN}_n(I^n))}{3}$,  (2)

where $I^c, I^d, I^n$ are color, depth, and normal images, $\mathrm{FPN}_c, \mathrm{FPN}_d, \mathrm{FPN}_n$ are the corresponding ResNet50-FPN backbones, and $U$ is an upsampling operator to match the normal features' spatial size with the others. This enables fine-tuning the pre-trained bounding box, class, and instance segmentation heads of Mask-RCNN [15], while also exploiting depth information. We use symmetrically structured FPNs, all pre-trained on ImageNet and COCO as initialization, but without any parameter sharing. To obtain object correspondences, we establish mappings from detected objects to their canonical spaces, in the form of dense NOCs. That is, for each pixel in the object's instance mask, we predict 3D points $P^{noc}_o$ using a fully convolutional network:

$P^{noc}_o = \mathrm{FCN}(G_o), \quad p \in [-0.5, 0.5]^3 \;\; \forall p \in P^{noc}_o$.  (3)

We optimize an $\ell_1$ loss $L_{noc}$ using ground-truth NOCs $P^{noc\text{-}gt}$:

$L_{noc} = \sum_o \sum_i \| P^{noc}_{o,i} - P^{noc\text{-}gt}_{o} \|_1$.  (4)
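To make the masked NOC regression of Eqs. (3)–(4) concrete before the symmetry-aware variant discussed next, here is a minimal PyTorch sketch of a per-pixel $\ell_1$ NOC loss restricted to instance masks; the tensor names and shapes are illustrative assumptions rather than the authors' implementation.

```python
import torch

def masked_noc_l1(pred_nocs, gt_nocs, instance_masks):
    """Per-pixel L1 loss between predicted and ground-truth NOCs,
    averaged over the pixels inside each object's instance mask.

    pred_nocs:      (N, 3, H, W) predicted coordinates in [-0.5, 0.5]^3
    gt_nocs:        (N, 3, H, W) ground-truth canonical coordinates
    instance_masks: (N, 1, H, W) binary masks of the detected objects
    """
    diff = (pred_nocs - gt_nocs).abs() * instance_masks
    # Normalize by the number of foreground values (3 channels per masked pixel).
    denom = (instance_masks.sum() * 3.0).clamp(min=1.0)
    return diff.sum() / denom
```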
Since symmetric objects [1, 43] induce ambiguities in NOCs (e.g., a round table), we classify the symmetry type of objects (round, square, rectangle, non-symmetric), $c_{sym} = \mathrm{MLP}_{sym}(G_o)$, optimized using a cross-entropy loss $L_{sym}$. We also make $L_{noc}$ symmetry aware, taking the minimum $\ell_1$ difference over the set of correct NOCs [43]. We use non-symmetric objects during inference to avoid inconsistent NOCs across views.

In addition to NOCs, we also regress anisotropic 3D object scales $s_o$ using a fully connected network, $s_o = \mathrm{MLP}_{scale}(G_o)$, and optimize $s_o$ with an $\ell_1$ loss $L_{scale}$. The object scale enables holistic object pose understanding within each frame and helps to filter potential object matches across views using scale consistency.

Finally, to make our NOC-depth correspondences least-squares friendly for our desired optimization, we also introduce a per-frame differentiable Procrustes objective $L_{proc}$, using a differentiable Kabsch solver [32] to obtain local object rotations and translations:

$R^*_o, t^*_o = \arg\min_{R_o, t_o} \Big( \sum_i \| R_o (P^{noc}_{o,i} \odot s_o) + t_o - P^{depth}_{o,i} \|_2^2 \Big)$  (5)

for each object $o$, where $P^{depth}_o$ are back-projected input RGB-D depths corresponding to the object's predicted instance mask in its region of interest, and $\odot$ denotes element-wise multiplication. We train the local object poses with

$L_{proc} = w_r \sum_o \| R^*_o - R^{gt}_o \|_1 + w_t \sum_o \| t^*_o - t^{gt}_o \|_2^2$.  (6)

Our full loss used for training is then

$L = L_m + w_n L_{noc} + w_s L_{scale} + w_{sym} L_{sym} + w_p L_{proc}$,  (7)

where $L_m$ is the sum of Mask-RCNN losses [15] and the $w$ are scalar weights balancing the additional losses.

Implementation. We use the augmented 400k ScanNet train image split for training [9, 14], with Scan2CAD labels of the most common 9 furniture categories following [1, 14, 25]. We train a standard Detectron2 Mask-RCNN pipeline [15, 45] with 1k warm-up iterations, 0.003 base learning rate, and learning rate decays at 60k and 100k iterations, with a total of 120k training iterations.

3.3. Matching Object Instances In our global pose optimization formulation, the relation between frames is formed via a global identification of objects across frames. To enable such identification without any heuristic spatial assumptions, we train a simple metric learner that ranks object semantic similarities across frames. Our object matching is shown in Figure 4.

Figure 4. Our foreground/background metric-learning encoder for object matching, inspired by re-OBJ [3]. Using the detected and segmented objects from the model in Section 3.2, we encode foreground and background regions of objects, using light-weight, multi-modal ResNet18 encoders on the RGB-D features.

We characterize objects for matching by their respective RGB-D features in the object instance mask, in addition to the global context in the 5-times upscaled object bounding box. All inputs are resized to 224x224 and input to a lightweight ResNet18 [16] backbone pre-trained on ImageNet [33]. Similar to object detection, we employ two backbones for color and colored depth inputs. We omit the normal input in object matching, as it empirically did not provide any benefit. For each input modality, we train two ResNet18 backbones for masked and inverse-masked crops, namely foreground (object) and background (context) encodings,

$e = \mathrm{MLP}([\mathrm{RN}_c(F_c), \mathrm{RN}_c(B_c)] + [\mathrm{RN}_d(F_d), \mathrm{RN}_d(B_d)])$  (8)

where $\mathrm{RN}$ are ResNet18s, $\mathrm{MLP}$ is a fully connected network, $F, B$ are foreground and background crops for color ($c$) and depth ($d$), and $e$ is the object embedding vector.
Given an anchor object embedding $e_a$, a correctly matching object embedding $e_p$, and a negative example $e_n$, we train our metric learning using a triplet margin loss [17]:

$L_{tri} = \max(d(e_a, e_p) - d(e_a, e_n) + 1.0, \; 0)$,  (9)

where $d$ is the $\ell_2$ distance. We only consider triplets from the same category, as the object recognition pipeline provides classification. At inference time, we match the best instances using the Hungarian algorithm and apply a threshold $d(e_i, e_j) < \alpha$ for the matching object pairs from the same class. This semantic matching can be scaled to multiple frames via, e.g., object tracking with re-identification, or in our case, a simple pose graph optimization over frame pairs.

Implementation. We implement the identification network using PyTorch [30] and train it on ScanNet data [9]. We train the network for 100k iterations with a batch size of 8 using a momentum optimizer with a learning rate of 1e-4 and momentum 0.9.

3.4. Energy Optimization We realize the joint energy minimization in Eq. 1 using keypoint and NOC constraints. Using the predicted NOC constraints with back-projected depths, we can re-write the $E_o$ in Eq. 1 as:

$E_o(T, \bar{T}, P^{depth}, P^{noc}) = \sum_o \sum_c \sum_k \| T_c P^{depth}_{o,c,k} - \bar{T}_o P^{noc}_{o,c,k} \|_2^2$  (10)

where $\bar{T}$ and $T$ represent the 9-DoF object and 6-DoF camera transformations, respectively, and the subscripts $o, c, k$ correspond to objects, cameras (frames), and points (pixels) within the frames, respectively. Here, object indices are determined by the object identification, and global object poses $\bar{T}$ indirectly constrain frames to each other without any explicit inter-frame feature matching.

In many cases, object constraints alone may not be sufficient to optimize the camera pose (e.g., frames may not share matching objects). However, our object-based optimization is fully complementary to classical feature matching, and we thus formulate our objective in combination with feature-matching constraints $E_c$:

$E_c(T, P^{depth}) = \sum_i \sum_j \sum_{m,n} \| T_i P^{depth}_{i,m} - T_j P^{depth}_{j,n} \|_2^2$  (11)

Our method is agnostic to the feature matcher, whether classical or learning-based. In this work, we experiment with two different keypoint matching systems to realize $E_c$, namely SuperGlue [11, 36] and Geometric Transformer [31], both offering state-of-the-art indoor feature matching. With both object and feature-matching constraints, we realize the desired joint energy formulation as

$T^*, \bar{T}^* = \arg\min_{T, \bar{T}} (w_c E_c + w_o E_o)$,  (12)

where $w_c, w_o$ weight the feature-matching and object energies. Since non-linear least squares problems can be sensitive to outliers, we additionally employ outlier removal. Similar to BundleFusion [10], we apply Kabsch filtering to both intra-frame and keypoint constraints, using the matching depth-NOC and depth-depth correspondences, respectively. That is, we iteratively solve an orthogonal Procrustes problem and only keep the correspondences that have lower optimization errors. We use a liberal 20cm threshold to handle wide-baseline frames. Objects and inter-frame matches are rejected if the number of NOCs is < 15 and the number of keypoints is < 5, respectively.
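Both the per-frame Procrustes objective in Eq. (5) and the Kabsch filtering used for outlier removal rest on a closed-form rigid alignment between corresponding 3D point sets. The sketch below shows a standard SVD-based Kabsch solve in PyTorch; it is a generic illustration under assumed inputs (e.g., scaled NOC points vs. back-projected depths of one object), not the specific differentiable solver [32] used in the paper.

```python
import torch

def kabsch(src, dst):
    """Least-squares rigid alignment: find R, t such that dst ≈ R @ src + t.

    src, dst: (N, 3) corresponding 3D points (for example, scaled NOC
    points and back-projected depth points of a single object).
    """
    src_mean, dst_mean = src.mean(dim=0), dst.mean(dim=0)
    src_c, dst_c = src - src_mean, dst - dst_mean

    H = src_c.T @ dst_c                       # 3x3 cross-covariance matrix
    U, S, Vt = torch.linalg.svd(H)
    d = torch.sign(torch.det(Vt.T @ U.T))     # correct for reflections
    D = torch.diag(torch.stack([torch.ones_like(d), torch.ones_like(d), d]))

    R = Vt.T @ D @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

Iterating such a solve and discarding correspondences whose residuals exceed a threshold (the paper uses a liberal 20 cm) gives the kind of outlier filtering described above.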
Bai_High-Fidelity_Facial_Avatar_Reconstruction_From_Monocular_Video_With_Generative_Priors_CVPR_2023
Abstract High-fidelity facial avatar reconstruction from a monocular video is a significant research problem in computer graphics and computer vision. Recently, Neural Radiance Field (NeRF) has shown impressive novel view rendering results and has been considered for facial avatar reconstruction. However, the complex facial dynamics and missing 3D information in monocular videos raise significant challenges for faithful facial reconstruction. In this work, we propose a new method for NeRF-based facial avatar reconstruction that utilizes a 3D-aware generative prior. Different from existing works that depend on a conditional deformation field for dynamic modeling, we propose to learn a personalized generative prior, which is formulated as a local and low-dimensional subspace in the latent space of a 3D-GAN. We propose an efficient method to construct the personalized generative prior based on a small set of facial images of a given individual. After learning, it allows for photo-realistic rendering with novel views, and face reenactment can be realized by performing navigation in the latent space. Our proposed method is applicable to different driving signals, including RGB images, 3DMM coefficients, and audio. Compared with existing works, we obtain superior novel view synthesis results and faithful face reenactment performance. The code is available at https://github.com/bbaaii/HFA-GP.
1. Introduction Reconstructing high-fidelity controllable 3D faces from a monocular video is significant in computer graphics and computer vision and has great potential in digital human, video conferencing, and AR/VR applications. Yet it is very challenging due to the complex facial dynamics and missing 3D information in monocular videos. Recently, Neural Radiance Field (NeRF) [30] has shown impressive quality for novel view synthesis. The key idea of NeRF is to encode color and density as a function of spa-tial location and viewing direction by a neural network and adopt volume rendering techniques for novel view synthe-sis. Its photo-realistic rendering ability has sparked great interest in facial avatar reconstruction. Deformable neural radiance fields have been proposed to handle the non-rigidly deforming faces captured in monocular videos. For exam-ple, the works of [34, 35] proposed to learn a conditional deformation field to capture the non-rigidly deformation of each frame. After training, they can provide novel view syn-thesis for the training frames. However, they don’t support facial editing and cannot be used for face reenactment. The controllability of facial avatars is indispensable for many downstream applications, such as talking head syn-thesis. The core idea of existing works is to learn a dy-namic neural radiance field conditioned on specific driven signals. For example, 3D morphable face model (3DMM) [3] is introduced as guidance in NeRF-based facial avatar reconstruction [2, 11, 13]. The work of [11] learns a dy-namic NeRF that is directly conditioned on the pose and ex-pression coefficients estimated by 3DMM. In RigNeRF [2], the deformation field is a combination of a pre-calculated 3DMM deformation field prior and a learned residual condi-tioned on the pose and expression coefficients. After mod-eling, one can use 3DMM coefficients for face reenactment. In addition to the explicit 3DMM coefficients, audio-driven dynamic NeRF has also been studied [17, 41]. Recently, AD-NeRF [17] has been proposed to optimize a dynamic neural radiance field by augmenting the input with audio features. DFRF [41] further considers the few-shot audio-driven talking head synthesis scenario. These works di-rectly learn a conditional deformation field and scene repre-sentation in the continuous 5D space. However, recovering 3D information from monocular videos is an ill-posed prob-lem. It is very challenging to obtain a high-fidelity facial avatar. To alleviate the aforementioned challenges, we propose to adopt 3D generative prior. Recently, 3D-aware generative adversarial networks (3D-GAN) [5, 6, 16, 33, 43] are pro-posed for unsupervised generation of 3D scenes. By lever-aging the state-of-the-art 2D CNN generator [22] and neuralvolume rendering, the work of [5] can generate high-quality multi-view-consistent images. The latent space of 3D-GAN constitutes a rich 3D-aware generative prior, which moti-vates us to explore latent space inversion and navigation for 3D facial avatar reconstruction from monocular videos. However, 3D-GAN is usually trained on the dataset with a large number of identities, such as FFHQ [21], resulting in a generic generative prior. It is inefficient for personalized fa-cial reconstruction and reenactment, which requires faithful maintenance of personalized characteristics. In this work, we propose to learn a personalized 3D-aware generative prior to reconstruct multi-view-consistent facial images of that individual faithfully. 
Considering that facial variations share common characteristics, we learn a local and low-dimensional personalized subspace in the la-tent space of 3D-GAN. Specifically, we assign a group of learnable basis vectors for the individual. Each frame is sent to an encoder to regress a weight coefficient, which is used to form a linear combination of the basis. The resulting latent code is sent to a 3D-aware generator for multi-view-consistent rendering. We show that both the personalized basis and encoder can be well modeled given a small set of personalized facial images. After training, one can di-rectly project the testing frames with different facial expres-sions onto the learned personalized latent space to obtain a high-quality 3D consistent reconstruction. It is worth not-ing that the input modality is not limited to RGB frames. We demonstrate with a simple modification. The encoder can be trained with different signals, such as 3DMM ex-pression coefficients or audio features, enabling 3DMM or audio-driven face reenactment. To verify its effectiveness, we conduct experiments with different input modalities, in-cluding monocular RGB videos, 3DMM coefficients, and audio. The comparison to state-of-the-art methods demon-strates our superior 3D consistent reconstruction and faith-fully face reenactment performance. Our main contributions are four-fold: 1) we propose to utilize 3D-aware generative prior for facial avatar recon-struction; 2) we propose an efficient method to learn a lo-cal and low-dimensional subspace to maintain personalized characteristics faithfully; 3) we develop 3DMM and audio-driven face reenactment by latent space navigation; 4) we conduct complementary experimental studies and obtain su-perior facial reconstruction and reenactment performance.
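A minimal sketch of the personalized subspace described above: a set of learnable basis vectors is shared by the individual, and a per-frame encoder regresses combination weights whose linear combination forms the latent code passed to a pretrained 3D-aware generator. The module names, dimensions, and the plain linear combination are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class PersonalizedLatentBasis(nn.Module):
    """Projects each frame onto a low-dimensional, person-specific
    subspace of a 3D-GAN latent space (a sketch with assumed sizes)."""

    def __init__(self, encoder, feat_dim, num_basis=20, latent_dim=512):
        super().__init__()
        self.encoder = encoder                         # frame -> (B, feat_dim)
        self.basis = nn.Parameter(0.01 * torch.randn(num_basis, latent_dim))
        self.to_weights = nn.Linear(feat_dim, num_basis)

    def forward(self, frames):
        feats = self.encoder(frames)                   # (B, feat_dim)
        weights = self.to_weights(feats)               # (B, num_basis) combination weights
        latents = weights @ self.basis                 # (B, latent_dim) personalized codes
        return latents                                 # fed to a frozen 3D-aware generator
```

Because only the weight regressor sees the driving signal, swapping the encoder input for 3DMM expression coefficients or audio features would give the 3DMM- and audio-driven reenactment variants.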
Chen_Mixed_Autoencoder_for_Self-Supervised_Visual_Representation_Learning_CVPR_2023
Abstract Masked Autoencoder (MAE) has demonstrated superior performance on various vision tasks via randomly masking image patches and reconstruction. However, effective data augmentation strategies for MAE remain an open question, different from contrastive learning, where augmentation serves as one of the most important components. This paper studies the prevailing mixing augmentation for MAE. We first demonstrate that naïve mixing will in contrast degenerate model performance due to the increase of mutual information (MI). To address this, we propose homologous recognition, an auxiliary pretext task, not only to alleviate the MI increase by explicitly requiring each patch to recognize homologous patches, but also to perform object-aware self-supervised pre-training for better downstream dense perception performance. With extensive experiments, we demonstrate that our proposed Mixed Autoencoder (MixedAE) achieves state-of-the-art transfer results among masked image modeling (MIM) augmentations on different downstream tasks with significant efficiency. Specifically, our MixedAE outperforms MAE by +0.3% accuracy, +1.7 mIoU and +0.9 AP on ImageNet-1K, ADE20K and COCO respectively with a standard ViT-Base. Moreover, MixedAE surpasses iBOT, a strong MIM method combined with instance discrimination, while accelerating training by 2×. To our best knowledge, this is the very first work to consider mixing for MIM from the perspective of pretext task design. Code will be made available.
1. Introduction Self-supervised learning (SSL) has become one of the most popular pre-training paradigms due to its independence from human annotation. Previous literature mainly focuses on handcrafted pretext task design [13, 19, 36] and instance discrimination [6, 10], while with the development of the Vision Transformer [15], masked image modeling (MIM), deeply motivated by masked language modeling [12], has started to demonstrate superior effectiveness by first masking some patches of the input images and then reconstructing the masked patches from visible ones by predicting certain targets generated by the masked patches. In order to complete reconstruction, the encoder is expected to generate highly semantic representations which can be better transferred to downstream tasks [21, 29, 30, 48] for superior performance.

Figure 1. Fine-tuning accuracy on ImageNet-1K. Our MixedAE achieves the best trade-off between pre-training overhead and transfer performance. Specifically, MixedAE surpasses MAE [22] consistently with only 3% extra overhead, while outperforming the strong iBOT [49] with only 53.4% of its computation overhead. See more detailed comparisons in Tab. 1. ID stands for instance discrimination, while MIM represents masked image modeling.

Existing MIM works mainly concentrate on the design of the reconstruction targets (e.g., visual tokenizers [3, 14], pixels [22, 44], graphical features [41] and instance discrimination [2, 16, 49]) and masking strategies (e.g., random [3, 22], attention-guided [25] and sample-dependent [39]). See more detailed discussions in Sec. 2. Despite the superior performance, we observe that the input augmentations for MIM have been seldom explored. Specifically, adding color jittering, an essential augmentation technique of contrastive learning [8], to MAE [22] even degrades the transfer results, suggesting MIM might have a different preference for data augmentations, and the effective data augmentation strategies for MIM are still an open question.

In this paper, we explore the usage of image mixing, a commonly used technique in both supervised [46, 47] and contrastive learning [38, 45], with MAE [22]. We start by constructing a simple baseline to adopt mixing with MAE directly, which, unlike in supervised and contrastive learning, would instead ease the reconstruction pretext by increasing the mutual information between the model input and reconstruction target due to the usage of image mixing with global self-attention, as proved in Sec. 3.1. To address this issue, we propose homologous recognition, an auxiliary pretext task to enforce each patch to recognize homologous patches explicitly according to attention distributions before reconstruction, and build our Mixed Autoencoder network (MixedAE) in Sec. 3.3. Moreover, we demonstrate that our simple yet effective method can not only achieve significant performance improvement, but also conduct object-aware SSL pre-training without any specifically designed modules for better downstream dense perception results in Sec. 3.4.
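As a concrete illustration of the mixing discussed above, the sketch below mixes a fraction r ≤ 0.5 of one image's patches into another and records a per-patch source map — exactly the kind of target an auxiliary homologous-recognition head could be supervised with. The patch size, default ratio, and implementation details are assumptions, not the paper's code.

```python
import torch

def mix_patches(img_a, img_b, patch_size=16, ratio=0.25, generator=None):
    """Replace a random `ratio` of img_a's patches with patches of img_b.

    Returns the mixed image and a per-patch source map (0 = from img_a,
    1 = from img_b) that a homologous-recognition head could predict.
    img_a, img_b: (C, H, W) tensors with H, W divisible by patch_size.
    """
    C, H, W = img_a.shape
    ph, pw = H // patch_size, W // patch_size
    num_patches = ph * pw
    num_mixed = int(ratio * num_patches)

    # Randomly choose which patch positions come from img_b.
    perm = torch.randperm(num_patches, generator=generator)
    source = torch.zeros(num_patches, dtype=torch.long)
    source[perm[:num_mixed]] = 1

    def patchify(x):
        x = x.reshape(C, ph, patch_size, pw, patch_size)
        return x.permute(1, 3, 0, 2, 4).reshape(num_patches, C, patch_size, patch_size)

    pa, pb = patchify(img_a), patchify(img_b)
    mixed = torch.where(source.view(-1, 1, 1, 1).bool(), pb, pa)

    # Un-patchify back to (C, H, W).
    mixed = mixed.reshape(ph, pw, C, patch_size, patch_size)
    mixed = mixed.permute(2, 0, 3, 1, 4).reshape(C, H, W)
    return mixed, source.reshape(ph, pw)
```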
Concurrently, MixMIM [ 31] also considers mixing with MAE, but different from ours, 1) Purpose : MixMIM uses mixing to recover the 2D structure after random masking for an efficient implementation to conduct MAE-style pre-training on hierarchy Vision Transformers [ 34], while ours utilizes mixing to conduct object-aware SSL pre-training for better representation learning. 2) Method : MixMIM uses masked self-attention to only perform attention within patches from the same images given the mixing masks as in-put, sharing the exactly same pretext task with MAE, while ours requires explicit homologous recognition given mixing masks as target , actively emerging mixing into the pretext design. 3) Formulation : The mixing ratio ris limited to 0.5 in MixMIM, which instead can be flexibly selected from (0,0.5]in our formulation. See more details in Sec. 3. The main contributions of this work contain three parts: 1. We propose the Mixed Autoencoder ( MixedAE ), a sim-ple yet effective approach to conduct object-aware pre-training without introducing any specifically designed modules. With extensive experiments, we demonstrate thatMixedAE can achieve the state-of-the-art transfer performance on various downstream tasks including image classification, semantic segmentation and object detection, while maintaining significant efficiency.
Choi_Restoration_of_Hand-Drawn_Architectural_Drawings_Using_Latent_Space_Mapping_With_CVPR_2023
Abstract This work presents the restoration of drawings of wooden built heritage. Hand-drawn drawings contain the most important original information but are often severely degraded over time. A novel restoration method based on the vector quantized variational autoencoders is presented. Latent space representations of drawings and noise are learned, which are used to map noisy drawings to clean drawings for restoration and to generate authentic noisy drawings for data augmentation. The proposed method is applied to the drawings archived in the Cultural Heritage Administration. Restored drawings show significant quality improvement and allow more accurate interpretations of in-formation.
1. Introduction Cultural heritage is a valuable asset of humanity that requires our efforts to preserve its archaeological, historical, cultural, and technological values. In particular, traditional wooden buildings are vulnerable to deformation, earthquakes, and fires. We continuously collect and manage architectural drawings, photos, and 3D scan data of individual buildings for preservation and restoration. Among them, architectural drawings from the past contain the initial information on traditional wooden buildings, and their value is the most significant. However, many archived drawings in the form of scanned images are already degraded over time, making it difficult to interpret information due to noise and damage. There is a need to restore aged drawings to facilitate information interpretation. This work reports an effort to restore aged drawings of wooden built heritage archived by the Cultural Heritage Administration.

Aged drawings in the archive often show compound degradation with faded and deteriorated lines, smeared and blurred complex parts, and a background in faded color with smudged leakages from adjacent drawings. Restoration requires removing noise in the background, linking broken lines, and clearing up complex parts. A learning-based restoration method would be a good match since it can model such complicated degradation. However, while clean and noisy drawings are abundant, clean and noisy pairs of the same drawings are scarce. Modeling and restoration by supervised learning would be problematic. Synthetically generated clean and degraded image pairs can be used [10, 16, 21–23], but the domain gap between the synthetically degraded drawings and the actual aged drawings may cause inferior restoration performance.

In this work, we propose a vector quantized variational autoencoder [13] (VQ-VAE) based restoration method to restore aged hand-drawn architectural drawings. The proposed method consists of two stages. In the first stage, a VQ-VAE is trained to learn accurate latent space representations of clean drawings using a large set of clean drawings. In the second stage, a mapping of latent space variables of noisy drawings to those of clean drawings, as well as the generator that produces realistic degraded drawings, is learned. The degradation generator is trained to generate a noisy drawing with the residual mapping error as an input. The latent space mapping is learned using a set of drawing pairs, for which we use the outputs of the degradation generator as data augmentation. Noisy drawings generated by the degradation generator provide authentic variations of the degradations clean drawings can suffer. Hence, the latent space mapping from noisy to clean drawings is more accurately learned, and the detrimental effect on the restoration performance caused by the domain gap can be mitigated.

The proposed method is applied to restore archived aged architectural drawings of traditional wooden buildings. Restoration performance was compared to other methods developed for heavy degradations of drawings and photographs. The proposed method reported significant improvement in both quantitative measures and qualitative evaluations. The performance gain is the most apparent with actual aged drawings from the archives. The proposed degradation generator produced more authentic degraded drawings than other generative methods.
The data augmentation with the degradation generator during training allowed an accurate latent space mapping to be learned for the restoration. The novelty of the proposed method can be summarized as follows: i) an effective VQ-VAE based restoration method for aged architectural drawings was proposed; ii) significant improvement in restoration performance was achieved in both quantitative measures and subjective evaluations compared to existing learning-based restoration methods; and iii) a degradation generator, which generates more realistic degradation, was developed for data augmentation to generalize the model.
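For readers unfamiliar with the first-stage model, the following is a generic sketch of the vector-quantization bottleneck at the core of a VQ-VAE — nearest-codebook lookup with a straight-through gradient and the usual codebook/commitment losses. It illustrates the mechanism only; the codebook size, dimensions, and surrounding encoder/decoder are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup with a straight-through estimator."""

    def __init__(self, num_codes=512, code_dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta

    def forward(self, z):                                  # z: (B, C, H, W), C == code_dim
        B, C, H, W = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, C)        # (B*H*W, C)
        dist = torch.cdist(flat, self.codebook.weight)     # distances to all codes
        idx = dist.argmin(dim=1)                           # nearest code index per position
        q = self.codebook(idx).reshape(B, H, W, C).permute(0, 3, 1, 2)

        # Codebook loss pulls codes to encoder outputs; commitment loss does the reverse.
        loss = F.mse_loss(q, z.detach()) + self.beta * F.mse_loss(z, q.detach())
        q = z + (q - z).detach()                           # straight-through gradient
        return q, loss, idx.reshape(B, H, W)
```

The discrete indices returned here are the "latent space variables" that the second stage maps from noisy drawings to clean ones.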
Cazenavette_Generalizing_Dataset_Distillation_via_Deep_Generative_Prior_CVPR_2023
Abstract Dataset Distillation aims to distill an entire dataset’s knowledge into a few synthetic images. The idea is to synthe-size a small number of synthetic data points that, when given to a learning algorithm as training data, result in a model approximating one trained on the original data. Despite a recent upsurge of progress in the field, existing dataset dis-tillation methods fail to generalize to new architectures and scale to high-resolution datasets. To overcome the above issues, we propose to use the learned prior from pre-trained deep generative models to synthesize the distilled data. To achieve this, we present a new optimization algorithm that distills a large number of images into a few intermediate feature vectors in the generative model’s latent space. Our method augments existing techniques, significantly improv-ing cross-architecture generalization in all settings.
1. Introduction Many recent advancements in machine learning come from combining large networks and big data. Such trained models have shown strong capabilities to perform a wide range of diverse tasks [12, 22, 48] and are considered by some as an ongoing paradigm shift [9]. While such approaches show great potential to improve the frontier of AI, we, as a scientific community, are also curious about the underlying principles and limitations. Do networks have to be large to express the functions of interest? Do datasets have to be big? Can training on "small data" be equally successful?

The seminal work on Knowledge Distillation [31] and recent discoveries such as Lottery Ticket Hypothesis [26] have revealed small models are often sufficient to approximate the same functions as large trained models (which are sometimes useful in optimization). Dataset Distillation, proposed by [61], investigates the analogous yet orthogonal question on datasets: is there a small succinct dataset sufficient for training models? In other words, Dataset Distillation aims to distill a large dataset into a small (synthetic) one, such that training on the small dataset yields comparable performance (Figure 2). Since its proposal, Dataset Distillation has gained much attention in the research community, leading to many applications [14, 23, 43, 53], and a growing series of methods that solve the distillation problem: generating a discrete set of images capable of effectively training a model [13, 60, 61, 64, 66–68]. In optimizing for a small, synthetic vision dataset, such methods typically optimize the raw pixel values of the images.

Figure 2. Rather than directly distilling a dataset into synthetic pixels (like all previous methods), our new work instead distills into the latent space of a deep generative prior. This enforces a tuneable amount of coherence in the output synthetic images, leading to far better generalization to new architectures.

Unfortunately, these methods face two major challenges, limiting both their scientific value and empirical applications. First, the distilled synthetic dataset is often optimized w.r.t. a specific network architecture, but does not generalize well to other architectures. Second, while producing insightful distilled images on toy datasets, these methods generally fail to work well on larger-resolution datasets (e.g., ≥128×128 resolution) and tend to distill visually noisy images with subpar performance. In this work, we argue that both issues are caused by parameterizing the synthetic dataset in pixel space. Directly optimizing pixels can be susceptible to learning high-frequency patterns that overfit the specific architecture used in training. To address this, we consider regularizing the distillation process to some prior that may help cross-architecture generalization. However, how and where to perform this regularization poses a delicate balance.
For example, restricting our synthetic set to the real data manifold can significantly reduce the cross-architecture performance gap but is too strong a regularization to learn good distilled datasets. In the limit, it reduces to dataset/coreset selection [7, 10, 29, 57], which is known to not work as well [13, 61, 64, 67]. We propose Generative Latent Distillation (GLaD), which utilizes a deep generative prior by parameterizing the synthetic dataset in the intermediate feature space of generative models, such as Generative Adversarial Networks (GANs) [28]. It encourages the learned datasets to be more generalizable to novel architectures but is also lax enough to not prohibitively restrict the expressiveness of the distilled dataset. GLaD acts as an add-on module and can easily be applied to all existing and future methods of dataset distillation.

There is flexibility in choosing which generative model to use as our prior. By using a generator trained on the target dataset (which is input to the distillation algorithm), our prior uses no additional data or information but consistently improves various distillation algorithms. However, for the sole purpose of obtaining the best distilled synthetic dataset, we may use more powerful generators trained on larger datasets and also obtain significant gains. On the other extreme, we explore using randomly initialized generators and generators trained on out-of-distribution datasets. We show that they generate aesthetically pleasing synthetic images with distinct visual characteristics and also achieve comparable distillation performance. In short, while different generator choices affect distillation results in interesting ways, GLaD consistently improves performance over many datasets and multiple distillation algorithms.

Within a deep generative model, there is a spectrum of different latent space choices, corresponding to different layers in the model [2, 47, 71]. Our analysis reveals a trade-off between realism (earlier-layer latents) and flexibility (later-layer latents), and highlights that using an intermediate latent space achieves a nice balance and consistent performance gain (over the widely used raw-pixel parametrization).

In Section 4, we perform extensive experiments on CIFAR-10 (a common dataset distillation benchmark) and ImageNet subsets at resolutions up to 512×512. We integrate GLaD with three distinct current distillation algorithms (Gradient Matching [67], Distribution Matching [66], and Trajectory Matching [13]), and consistently observe significant improvements in cross-architecture generalization of the three currently most accessible methods of dataset distillation. Our analysis of results from different configurations provides a better understanding of the effect of different generative models and latent spaces. Additionally, our method drastically reduces the high-frequency noise present in high-resolution datasets distilled into pixel space, leading to visually pleasing images that may have implications in artistic and design applications (e.g., [14]).
Our contributions are summarized as follows: •We propose Generative Latent Distillation (GLaD ) to add a deep generative prior to Dataset Distillation by distilling into an intermediate feature space of a genera-tive model trained on real data. •Our extensive analysis and ablations highlight the im-portance of a deep generative prior in addressing the two major challenges of Dataset Distillation: cross-architecture generalization and high-resolution data. •We show that GLaD is robust to the type of generator used, still performing well with randomly initialized generators and those trained on other datasets. •Our method acts as a plug-and-play addition to all exist-ing and future methods of dataset distillation, allowing researchers to easily use it for future work.
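A minimal sketch of the latent parameterization described above: the learnable parameters are latent codes fed through a frozen generator, and any existing distillation objective (gradient, distribution, or trajectory matching) supplies the loss that is backpropagated into those codes. The function and argument names here are placeholders, not the released GLaD interface.

```python
import torch

def distill_into_latents(generator, init_latents, syn_labels, real_loader,
                         distillation_loss, steps=1000, lr=0.01):
    """Optimize latent codes so that generator(latents) forms a small synthetic
    training set. `generator` and `distillation_loss` are placeholders for a
    pretrained deep generative prior and a matching objective, respectively."""
    for p in generator.parameters():          # the generative prior stays frozen
        p.requires_grad_(False)

    latents = init_latents.clone().requires_grad_(True)   # e.g. (images_per_class * classes, latent_dim)
    opt = torch.optim.Adam([latents], lr=lr)
    data_iter = iter(real_loader)

    for _ in range(steps):
        try:
            real_images, real_labels = next(data_iter)
        except StopIteration:
            data_iter = iter(real_loader)
            real_images, real_labels = next(data_iter)

        synthetic = generator(latents)        # gradients flow only into the latents
        loss = distillation_loss(synthetic, syn_labels, real_images, real_labels)
        opt.zero_grad()
        loss.backward()
        opt.step()

    return latents.detach()
```

The final deliverable is then the decoded images generator(latents); downstream users train on those images and never need the generator itself.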
Cong_Learning_To_Dub_Movies_via_Hierarchical_Prosody_Models_CVPR_2023
Abstract Given a piece of text, a video clip and a reference audio, the movie dubbing (also known as visual voice clone, V2C) task aims to generate speeches that match the speaker’s emotion presented in the video using the de-sired speaker voice as reference. V2C is more challeng-ing than conventional text-to-speech tasks as it additionally requires the generated speech to exactly match the vary-ing emotions and speaking speed presented in the video. Unlike previous works, we propose a novel movie dub-bing architecture to tackle these problems via hierarchi-cal prosody modeling, which bridges the visual informa-tion to corresponding speech prosody from three aspects: lip, face, and scene. Specifically, we align lip move-ment to the speech duration, and convey facial expres-sion to speech energy and pitch via attention mechanism based on valence and arousal representations inspired by the psychology findings. Moreover, we design an emo-tion booster to capture the atmosphere from global video scenes. All these embeddings are used together to gener-ate mel-spectrogram, which is then converted into speech waves by an existing vocoder. Extensive experimental re-sults on the V2C and Chem benchmark datasets demon-strate the favourable performance of the proposed method. The code and trained models will be made available at https://github.com/GalaxyCong/HPMDubbing.
1. Introduction Movie dubbing, also known as visual voice clone (V2C) [9], aims to convert a paragraph of text to speech with both the desired voice specified by a reference audio and the desired emotion and speed presented in the reference video, as shown in the top panel of Figure 1. V2C is more challenging than other speech synthesis tasks in two aspects: first, it requires synchronization between lip motion and generated speech; second, it requires proper prosodic variations of the generated speech to reflect the speaker's emotion in the video (i.e., the movie's plot). These pose significant challenges to existing voice cloning methods.

Figure 1. (a) Illustration of the V2C tasks. (b) To generate natural speech with proper emotions, we align the phonemes with lip motion, estimate pitch and energy based on the facial expression's arousal and valence, and predict global emotion from video scenes.

Although significant progress has been made, existing methods do not handle the challenges in V2C well. Specifically, text-based dubbing methods [46–48, 54] construct speech from given text conditioned on different speaker embeddings but do not consider audio-visual synchronization. On the other hand, lip-referred dubbing schemes [18, 32, 55] predict mel-spectrograms directly from a sequence of lip movements, typically by encoder-decoder models. Due to high error rates in generated words, these methods can hardly guarantee high-quality results. Furthermore, video-and-text based dubbing methods [17, 20, 32] focus on inferring speaker characteristics (e.g., age and gender). However, these visual references usually do not convey the targeted emotion well, as intended in V2C.

An ideal dub should align well with the target character so that the audience feels it is the character speaking instead of the dubber [7]. Thus, a professional dubber usually has a keen sense of observing the unique characteristics of the subject and acts on the voice accordingly. In this work, we address these issues with a hierarchical dubbing architecture to synthesize speech. Unlike previous methods, our model connects video representations to speech counterparts at three levels: lip, face, and scene, as shown in Figure 1.
This is inspired by the affective comput-ing method [51], which analyses the facial affect relying on dimensional measures, namely valence (how positive the emotional display is) and arousal (how calming or exciting the expression looks). Third, we exploit a scene-atmosphere based emotion booster, which fuses the global video rep-resentation with the above adapted hidden sequence and is supervised by the emotive state of the whole voice. The out-puts of these three modules are fed into a transformer-based decoder, which converts the speech-related representations into mel-spectrogram. Finally, we output the target speech waves from the mel-spectrogram via a powerful vocoder. The contributions of this paper are summarized below: • We propose a novel hierarchical movie dubbing archi-tecture to better synthesize speech with proper prosody by associating them with visual counterparts: lips, fa-cial expressions, and surrounding scenes. • We design an affective display-based prosody adap-tor to predict the energy and pitch of speech from the arousal and valence fluctuations of facial regions in videos, which provides a fine-grained alignment with speakers’ emotions. • Extensive experimental results demonstrate the pro-posed method performs well against state-of-the-art models on two benchmark datasets.2. Related Work Text to Speech Synthesis . Over the recent years, nu-merous TTS models [2, 29, 40, 41, 47, 48, 54] have been proposed for generating high-quality natural speech condi-tioned on given text. Tacotron [54] is an end-to-end genera-tive TTS model that synthesizes speech directly from char-acters. Then, Tacotron2 [29] replaces the RNN structures by introducing the attention mechanism to improve training ef-ficiency and solve the long dependency issue. Furthermore, FastSpeech [47] and Fastspeech2 [46] exploit the Feed-Forward Transformer (FFT) to generate mel-spectrogram from phoneme sequences. Despite the impressive voice generated, these methods cannot provide the audio with de-sired emotion and audio-visual sync for movie dubbing. Lip to Speech Synthesis. This task aims to reconstruct speech based on the lip motions alone [3,25]. Lip2Wav [42] is a sequence-to-sequence architecture focusing on learning mappings between lip and speech for individual speakers. Recently, [15,18,49,55] improve the architecture and train-ing methods, and provide the possibility of unconstrained speech synthesis in the wild. However, lip-to-speech is in-competent for movie dubbing because the word error rate is still high [1, 3, 11, 13, 16]. In this work, we focus on recon-structing accurate speech from lip motions and generating the desired emotion and identity with proper prosody. Talking Heads. Numerous methods have been developed for audio-visual translation [58] or speaking style trans-fer [57] by reconstructing the visual content in video [8, 30, 31, 50, 53, 59, 61–63]. Wav2Lip [43] uses an expert lip-syncs discriminator to morph lip movements of arbitrary identities. Recently, Papantoniou et al. [38] develop a Neu-ral Emotion Director (NED) to manipulate emotions while preserving speech-related lip movements. However, these methods cannot adapt to the movie dubbing task because they emphasize using generative models to readjust the fa-cial regions instead of reconstructing the desired speech. Visual Voice Cloning. Movie dubbing, also known as vi-sual voice clone, aims to convert scripts to speech with both desired voice identity and emotion based on the reference audio and video. 
To control the speed of generated speech, Neural Dubber [20] exploits a text-video aligner using a scaled dot-product attention mechanism. VTTS [17] uses multi-source attention to fuse the triplet features and outputs the mel-spectrogram via an RNN-based decoder. Since explicit emotion categories [28] do not exist in these methods, Chen et al. [9] develop a V2C model on a more challenging Densiny Animation dataset, which concentrates on emotional dubbing for movie characters. Although V2C considers emotion labels, the adopted global video representation negatively affects the fine-grained emotional expression and makes it challenging to render correct prosody corresponding to plot developments. To solve this issue, we propose a hierarchical movie dubbing architecture to better synthesize speech with proper prosody and emotion.

Figure 2. Architecture of the proposed hierarchical modular network for movie dubbing, which consists of four main components: Duration Aligner (Sec. 3.1), which learns to predict speech duration based on aligning lip movement and text phonemes; Prosody Adaptor (Sec. 3.2), which predicts energy and pitch from facial arousal and valence, respectively; Atmosphere Booster (Sec. 3.3), which learns a global emotion embedding at the video scene level; and Mel-Generator (Sec. 3.4), which generates mel-spectrograms from the embeddings obtained by the aforementioned three modules. The mel-spectrograms are finally converted to audio by a widely adopted vocoder.

3. Method The main architecture of the proposed model is shown in Fig. 2. First, we use a phoneme encoder [9] to convert the input text $Z_{text}$ to a series of phoneme embeddings $O = \{o_1, \ldots, o_L\}$ and use a speaker encoder $F_{spk}$ [9] to capture the voice characteristics $U$ of different speakers. Then, taking phonemes and lip regions as input, the duration aligner module uses a multi-head attention mechanism to learn to associate phonemes with related lip movements. Next, the affective-display based prosody adaptor module learns to predict the energy and pitch of the desired speech based on arousal and valence features extracted from facial expressions, respectively. Then, the scene atmosphere booster encodes a global representation of the emotion of the entire video content. All the outputs of the above three modules are combined to generate mel-spectrograms, which are finally transformed to a waveform $Y_{voice}$ using an adopted vocoder. We detail each module below.

3.1. Duration Aligner The duration aligner contains three steps: (1) extracting the lip features from the movie; (2) aligning the phonemes of the text with the lips; (3) expanding the fused phoneme-lip representation to the desired mel-spectrogram length.

Extracting lip features. Let $D_w$, $D_h$ and $D_c$ be the width, height and number of channels of the video frames, respectively. We first extract lip regions $x_m \in \mathbb{R}^{T_v \times D_w \times D_h \times D_c}$ from the given video using the mouth region pre-processing from [20, 33–36].
Then we exploit the LipEncoder to obtain the lip movement representation:

$E_{lip} = \mathrm{LipEncoder}(x_m) \in \mathbb{R}^{T_v \times D_m}$,  (1)

where $T_v$ denotes the number of video frames, and $D_m$ is the hidden dimension of the dynamic lip feature. The LipEncoder consists of several feed-forward transformer blocks that are suitable for capturing both long-term and short-term dynamic lip movement features.

Aligning text with lips. Inspired by the success of attention mechanisms for cross-modality alignment [10, 14, 19, 26, 27, 44, 45, 64], we adopt multi-head attention to learn the alignment between the text phonemes and the lip movement sequence. We use the lip embedding as a query to compute the attention on the text phonemes. The larger the attention, the more related a lip embedding and a text phoneme are. Due to variations of mouth shapes and pronunciations, the multi-head attention mechanism is suitable for learning their alignments from different aspects. The text-video conte
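To make the text–lip alignment step above concrete, here is a minimal sketch in which lip-motion features act as queries over phoneme embeddings via standard multi-head attention; the dimensions and the use of nn.MultiheadAttention are assumptions, not the paper's exact blocks.

```python
import torch
import torch.nn as nn

class TextLipAligner(nn.Module):
    """Aligns phoneme embeddings to lip-motion frames: lip features act as
    queries, phonemes as keys/values (a sketch with assumed dimensions)."""

    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, lip_feats, phoneme_embs):
        # lip_feats:    (B, T_v, dim)  one feature per video frame
        # phoneme_embs: (B, L,   dim)  one embedding per phoneme
        fused, attn_weights = self.attn(query=lip_feats,
                                        key=phoneme_embs,
                                        value=phoneme_embs)
        # `fused` is a phoneme-lip representation at video frame rate; it can
        # then be expanded to the mel-spectrogram length, e.g. with a
        # transposed convolution as indicated in the architecture of Fig. 2.
        return fused, attn_weights
```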
Ding_DiffusionRig_Learning_Personalized_Priors_for_Facial_Appearance_Editing_CVPR_2023
Abstract We address the problem of learning person-specific facial priors from a small number (e.g., 20) of portrait photos of the same person. This enables us to edit this specific person's facial appearance, such as expression and lighting, while preserving their identity and high-frequency facial details. Key to our approach, which we dub DiffusionRig, is a diffusion model conditioned on, or "rigged by," crude 3D face models estimated from single in-the-wild images by an off-the-shelf estimator. On a high level, DiffusionRig learns to map simplistic renderings of 3D face models to realistic photos of a given person. Specifically, DiffusionRig is trained in two stages: it first learns generic facial priors from a large-scale face dataset and then person-specific priors from a small portrait photo collection of the person of interest. By learning the CGI-to-photo mapping with such personalized priors, DiffusionRig can "rig" the lighting, facial expression, head pose, etc. of a portrait photo, conditioned only on coarse 3D models, while preserving this person's identity and other high-frequency characteristics. Qualitative and quantitative experiments show that DiffusionRig outperforms existing approaches in both identity preservation and photorealism. Please see the project website https://diffusionrig.github.io for the supplemental material, video, code, and data.
1. Introduction It is a longstanding problem in computer vision and graphics to photorealistically change the lighting, expression, head pose, etc. of a portrait photo while preserving the person's identity and high-frequency facial characteristics. The difficulty of this problem stems from its fundamentally under-constrained nature, and prior work typically addresses this with zero-shot learning, where neural networks were trained on a large-scale dataset of different identities and tested on a new identity. These methods ignore the fact that such generic facial priors often fail to capture the test identity's high-frequency facial characteristics, and that multiple photos of the same person are often readily available in the person's personal photo albums, e.g., on a mobile phone. In this work, we demonstrate that one can convincingly edit a person's facial appearance, such as lighting, expression, and head pose, while preserving their identity and other high-frequency facial details. Our key insight is that we can first learn generic facial priors from a large-scale face dataset [19] and then finetune these generic priors into personalized ones using around 20 photos capturing the test identity.

When it comes to facial appearance editing, the natural question is what representation one uses to change lighting, expression, head pose, hairstyle, accessories, etc. Off-the-shelf 3D face estimators such as DECA [9] can already extract, from an in-the-wild image, a parametric 3D face model that comprises parameters for lighting (spherical harmonics), expression, and head pose. However, directly rendering these physical properties back into images yields CGI-looking results, as shown in the output columns of Figure 1. The reasons are at least three-fold: (a) the estimated 3D face shape is coarse, with mismatched face contours, and misses high-frequency geometric details, (b) the assumptions on reflectance (Lambertian) and lighting (spherical harmonics) are restrictive and insufficient for reproducing reality, and (c) 3D morphable models (3DMMs) simply cannot model all appearance aspects, including hairstyle and accessories. Nonetheless, such 3DMMs provide us with a useful representation that is amenable to "appearance rigging," since we can modify the facial expression and head pose by simply changing the 3DMM parameters, as well as the lighting by varying the spherical harmonics (SH) coefficients.

On the other hand, diffusion models [15] have recently gained popularity as an alternative to Generative Adversarial Networks (GANs) [11] for image generation. Diff-AE [33] further shows that when trained on the autoencoding task, diffusion models can provide a latent space for appearance editing. In addition, diffusion models are able to map pixel-aligned features (such as noise maps in the vanilla diffusion model) to photorealistic images. Although Diff-AE is capable of interpolating from, e.g., smile to no smile, after semantic labels are used to find the direction to move towards, it is unable to perform edits that require 3D understanding and that cannot be expressed by simple binary semantic labels. Such 3D edits, including relighting and head pose change, are the focus of our work.
To combine the best of both worlds, we propose DiffusionRig, a model that allows us to edit or "rig" the appearance (such as lighting and head pose) of a 3DMM and then produce a photorealistic edited image conditioned on our 3D edits. Specifically, DiffusionRig first extracts rough physical properties from single portrait photos using an off-the-shelf method [9], performs desired 3D edits in the 3DMM space, and finally uses a diffusion model [15] to map the edited "physical buffers" (surface normals, albedo, and Lambertian rendering) to photorealistic images. Since the edited images should preserve the identity and high-frequency facial characteristics, we first train DiffusionRig on the CelebA dataset [27] to learn generic facial priors so that DiffusionRig knows how to map surface normals and the Lambertian rendering to a photorealistic image. Note that because the physical buffers are coarse and do not contain sufficient identity information, this "Stage 1 model" provides no guarantee for identity preservation. At the second stage, we finetune DiffusionRig on a tiny dataset of roughly 20 images of one person of interest, producing a person-specific diffusion model mapping physical buffers to photos of just this person. As discussed, there are appearance aspects not modeled by the 3DMM, including but not limited to hairstyle and accessories. To provide our model with this additional information, we add an encoder branch that encodes the input image into a global latent code ("global" in contrast to physical buffers that are pixel-aligned with the output image and hence "local"). This code is chosen to be low-dimensional in the hope of capturing just the aspects not modeled by the 3DMM, such as hairstyle and eyeglasses. In summary, our contributions are: •A deep learning model for 3D facial appearance editing (that modifies lighting, facial expression, head pose, etc.) trained using just images with no 3D label, •A method to drive portrait photo generation using diffusion models with 3D morphable face models, and •A two-stage training strategy that learns personalized facial priors on top of generic face priors, enabling editing that preserves identity and high-frequency details.
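To make the conditioning scheme described above more concrete, the following is a minimal sketch in a PyTorch style; the module names, channel counts, and the FiLM-style injection of the global latent code are illustrative assumptions, not the authors' released implementation. It only shows how pixel-aligned physical buffers and a low-dimensional global code could jointly condition a denoising network.

```python
import torch
import torch.nn as nn

class ConditionEncoder(nn.Module):
    """Encodes the input photo into a small global latent code
    (hairstyle, accessories, and other aspects a 3DMM cannot model)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, image):
        return self.backbone(image)

class RiggedDenoiser(nn.Module):
    """Toy denoiser: the noisy image is concatenated with the pixel-aligned
    physical buffers (normals + albedo + Lambertian rendering = 9 channels),
    while the global code modulates the features channel-wise."""
    def __init__(self, latent_dim=64, width=128):
        super().__init__()
        self.inp = nn.Conv2d(3 + 9, width, 3, padding=1)
        self.film = nn.Linear(latent_dim, width)   # simple FiLM-style gating
        self.out = nn.Conv2d(width, 3, 3, padding=1)

    def forward(self, noisy, buffers, z_global, t=None):
        # t (the diffusion timestep) is ignored in this toy sketch
        h = self.inp(torch.cat([noisy, buffers], dim=1))
        h = h * (1 + self.film(z_global)[:, :, None, None])
        return self.out(torch.relu(h))

# Usage: stage 1 would train on a large face dataset, stage 2 would finetune
# the same modules on ~20 photos of one person with a small learning rate.
enc, eps_net = ConditionEncoder(), RiggedDenoiser()
x = torch.randn(2, 3, 64, 64)          # portrait photos
buffers = torch.randn(2, 9, 64, 64)    # normals + albedo + Lambertian render
noise_pred = eps_net(x, buffers, enc(x))
print(noise_pred.shape)  # torch.Size([2, 3, 64, 64])
```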
Hassani_Neighborhood_Attention_Transformer_CVPR_2023
Abstract We present Neighborhood Attention (NA), the first efficient and scalable sliding window attention mechanism for vision. NA is a pixel-wise operation, localizing self attention (SA) to the nearest neighboring pixels, and therefore enjoys a linear time and space complexity compared to the quadratic complexity of SA. The sliding window pattern allows NA's receptive field to grow without needing extra pixel shifts, and preserves translational equivariance, unlike Swin Transformer's Window Self Attention (WSA). We develop NATTEN (Neighborhood Attention Extension), a Python package with efficient C++ and CUDA kernels, which allows NA to run up to 40% faster than Swin's WSA while using up to 25% less memory. We further present Neighborhood Attention Transformer (NAT), a new hierarchical transformer design based on NA that boosts image classification and downstream vision performance. Experimental results on NAT are competitive; NAT-Tiny reaches 83.2% top-1 accuracy on ImageNet, 51.4% mAP on MS-COCO and 48.4% mIoU on ADE20K, which is a 1.9% ImageNet accuracy, 1.0% COCO mAP, and 2.6% ADE20K mIoU improvement over a Swin model of similar size. To support more research based on sliding window attention, we open source our project and release our checkpoints. 1. Introduction Convolutional neural networks (CNNs) [19] have been the de facto standard architecture for computer vision models across different applications for years. AlexNet [18] showed their usefulness on ImageNet [10], and many others followed suit with architectures such as VGG [26], ResNet [17], and EfficientNet [27]. Transformers [31], on the other hand, were originally proposed as attention-based models for natural language processing (NLP), trying to exploit the sequential structure of language. They were the basis upon which BERT [11] and GPT [2, 23, 24] were built, and they continue to be the state of the art architecture in NLP. In late 2020, Vision Transformer (ViT) [12] was proposed as an image classifier using only a Transformer Encoder operating on an embedded space of image patches, mostly for large-scale training. A number of other methods followed, attempting to increase data efficiency [13, 15, 28], eventually making such Transformer-like models the state of the art in ImageNet-1K classification (without pre-training on large-scale datasets such as JFT-300M). These high-performing Transformer-like methods are all based on Self Attention (SA), the basic building block in the original Transformer [31]. SA has a linear complexity with respect to the embedding dimension (excluding linear projections), but a quadratic complexity with respect to the number of tokens. In the scope of vision, the number of tokens is typically in linear correlation with image resolution. As a result, higher image resolution results in a quadratic increase in complexity and memory usage in models strictly using SA, such as ViT. The quadratic complexity has prevented such models from being easily applicable to downstream vision tasks, such as object detection and segmentation, in which image resolutions are usually much larger than in classification.
Another problem is that convolutions benefit from inductive biases such as locality and the 2-dimensional spatial structure, while dot-product self attention is a global 1-dimensional operation by definition. This means that some of those inductive biases have to be learned with either large sums of data [12] or advanced training techniques and augmentations [15, 28]. Local attention modules were therefore proposed to alleviate these issues. Stand-Alone Self-Attention (SASA) [25] was one of the earliest applications of local window-based attention to vision, where each pixel attends to a window around it. Its explicit sliding window pattern is identical to that of same convolutions, with zero paddings around and a simple 2-dimensional raster scan, therefore maintaining translational equivariance. SASA was aimed at replacing convolutions in a ResNet, and was shown to have a noticeable improvement over baselines. However, the authors noted SASA was limited in terms of speed due to the lack of an efficient implementation similar to that of convolutions. Swin [21], on the other hand, was one of the first hierarchical vision transformers based on local self attention. Its design and the shifted-window self attention allowed it to be easily applicable to downstream tasks, as they made it computationally feasible, while also boosting performance through the additional biases injected. Swin's localized attention, however, first applies self attention to non-overlapping windows and then shifts the windows, the motivation of which was sliding window methods such as SASA suffering throughput bottlenecks. HaloNet [30] used a haloing mechanism that localizes self attention for blocks of pixels at a time, as opposed to pixel-wise. One of their key motivations for this was also noted to be the lack of an efficient sliding window attention. In this work, we revisit explicit sliding window attention mechanisms, and propose Neighborhood Attention (NA). NA localizes SA to each pixel's nearest neighbors, which is not necessarily a fixed window around the pixel. This change in definition allows all pixels to maintain an identical attention span, which would otherwise be reduced for corner pixels in zero-padded alternatives (SASA). NA also approaches SA as its neighborhood size grows, and is equivalent to SA at maximum neighborhood. Additionally, NA has the added advantage of maintaining translational equivariance [30], unlike blocked and window self attention. Figure 2. Illustration of the query-key-value structure of Neighborhood Attention (NA) vs Self Attention (SA) for a single pixel. SA allows each pixel to attend to every other pixel, whereas NA localizes attention for each pixel to a neighborhood around itself. Therefore, each pixel's attention span is usually different from the next. We develop NATTEN, a Python package with efficient C++ and CUDA kernels that allow NA to run even faster than Swin's WSA in practice, while using less memory. We build Neighborhood Attention Transformer (NAT), which achieves competitive results across vision tasks. To summarize, our main contributions are:
1. Proposing Neighborhood Attention (NA): A simple and flexible explicit sliding window attention mechanism that localizes each pixel's attention span to its nearest neighborhood, approaches self attention as its span grows, and maintains translational equivariance. We compare NA in terms of complexity and memory usage to self attention, window self attention, and convolutions.
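As a concrete illustration of the mechanism, below is a minimal, naive reference sketch of single-head neighborhood attention on a 2D feature map, written in plain PyTorch rather than with the NATTEN kernels; the clamping-based neighbor selection and the omission of the relative positional bias are simplifying assumptions for readability, not the package's actual API. In practice the nested loops are exactly what the C++/CUDA kernels replace; the sketch is only meant to pin down the neighbor-selection rule near image borders.

```python
import torch
import torch.nn.functional as F

def naive_neighborhood_attention(q, k, v, kernel_size=3):
    """q, k, v: (H, W, C) single-head tensors. Each query attends to a
    kernel_size x kernel_size neighborhood; near the border the window is
    shifted inward so every pixel keeps the same number of neighbors."""
    H, W, C = q.shape
    r = kernel_size // 2
    out = torch.empty_like(q)
    for i in range(H):
        # shift the window inward instead of zero-padding, so corner pixels
        # keep a full attention span (unlike SASA-style zero padding)
        i0 = min(max(i - r, 0), H - kernel_size)
        for j in range(W):
            j0 = min(max(j - r, 0), W - kernel_size)
            keys = k[i0:i0 + kernel_size, j0:j0 + kernel_size].reshape(-1, C)
            vals = v[i0:i0 + kernel_size, j0:j0 + kernel_size].reshape(-1, C)
            attn = F.softmax(q[i, j] @ keys.t() / C ** 0.5, dim=-1)
            out[i, j] = attn @ vals
    return out

x = torch.randn(8, 8, 32)
y = naive_neighborhood_attention(x, x, x, kernel_size=3)
print(y.shape)  # torch.Size([8, 8, 32])
```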
Cen_Enlarging_Instance-Specific_and_Class-Specific_Information_for_Open-Set_Action_Recognition_CVPR_2023
Abstract Open-set action recognition aims to reject unknown human action cases which are out of the distribution of the training set. Existing methods mainly focus on learning better uncertainty scores but dismiss the importance of feature representations. We find that features with richer semantic diversity can significantly improve the open-set performance under the same uncertainty scores. In this paper, we begin with analyzing the feature representation behavior in the open-set action recognition (OSAR) problem based on the information bottleneck (IB) theory, and propose to enlarge the instance-specific (IS) and class-specific (CS) information contained in the feature for better performance. To this end, a novel Prototypical Similarity Learning (PSL) framework is proposed to keep the instance variance within the same class to retain more IS information. Besides, we notice that unknown samples sharing similar appearances to known samples are easily misclassified as known classes. To alleviate this issue, video shuffling is further introduced in our PSL to learn distinct temporal information between original and shuffled samples, which we find enlarges the CS information. Extensive experiments demonstrate that the proposed PSL can significantly boost both the open-set and closed-set performance and achieves state-of-the-art results on multiple benchmarks. Code is available at https://github.com/Jun-CEN/PSL.
1. Introduction Deep learning methods for video action recognition have developed very fast and achieved remarkable performance in recent years [1–4]. However, these methods operate under the closed-set condition, i.e., to classify all videos into one of the classes encountered during training. *Work done as an intern at Alibaba DAMO Academy. Figure 1. (a) Richer semantic features brought by the pretraining can significantly improve the open-set performance. (b) Information in the feature is divided into IS and CS information. s4 can be identified as OoD since it has IS information (IS bars in different colors) distinct from s1 and s2, while s5 has CS information (CS bars in different colors) distinct from all InD samples, so it may be OoD. Our PSL aims to learn more IS and CS information (bars in longer lengths) than Cross-Entropy (C.E.). (c) Both enlarged IS and CS information boost the open-set performance. (d) Our PSL achieves the best OSAR performance. This closed-set condition is not practical in the real-world scenario, as videos whose classes are beyond the range of the training set will be misclassified as one of the known classes. Therefore, open-set action recognition (OSAR) is proposed to require the network to correctly classify in-distribution (InD) samples and identify out-of-distribution (OoD) samples. InD and OoD classes refer to classes involved and not involved in the training set, respectively. Open-set video action recognition is systematically studied in the recent work [5], in which they transfer the existing methods for open-set image recognition into the video domain [6–9] as the baselines, and propose their own method to introduce deep evidential learning [10] to calculate the uncertainty and propose a contrastive evidential debiasing module to alleviate the appearance bias issue in the video domain. All of these methods tend to improve the OSAR performance by calculating a better uncertainty score, based on the feature representations extracted by the neural network (NN). However, the main purpose of training in these methods is still to classify InD samples, which determines that the learned feature representations are merely sufficient for InD classification. We find that almost all methods have a significantly better open-set performance when the NN is pretrained with a large dataset (Fig. 1 (a)), so we argue that the diversity of feature representation is extremely important for the OSAR task. Therefore, we propose to boost the open-set ability from the feature representation perspective rather than finding a better uncertainty score. We first analyze the feature representation behavior in the open-set problem based on the information bottleneck (IB) theory [11, 12]. We divide the information of the feature into Instance-Specific (IS) and Class-Specific (CS) information. CS information is used for inter-class recognition, so it is similar for samples within the same class but different for samples from other classes. IS information is the special information of each sample within the same class, as two samples cannot be exactly the same even if they belong to the same class.
Both CS and IS information are crucial for the open-set task, as illustrated in Fig. 1 (b), where s4 and s5 can be identified as OoD samples based on the IS and CS information, respectively. We find that the closed-set classification setting tends to eliminate IS information during training, and cannot fully extract the minimum sufficient CS information for the classification task, so we aim to enlarge IS and CS information in learned feature representations for better OSAR performance. To enlarge the IS information, we propose the Prototypical Similarity Learning (PSL) framework, in which the representation of an instance is encouraged to have less than 1 similarity with the corresponding prototype. In this way, we encourage the IS information to be retained and not eliminated. In addition, [5] finds that OoD videos with a similar appearance can be easily classified as InD videos. To alleviate this issue, we introduce the shuffled video into PSL and make it have less than 1 similarity with the original sample. As the shuffled video shares almost the same appearance information with the original one, we encourage the similarity to be less than 1 so that the network can extract the distinct temporal information between them. We find this technique actually enlarges the CS information in the feature representation. Fig. 1 (c) shows that enlarging the IS information is helpful for the open-set performance, and more CS information can further benefit the open-set and closed-set performance. To summarize, our contributions include: • We provide a novel perspective to analyze the open-set recognition task based on the information bottleneck theory, and find that the classical closed-set cross-entropy tends to eliminate the IS information which is helpful to identify OoD samples. • We propose to enlarge the IS and CS information for better OSAR performance. Specifically, PSL is designed to retain the IS information in the features, and we involve video shuffling in PSL to learn more CS information. • Experiments on multiple datasets and backbones show our PSL's superiority by a large margin compared to other state-of-the-art counterparts, as shown in Fig. 1 (d).
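Since the key idea above is simply that the cosine similarity between an instance and its class prototype (and between a clip and its shuffled version) is pushed toward a target strictly below 1, a toy loss of that form can be sketched as follows; the squared-error formulation and the target value of 0.8 are illustrative assumptions, not the exact objective used by PSL.

```python
import torch
import torch.nn.functional as F

def toy_psl_loss(features, shuffled_features, prototypes, labels, target_sim=0.8):
    """features, shuffled_features: (B, D); prototypes: (K, D); labels: (B,).
    Pushes instance-prototype and original-shuffled similarities toward a
    value below 1 so that instance-specific variance is not collapsed."""
    f = F.normalize(features, dim=-1)
    fs = F.normalize(shuffled_features, dim=-1)
    p = F.normalize(prototypes, dim=-1)
    sim_proto = (f * p[labels]).sum(-1)    # cos(feature, its own prototype)
    sim_shuffle = (f * fs).sum(-1)         # cos(original clip, shuffled clip)
    return ((sim_proto - target_sim) ** 2).mean() + \
           ((sim_shuffle - target_sim) ** 2).mean()

feats = torch.randn(4, 128)
loss = toy_psl_loss(feats, torch.randn(4, 128), torch.randn(10, 128),
                    torch.randint(0, 10, (4,)))
print(loss.item())
```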
Huang_Progressive_Spatio-Temporal_Alignment_for_Efficient_Event-Based_Motion_Estimation_CVPR_2023
Abstract In this paper, we propose an efficient event-based motion estimation framework for various motion models. Different from previous works, we design a progressive event-to-map alignment scheme and utilize the spatio-temporal correlations to align events. In detail, we progressively align sampled events in an event batch to the time-surface map and obtain the updated motion model by minimizing a novel time-surface loss. In addition, a dynamic batch size strategy is applied to adaptively adjust the batch size so that all events in the batch are consistent with the current motion model. Our framework has three advantages: a) the progressive scheme refines motion parameters iteratively, achieving accurate motion estimation; b) within one iteration, only a small portion of events are involved in optimization, which greatly reduces the total runtime; c) the dynamic batch size strategy ensures that the constant velocity assumption always holds. We conduct comprehensive experiments to evaluate our framework on challenging high-speed scenes with three motion models: rotational, homography, and 6-DOF models. Experimental results demonstrate that our framework achieves state-of-the-art estimation accuracy and efficiency. The code is available at https://github.com/huangxueyan/PEME.
1. Introduction Event cameras [25, 30, 33], also known as bio-inspired silicon retinas, are novel vision sensors that asynchronously respond to pixel-wise brightness changes. Event cameras have the properties of high temporal resolution and high dynamic range, which make them appealing for tackling many computer vision tasks under challenging conditions, such as high-speed pose estimation [5, 18, 19], HDR video generation [27, 31, 32, 34], 3D reconstruction [9, 11, 26] and low-latency motion estimation [7, 10, 24]. Event-based motion estimation aims to find the ego-motion of the event camera. Figure 1. Event-based motion estimation. (a): A segment of the event cloud generated by rotational motion from the shapes rotation dataset [20]. (b): An event frame generated from unaligned events. (c): An event frame generated from aligned events. Since events can be triggered by the motion of the camera, the alignment of events is highly correlated with the camera's motion. The motion estimation problem is normally cast as an optimization problem based on the alignment of events [4]. With correct motion parameters, events triggered by the same world point can be aligned to the same pixel, forming an event frame with sharp edges. As for unaligned events, they generate blurred edges in the event frame. Fig. 1 shows a segment of the event cloud as well as two event frames with unaligned and aligned events. Many approaches have been proposed for event-based motion estimation, such as contrast maximization, entropy minimization, and the Poisson point process [6, 8, 22]. These methods follow the same procedure: slice an event cloud into batches with a fixed size or a fixed time interval, and then optimize a loss function with all the events in the batch. In practice, an event batch usually contains tens of thousands of events. It is very time-consuming and computationally redundant to involve such an amount of event data to optimize a motion transformation with 3 or 6 degrees of freedom (DOF). We observe that events approximately follow the same motion transformation in a short period; thus, it is probably not necessary to take all the events into account for motion estimation. From this point of view, we attempt to utilize sampled events to reduce the computational burden. To achieve this, we propose a distinct event-to-map alignment scheme. Specifically, we construct a time-surface (TS) map that maintains the timestamps of the former events at each pixel, and warp the later events to align with the former events in the TS map. We measure the degree of alignment by a novel TS loss and update the motion parameters by minimizing the TS loss. We observe that in a short time interval, the events triggered by the same world point differ slightly in the timestamps and the coordinates, yielding almost identical residuals and gradient directions when evaluating the TS loss. Therefore, we can greatly reduce the computational burden by evaluating the TS loss with a small fraction of events. In the alignment procedure, we attempt to warp the later events backward to the start timestamp t_start and align them with the former events.
But few of the former events are triggered at t_start due to the spatial sparsity of the event camera. Moreover, these former events are normally triggered with a slight time shift from t_start, resulting in a slight drift in the coordinates. To fix this issue, we propose an iterative scheme to update the coordinates of the former events in the TS map with the latest motion parameters and progressively evaluate the TS loss based on the latest TS map. In addition, the choice of event batch size has a significant impact on the accuracy. In practice, the batch size or the batch time interval is set manually, which is mainly determined by two constraints: one is that events in the batch should share the same motion parameters, i.e., the time interval of the batch must be short enough to hold the constant velocity assumption, and the other is that the batch must contain sufficient events for the algorithm to execute normally. Essentially, these two constraints are mutually exclusive, making it difficult to determine a global batch size or time interval. To address this problem, we propose a dynamic batch strategy that can dynamically adjust the batch size to ensure that the constant velocity assumption always holds in the batch. We slice unprocessed events into event bundles with a small size and append these event bundles to the event batch if they meet the requirement that their overlap ratio reaches a threshold; otherwise, we stop merging and output an event batch with a certain number of bundles. With this strategy, our algorithm can adapt to scenes under different conditions, such as different scene texture richness, camera motion speeds, and camera spatial resolutions, while the fixed-size methods need to re-adjust the batch size to accommodate these scene changes. We summarize the contributions of this work in the following. • We present a unified event-based motion estimation framework that progressively aligns events using a novel event-to-map scheme with spatio-temporal information of sampled events. • We also propose a dynamic batch size strategy to ensure that the constant velocity assumption always holds, which is more generalizable to different scene textures, camera speeds, and camera resolutions compared to the fixed batch strategy. • Comprehensive experimental results demonstrate that our framework achieves state-of-the-art performance both in terms of accuracy and efficiency on publicly available datasets with three motion models. • By utilizing a small number of sampled events in each iteration, our framework is able to achieve real-time implementation for the rotational model and the 6-DOF model with standard CPUs.
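To make the event-to-map idea above more tangible, here is a minimal sketch of warping sampled later events back to the reference time and scoring them against a time-surface map; the 2D rotation-only motion model, the nearest-pixel lookup, and the squared temporal residual are simplifying assumptions chosen for brevity, not the paper's exact loss.

```python
import numpy as np

def time_surface(events, shape):
    """Keep, at every pixel, the timestamp of the most recent former event."""
    ts = np.full(shape, -1.0)
    for x, y, t in events:
        ts[int(y), int(x)] = t
    return ts

def ts_loss(later_events, ts_map, omega, t_ref, center):
    """Warp each sampled later event back to t_ref with an in-plane rotation
    of angular rate omega (rad/s) and penalize the timestamp mismatch with
    the former event stored at the warped pixel."""
    cy, cx = center
    H, W = ts_map.shape
    loss = 0.0
    for x, y, t in later_events:
        a = -omega * (t - t_ref)                     # rotate back to t_ref
        xr = cx + (x - cx) * np.cos(a) - (y - cy) * np.sin(a)
        yr = cy + (x - cx) * np.sin(a) + (y - cy) * np.cos(a)
        xi, yi = int(round(xr)), int(round(yr))
        if 0 <= xi < W and 0 <= yi < H and ts_map[yi, xi] >= 0:
            # a well-aligned event lands on a pixel whose former event
            # fired close to the reference time
            loss += (ts_map[yi, xi] - t_ref) ** 2
    return loss

former = [(10.0, 10.0, 0.001), (20.0, 15.0, 0.002)]
later = [(11.0, 10.0, 0.011), (21.0, 15.0, 0.012)]
ts = time_surface(former, (32, 32))
print(ts_loss(later, ts, omega=0.5, t_ref=0.0, center=(16, 16)))
```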
Gillert_Iterative_Next_Boundary_Detection_for_Instance_Segmentation_of_Tree_Rings_CVPR_2023
Abstract We address the problem of detecting tree rings in microscopy images of shrub cross sections. This can be regarded as a special case of the instance segmentation task with several unique challenges, such as the concentric circular ring shape of the objects and high precision requirements, that result in inadequate performance of existing methods. We propose a new iterative method which we term Iterative Next Boundary Detection (INBD). It intuitively models the natural growth direction, starting from the center of the shrub cross section and detecting the next ring boundary in each iteration step. In our experiments, INBD shows superior performance to generic instance segmentation methods and is the only one with a built-in notion of chronological order. Our dataset and source code are available at http://github.com/alexander-g/INBD.
1. Introduction Dendrochronology is the science that provides methodologies to date tree rings [4], i.e. measuring and assigning calendar years to the growth rings present in a wood stem. By analyzing anatomical properties like ring widths or the cell sizes within the rings, dendrochronology can be applied to dating archaeological manufactures, tracking timber sources or reconstructing past climate conditions [11]. For climate reconstruction in the Arctic, shrubs constitute the most important source of dendrochronological information, since they are the only woody plants able to thrive there [23]. As temperature is a limiting factor for shrub growth in the Arctic, it shows a strong relationship with climate, making these plants a reliable proxy to reconstruct past climate events [24]. Figure 1. Example microscopy images (left) of shrub cross sections from our new dataset and the outputs (right) of our proposed method INBD for instance segmentation of tree rings. Dendrochronological analyses on shrubs are usually performed on thin cross sections of branches or roots and observed under the microscope with a magnification that allows ring identification at a cellular level. As of now, ecological studies are limited in size by the amount of manual analysis work due to the lack of automatic tree ring detection methods. With this paper we want to introduce this problem to the computer vision community and enhance the capabilities for ecological sciences. We release a new dataset containing high resolution microscopy images of shrub cross sections and propose a specialized method for growth ring identification. Example images from our dataset and corresponding outputs of our method are shown in Figure 1. From a computer vision point of view, this can be regarded as a special case of the instance segmentation task, however it differs from previous generic datasets in several ways, which makes existing methods underperform. Figure 2 illustrates these differences. Figure 2. Some of the challenges encountered in this task: (a) Boundaries in between tree rings are often hard to recognize. For example, this cross section contains 14 rings. (b) Crop of the previous image (indicated by the square) with overlayed annotation. A tree ring is only 65 pixels wide or ca. 1.4% of the full cross section diameter. The cell wall that divides late summer cells and the next year's early summer cells is only 5 pixels wide or 0.1%. (c) Wedging rings can complicate finding the chronologically correct next year ring. (d) Rings can grow in multiple disconnected parts from different sides. For one, the concentric ring shape of the instances can pose a significant obstacle, particularly for top-down methods, because the objects have almost identical bounding boxes. This gets complicated by the fact that year rings can also form incomplete circles (wedging rings) and grow from only one side, or even in multiple disconnected parts from different sides (2d). Depending on the species, plant part and climatic conditions, the amount of wedging rings can range from zero to being the majority.
Assigning the correct order to wedging rings can be an issue where rings of more than 2 years touch each other (2c). Bottom-up methods, on the other hand, struggle with faint ring boundaries (2a) as the presence of the boundary pattern is not always constant throughout the whole stem circumference. They are prone to merging rings where no boundary can be detected or splitting them where the ring width is narrow. Next, the images are acquired at a high resolution (2a) to capture cellular information, yet a high degree of precision is required for the downstream task of assigning individual cells to the correct year. The thickness of a cell wall that divides the cells from one ring to another can be as low as 0.01% of the whole object (2b). Finally, as the preparation of samples and annotation of the images is very costly, training has to be performed in a low data regime. We argue that a specialized approach can help to overcome those challenges and propose a new iterative method which we term Iterative Next Boundary Detection (INBD). In the first step, it performs semantic segmentation to detect basic features such as the background, the center and the ring boundary pixels. From this starting point, it iteratively detects the next year ring's boundaries, following the natural growth of the plant. This process is augmented with a recurrent wedging ring detection module to counteract issues with incomplete rings. We compare our method with both top-down and bottom-up generic instance segmentation methods in our experiments, in which it shows better results. Moreover, it is the first method that automatically assigns a chronological order to the detected objects. The contributions of this paper can be summarized as follows: • Publication of a new challenging dataset for a special case of instance segmentation. • Development of the specialized method INBD for tree ring instance segmentation. • Evaluation of previous generic instance segmentation methods and comparison with INBD
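The iterative structure described above can be summarized in a short control-flow sketch; the function bodies and the polar-ray stepping used to find the next boundary are stand-ins chosen to keep the example self-contained and runnable, and do not reproduce the learned components of INBD.

```python
import numpy as np

def segment(image):
    """Stand-in for the semantic segmentation stage: returns a boundary
    probability map and the (row, col) of the detected center."""
    prob = np.zeros(image.shape[:2])
    prob[::20, :] = 1.0     # toy boundary map standing in for a learned prediction
    return prob, (image.shape[0] // 2, image.shape[1] // 2)

def next_boundary(prob, current_radius, center, n_rays=360, thresh=0.5):
    """Walk outward along radial rays from the current ring and return the
    median radius at which the boundary probability first exceeds thresh."""
    h, w = prob.shape
    hits = []
    for ang in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        for r in range(current_radius + 1, min(h, w) // 2):
            y = int(center[0] + r * np.sin(ang))
            x = int(center[1] + r * np.cos(ang))
            if 0 <= y < h and 0 <= x < w and prob[y, x] > thresh:
                hits.append(r)
                break
    return int(np.median(hits)) if hits else None

def detect_rings(image, max_rings=50):
    prob, center = segment(image)
    radius, rings = 0, []
    # each ring is found relative to the previous one, so the detected
    # instances come with a chronological order for free
    for _ in range(max_rings):
        radius = next_boundary(prob, radius, center)
        if radius is None:
            break
        rings.append(radius)
    return rings

print(detect_rings(np.zeros((200, 200, 3))))
```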
Buchner_Learning_and_Aggregating_Lane_Graphs_for_Urban_Automated_Driving_CVPR_2023
Abstract Lane graph estimation is an essential and highly challenging task in automated driving and HD map learning. Existing methods using either onboard or aerial imagery struggle with complex lane topologies, out-of-distribution scenarios, or significant occlusions in the image space. Moreover, merging overlapping lane graphs to obtain consistent large-scale graphs remains difficult. To overcome these challenges, we propose a novel bottom-up approach to lane graph estimation from aerial imagery that aggregates multiple overlapping graphs into a single consistent graph. Due to its modular design, our method allows us to address two complementary tasks: predicting ego-respective successor lane graphs from arbitrary vehicle positions using a graph neural network and aggregating these predictions into a consistent global lane graph. Extensive experiments on a large-scale lane graph dataset demonstrate that our approach yields highly accurate lane graphs, even in regions with severe occlusions. The presented approach to graph aggregation proves to eliminate inconsistent predictions while increasing the overall graph quality. We make our large-scale urban lane graph dataset and code publicly available at http://urbanlanegraph.cs.uni-freiburg.de.
1. Introduction Most automated driving vehicles rely on the knowledge of their immediate surroundings to safely navigate urban environments. Onboard sensors including LiDARs and cameras provide perception inputs that are utilized in multiple tasks such as localization [7, 21, 27], tracking [4], or scene understanding [20, 24, 26, 37] to aggregate representations of the environment. However, robust planning and control typically require vastly more detailed and less noisy world models in the form of HD map data [12]. In particular, information on lane parametrization and connectivity is essential for both planning future driving maneuvers as well as high-level navigation tasks. Creating and maintaining HD maps in the form of lane graphs is a time-consuming and arduous task due to the large amount of detail required in the annotation and the data curation process, including map updates based on local environment changes such as construction sites. *Equal contribution. Figure 1. Our approach predicts accurate lane graphs from aerial images of complex urban environments. We visualize the estimated lane graph in magenta and indicate model initialization points with yellow circles. Previous approaches to lane graph estimation have shown shortcomings in predicting lane graphs due to multiple deficiencies: On the one hand, methods using onboard imagery typically degrade at complex real-world intersections and under significant occlusions, e.g., when following another vehicle [5, 6]. On the other hand, methods based on aerial imagery show reduced performance when confronted with occlusions in the bird's-eye-view (BEV) due to, e.g., vegetation or shadows, and suffer from catastrophic drift when unconstrained in out-of-distribution scenarios [30]. Previous works treat intersections and non-intersections inherently differently [15] and thus require elaborate heuristics and post-processing to merge single predictions into a consistent lane graph. Moreover, prior works do not focus on use cases where multiple predicted graphs must be merged into a single consistent solution, which is essential for enabling the automatic generation of highly detailed lane graphs of large contiguous regions. Related to the aforementioned challenges, we propose a novel two-stage graph neural network (GNN) approach termed LaneGNN that operates on single aerial color images for lane graph prediction. Inspired by methods in the field of trajectory prediction [8], we formulate a bottom-up approach according to which we place a virtual agent into a local crop of the aerial image and predict reachable successor lane graphs from its positions. To transform multiple disjoint local solutions into a single global solution, we aggregate a global representation by iteratively inferring the lane graph from consecutive poses, ultimately imitating real-world driving behavior. This iterative approach not only increases the predicted area covered but also improves graph accuracy based on data association and rejection. Note that we do not require any human in the loop to perform the graph aggregation. We visualize the output of our graph aggregation procedure in Fig. 1, in which we superimpose the predicted graph on the aerial image input.
Using this framework, we envision two applications: ego-centered successor path prediction and full lane graph estimation by aggregation. To summarize, the main contributions of this work are: •An innovative bottom-up approach to lane graph estimation in challenging environments that explicitly encodes graph-level lane topology from input aerial images in a scenario-agnostic manner. •A novel graph aggregation scheme enabling robust and method-agnostic merging of graph-level predictions. •The large-scale lane graph dataset UrbanLaneGraph comprising high-resolution aerial images aligned with dense lane graph annotations aggregated from the Argoverse2 dataset, which we make publicly available. •Extensive experiments and ablation studies demonstrating the significance of our findings.
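As a small illustration of what merging overlapping graph-level predictions can look like, the sketch below snaps nodes of a locally predicted successor graph onto an existing global graph within a distance threshold; the networkx representation and the snapping radius are illustrative assumptions and not the paper's actual aggregation scheme, which additionally performs data association and rejection.

```python
import math
import networkx as nx

def merge_graph(global_graph, local_graph, snap_radius=2.0):
    """Merge a locally predicted lane graph into the global one: every local
    node is either snapped to an existing global node within snap_radius
    (in map coordinates) or added as a new node; edges follow the mapping."""
    mapping = {}
    for n, data in local_graph.nodes(data=True):
        best, best_d = None, snap_radius
        for g, gdata in global_graph.nodes(data=True):
            d = math.dist(data["pos"], gdata["pos"])
            if d < best_d:
                best, best_d = g, d
        if best is None:
            best = f"g{global_graph.number_of_nodes()}"
            global_graph.add_node(best, pos=data["pos"])
        mapping[n] = best
    for u, v in local_graph.edges():
        global_graph.add_edge(mapping[u], mapping[v])   # directed successor edge
    return global_graph

# Two overlapping successor-graph predictions from nearby virtual agent poses.
g1 = nx.DiGraph(); g1.add_edge(0, 1); g1.add_edge(1, 2)
nx.set_node_attributes(g1, {0: (0, 0), 1: (5, 0), 2: (10, 0)}, "pos")
g2 = nx.DiGraph(); g2.add_edge(0, 1)
nx.set_node_attributes(g2, {0: (5.5, 0.4), 1: (10, 5)}, "pos")

G = merge_graph(nx.DiGraph(), g1)
G = merge_graph(G, g2)
print(G.number_of_nodes(), G.number_of_edges())  # 4 nodes, 3 edges
```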
Jin_Video-Text_As_Game_Players_Hierarchical_Banzhaf_Interaction_for_Cross-Modal_Representation_CVPR_2023
Abstract Contrastive learning-based video-language representation learning approaches, e.g., CLIP, have achieved outstanding performance, which pursue semantic interaction upon pre-defined video-text pairs. To clarify this coarse-grained global interaction and move a step further, we have to encounter challenging shell-breaking interactions for fine-grained cross-modal learning. In this paper, we creatively model video-text as game players with multivariate cooperative game theory to wisely handle the uncertainty during fine-grained semantic interaction with diverse granularity, flexible combination, and vague intensity. Concretely, we propose Hierarchical Banzhaf Interaction (HBI) to value possible correspondence between video frames and text words for sensitive and explainable cross-modal contrast. To efficiently realize the cooperative game of multiple video frames and multiple text words, the proposed method clusters the original video frames (text words) and computes the Banzhaf Interaction between the merged tokens. By stacking token merge modules, we achieve cooperative games at different semantic levels. Extensive experiments on commonly used text-video retrieval and video-question answering benchmarks with superior performances justify the efficacy of our HBI. More encouragingly, it can also serve as a visualization tool to promote the understanding of cross-modal interaction, which may have a far-reaching impact on the community. Project page is available at https://jpthu17.github.io/HBI/. 1. Introduction Representation learning based on both vision and language has many potential benefits and direct applicability to cross-modal tasks, such as text-video retrieval [20, 32] and video-question answering [28, 49]. Visual-language learning has recently boomed due to the success of contrastive learning [9–11, 19, 48, 61–64], e.g., CLIP [40], which projects the video and text features into a common latent space according to the semantic similarities of video-text pairs. In this manner, cross-modal contrastive learning enables networks to learn discriminative video-language representations. *Corresponding author: Li Yuan, Jie Chen. Figure 1. (a) Cross-modal contrastive methods only learn a global semantic interaction from the coarse-grained labels of video-text pairs. (b) We model cross-modal alignment as a multivariate cooperative game process. Specifically, we use Banzhaf Interaction to value possible correspondence between video frames and text words and consider it as an additional learning signal. The cross-modal contrastive approach [14, 20, 32] typically models the cross-modal interaction via solely the global similarity of each modality. Specifically, as shown in Fig. 1a,
1. Introduction Representation learning based on both vision and lan-guage has many potential benefits and direct applicability to *Corresponding author: Li Yuan, Jie Chen. 1/0 A woman in dress and a man in suit sit together. (a) Cross -modal contrastive modeling (b) Multivariate cooperative game modeling (Ours)Banzhaf Interaction : 0.7 0.0 0.6Figure 1. (a) Cross-modal contrastive methods only learn a global semantic interaction from the coarse-grained labels of video-text pairs. (b) We model cross-modal alignment as a multivariate co-operative game process. Specifically, we use Banzhaf Interaction to value possible correspondence between video frames and text words and consider it as an additional learning signal. cross-modal tasks, such as text-video retrieval [20, 32] and video-question answering [28, 49]. Visual-language learning has recently boomed due to the success of contrastive learn-ing [9 –11,19,48,61 –64],e.g., CLIP [40], to project the video and text features into a common latent space according to the semantic similarities of video-text pairs. In this manner, cross-modal contrastive learning enables networks to learn discriminative video-language representations. The cross-modal contrastive approach [14, 20, 32] typi-cally models the cross-modal interaction via solely the global similarity of each modality. Specifically, as shown in Fig. 1a, This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 2472 it only exploits the coarse-grained labels of video-text pairs to learn a global semantic interaction. However, in most cases, we expect to capture fine-grained interpretable infor-mation, such as how much cross-modal alignment is helped or hindered by the interaction of a visual entity and a textual phrase. Representation that relies on cross-modal contrastive learning cannot do this in a supervised manner, as manu-ally labeling these interpretable relationships is unavailable, especially on large-scale datasets. This suggests that there might be other learning signals that could complement and improve pure contrastive formulations. In contrast to prior works [20, 32, 45], we model cross-modal representation learning as a multivariate cooperative game by formulating video and text as players in a coop-erative game, as illustrated in Fig. 1b. Intuitively, if visual representations and textual representations have strong se-mantic correspondence, they tend to cooperate together and contribute to the cross-modal similarity score. Motivated by this spirit, we consider the set containing multiple represen-tations as a coalition, and propose to quantify the trend of cooperation within a coalition via the game-theoretic inter-action index, i.e., Banzhaf Interaction [18] for its simplicity and efficiency. Banzhaf Interaction is one of the most popu-lar concepts in cooperative games [33]. As shown in Fig. 2, it measures the additional benefits brought by the coalition compared with the costs of the lost coalitions of these players with others. When a coalition has high Banzhaf Interaction, it will also have a high contribution to the semantic similar-ity. Thus, we can use Banzhaf Interaction to value possible correspondence between video frames and text words for sensitive and explainable cross-modal contrast. To this end, we propose Hierarchical Banzhaf Interac-tion (HBI). 
Concretely, we take video frames and text words as players and the cross-modality similarity measurement as the characteristic function in the cooperative game. Then, we use the Banzhaf Interaction to represent the trend of cooperation between any set of features. Besides, to efficiently generate coalitions among game players, we propose an adaptive token merge module to cluster the original video frames (text words). By stacking token merge modules, we achieve hierarchical interaction, i.e., entity-level interactions on the frames and words, action-level interactions on the clips and phrases, and event-level interactions on the segments and paragraphs. In particular, we show that the Banzhaf Interaction index satisfies the Symmetry, Dummy, Additivity, and Recursivity axioms in Sec. 3.4. This result implies that the representation learned via Banzhaf Interaction has four properties that the features of the contrastive method do not. We find that explicitly establishing the fine-grained interpretable relationships between video and text brings a sensible improvement to already very strong video-language representation learning results. Figure 2. The intuition of Banzhaf Interaction in video-text representation learning. We refer the reader to Eq. 3 for the detailed formula. When some players (frames and words) form a coalition, we lose the coalitions of these players with others. In other words, the lost coalition is mutually exclusive from the target coalition. Banzhaf Interaction measures the difference between the benefits of the coalition and the costs of the lost coalitions. Experiment results on three text-video retrieval benchmark datasets (MSRVTT [50], ActivityNet Captions [21], and DiDeMo [1]) and the video question answering benchmark dataset (MSRVTT-QA [49]) show the advantages of the proposed method. The main contributions are as follows: •To the best of our knowledge, we are the first to model video-language learning as a multivariate cooperative game process and propose a novel proxy training objective, which uses Banzhaf Interaction to value possible correspondence between video frames and text words for sensitive and explainable cross-modal contrast. •Our method achieves new state-of-the-art performance on text-video retrieval benchmarks of MSRVTT, ActivityNet Captions and DiDeMo, as well as on the video-question answering task on MSRVTT-QA. •More encouragingly, our method can also serve as a visualization tool to promote the understanding of cross-modal interaction, which may have a far-reaching impact on the community.
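For reference, the standard pairwise Banzhaf interaction index from cooperative game theory, which the formulation above presumably instantiates (the exact Eq. 3 is not reproduced here), reads as follows for a game (N, v), where N would be the set of frame/word tokens and v the cross-modal similarity of a coalition.

```latex
% Standard pairwise Banzhaf interaction index for players i, j in a game (N, v).
% Reading it with N = frame/word tokens and v = coalition similarity is our
% interpretation of the setup described above, not a quotation of Eq. 3.
\[
  I_B\bigl([i,j]\bigr) \;=\; \sum_{S \subseteq N \setminus \{i,j\}}
  \frac{1}{2^{\,|N|-2}}
  \Bigl[ v\bigl(S \cup \{i,j\}\bigr) - v\bigl(S \cup \{i\}\bigr)
        - v\bigl(S \cup \{j\}\bigr) + v\bigl(S\bigr) \Bigr]
\]
```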
Huang_Rethinking_Few-Shot_Medical_Segmentation_A_Vector_Quantization_View_CVPR_2023
Abstract The existing few-shot medical segmentation networks share the same practice that the more prototypes, the better the performance. This phenomenon can be theoretically interpreted in a Vector Quantization (VQ) view: the more prototypes, the more clusters are separated from pixel-wise feature points distributed over the full space. However, as we think further about few-shot segmentation from this perspective, we find that the clusterization of feature points and the adaptation to unseen tasks have not received enough attention. Motivated by this observation, we propose a learning VQ mechanism consisting of grid-format VQ (GFVQ), self-organized VQ (SOVQ) and residual oriented VQ (ROVQ). To be specific, GFVQ generates the prototype matrix by averaging square grids over the spatial extent, which uniformly quantizes the local details; SOVQ adaptively assigns the feature points to different local classes and creates a new representation space where the learnable local prototypes are updated with a global view; ROVQ introduces residual information to fine-tune the aforementioned learned local prototypes without re-training, which benefits the generalization performance due to its irrelevance to the training task. We empirically show that our VQ framework yields state-of-the-art performance over abdomen, cardiac and prostate MRI datasets and expect this work will provoke a rethink of the current few-shot medical segmentation model design. Our code will soon be publicly available.
1. Introduction Semantic segmentation is one of the fundamental tasks in medical imaging applications, e.g., disease diagnosis [1, 2], monitoring [3, 4], and screening [5]. With sufficient labeled data being fed into the deep network, segmentation models can achieve promising results. However, in most practical scenarios, the segmentation models often suffer from the lack of required data due to the expensive cost of expert dense annotations and the limited number of abnormal organ and rare lesion samples. *Correspondence to: Tingfa Xu and Jianan Li. Figure 1. Schematic diagram of different clustering representation schemes of few-shot medical segmentation: (a) the basic VQ with each class represented by a prototype vector; (b) the GFVQ, i.e., the existing local prototype generation, extracts a prototype array via a mobile pooling window; (c) the proposed SOVQ assigns the pixel-wise features to multiple local classes adaptively; (d) the proposed ROVQ fine-tunes the learned prototype vectors parameterlessly to enhance the adaptation performance to unseen tasks. The arrows denote the more accurate edge in (d). Recently, few-shot medical image segmentation has been widely studied to reduce the requirement for large-scale datasets with dense annotations [6–8]. Currently, the common inference paradigm of few-shot medical image segmentation is to encode a prototype to represent the novel class appearing in the support image (Fig. 1(a)) and compute the similarity with query features to perform segmentation [9–12]. The essential work of such a framework lies in prototype learning, which is carried out only by the feature encoder. This encoder is learned with training tasks in the training stage and generalized to unseen tasks in the testing stage. From a vector quantization (VQ) view, the prototype vectors representing different classes can be considered as the known sample points in a coding space, and the pixel-wise query feature points are supposed to be classified by decision boundaries determined by the known support points [13–15]. In this view, the prototype learning problem is rethought as a VQ optimization problem, and the prototype vectors learned from support features are thought to serve as the support vectors delineating the encoding space for query features. Therefore, the aim of the prototypical few-shot segmentation task translates into the requirements for the prototype vectors learned by VQ: discriminative representation and strong generalization. The requirement for discriminative representation has received attention from many researchers in the form of the prototype generation strategy. Ouyang et al. [10] applied non-overlapping pooling windows to support features, generating multiple local prototypes; Yu et al. [11] extracted prototype arrays in the presence of a grid constraint and performed a location-guided comparison; Li et al. [12] designed a registration mechanism to align local prototypes between support and query features. The aforementioned schemes can be summarized intuitively as: the more prototypes, the better the segmentation performance.
However, experiments show that as the number of prototypes increases, the performance deteriorates: on one hand, the set of pooling prototypes reaches saturation of its representation capacity; on the other hand, too many prototypes cannot distinguish between classes, resulting in blurred edges or even misclassification. Unlike the requirement for discriminative representation, the requirement for strong generalization is often ignored by previous works on prototype learning. To improve the generalization capability, most studies adopt a unified lightweight encoding network to simultaneously process support and query images [7, 16]. However, few efforts have been devoted to the generalization of prototype learning. To meet the requirement for discriminative representation, we detail two sub-requirements, i.e., ❶ the clustering of feature points and ❷ the embedding of prototype vectors. Considering these two sub-requirements, we propose a self-organized vector quantization (SOVQ) method, inspired by the self-organized mapping algorithm [17, 18], containing a self-organized clustering (SOC) and a local mapping (LM). To abstract features more exactly, SOVQ first creates a new neuron representation space, where neurons are initialized as prototypes and arranged in a normative array. Then the feature points are assigned to different neurons adaptively (for ❶), and the learnable neurons are optimized by the features collaboratively (for ❷). Through iterative learning, the feature points are clustered reasonably and each cluster is represented by a neuron with a global view (Fig. 1(c)). Furthermore, the LM strategy is designed to remap neurons to the encoding space, ensuring that the prototypes and query features are embedded consistently. Each neuron is interpreted as a weighted sum of GFVQ prototypes via inverse distance weighting and interpolated to GFVQ, forming a topological prototype layout. In summary, through self-organizing the feature points in an unsupervised manner, SOVQ fits the space of interest. The requirement for strong generalization is also divided into two sub-requirements: ❸ to avoid overfitting to training tasks and ❹ to adapt the model to testing tasks. Thus a residual oriented vector quantization (ROVQ) is put forward, which introduces a residual connection to the final vector layout and fine-tunes the learned vectors. On the one hand, the parameter-free learning acts as a regularization term in the training phase to prevent overfitting (for ❸); on the other hand, the residual information with labels guides the prototype vector to get closer to its inherent characteristics and differentiate from other classes (for ❹), which contributes to maintaining details and forming a clearer edge (Fig. 1(d)). Additionally, following the earlier works on multiple prototype generation, we employ a grid-format vector quantization (GFVQ) to obtain compressed feature points. As shown in Fig. 1(a), the features are rasterized in a grid and compressed by average pooling. Although GFVQ and SOVQ both extract prototypes representing local features, SOVQ is equipped with a global receptive field and provides a more specific division of the feature space, while GFVQ is restricted to its grid-format receptive field.
To satisfy the require-ment for strong representation and generalization, i.e., sub-requirements ❶-❹, we propose a learning VQ mechanism: a dual structure is employed to integrate GFVQ and SOVQ generating well-representative and limited-quantity proto-type vector set, and the former serves as compressed fea-ture reference for LM of SOVQ. Then the prototype set is fine-tuned with ROVQ to maintain the detailed informa-tion and enhance generalization capability, and finally the dense prediction is performed by similarity measurement. We show our method achieves the state-of-the-art perfor-mance on Abdomen, Cardiac and Prostate MR images with extensive experiments.
Bober-Irizar_Architectural_Backdoors_in_Neural_Networks_CVPR_2023
Abstract Machine learning is vulnerable to adversarial manipulation. Previous literature demonstrated that at the training stage attackers can manipulate data [14] and data sampling procedures [29] to control model behaviour. A common attack goal is to plant backdoors, i.e. force the victim model to learn to recognise a trigger known only by the adversary. In this paper, we introduce a new class of backdoor attacks that hide inside model architectures, i.e. in the inductive bias of the functions used to train. These backdoors are simple to implement, for instance by publishing open-source code for a backdoored model architecture that others will reuse unknowingly. We demonstrate that model architectural backdoors represent a real threat and, unlike other approaches, can survive a complete re-training from scratch. We formalise the main construction principles behind architectural backdoors, such as a connection between the input and the output, and describe some possible protections against them. We evaluate our attacks on computer vision benchmarks of different scales and demonstrate the underlying vulnerability is pervasive in a variety of common training settings.
1. Introduction The Machine Learning (ML) community now faces a threat posed by backdoored neural networks; models which are intentionally modified by an attacker in the supply chain to insert hidden behaviour [3, 14]. A backdoor causes a network's behaviour to change arbitrarily when a specific secret 'trigger' is present in the model's input, while behaving as the defender intended when the trigger is absent (retaining a high evaluation performance). The vast majority of current backdoor attacks in the literature work by changing the trained weights of models [14, 15] – here the backdoor is planted into the parameters during training of the neural network. *University of Cambridge, UK. †Vector Institute, CA. ‡University of Oxford, UK. §Imperial College London, UK. ¶University of Toronto, CA. This can be done directly (i.e. modify the values of the weights directly [12, 15]), or indirectly by sampling adversarially [29] and modifying training data [14]. This means that when the weights are later modified by another party (e.g. through fine-tuning), the backdoor could feasibly be removed or weakened [34]. When the weights provided by an attacker are discarded entirely (e.g. through re-training from scratch on a new dataset), any embedded backdoor would of course naturally be discarded. However, the performance of a neural network depends not only on its weights but also on its architecture (the composition and connections between layers in the model). Research has shown that, when given sufficient flexibility, the neural network architectures themselves can be pre-disposed to certain outcomes [11, 38]. The network architectures can be seen as an inductive bias of the ML model. This raises a new question: Can the network architectures themselves be modified to hide backdoors? In this paper we investigate whether an adversary can use neural network architectures to perform backdoor attacks, forcing the model to become sensitive to a specific trigger applied to an image. We demonstrate that if an attacker can slightly manipulate the architecture, using only common components, they can introduce backdoors that survive re-training from scratch on a completely new dataset, i.e. making these model backdoors weights- and dataset-agnostic. We describe a way to construct such Model Architecture Backdoors (MAB) and formalize their requirements. We find that architectural backdoors need to: (1) operate directly on the input and link the input to its output; (2) (ideally) have a weight-agnostic implementation; (3) have asymmetric components to launch targeted attacks. We demonstrate how such requirements make MAB detection possible and show that without these requirements, the learned backdoors will struggle to survive re-training. We make the following contributions: •We show a new class of backdoor attacks against neural networks, where the backdoor is planted inside of the
Huang_Parametric_Implicit_Face_Representation_for_Audio-Driven_Facial_Reenactment_CVPR_2023
Abstract Audio-driven facial reenactment is a crucial technique that has a range of applications in film-making, virtual avatars and video conferences. Existing works either em-ploy explicit intermediate face representations ( e.g., 2D fa-cial landmarks or 3D face models) or implicit ones ( e.g., Neural Radiance Fields), thus suffering from the trade-offs between interpretability and expressive power, hence be-tween controllability and quality of the results. In this work, we break these trade-offs with our novel parametric implicit face representation and propose a novel audio-driven fa-cial reenactment framework that is both controllable and can generate high-quality talking heads. Specifically, our parametric implicit representation parameterizes the im-plicit representation with interpretable parameters of 3D face models, thereby taking the best of both explicit and im-plicit methods. In addition, we propose several new tech-niques to improve the three components of our framework, including i) incorporating contextual information into the audio-to-expression parameters encoding; ii) using condi-tional image synthesis to parameterize the implicit repre-sentation and implementing it with an innovative tri-plane structure for efficient learning; iii) formulating facial reen-actment as a conditional image inpainting problem and proposing a novel data augmentation technique to improve model generalizability. Extensive experiments demonstrate that our method can generate more realistic results than previous methods with greater fidelity to the identities and talking styles of speakers.
1. Introduction Audio-driven facial reenactment, also known as audio-driven talking head generation or synthesis, plays an im-portant role in various applications, such as digital human, film-making and virtual video conference. It is a challeng-ing cross-modal task from audio to visual face, which re-quires the generated talking heads to be photo-realistic and *Corresponding author is Guanbin Li. Input Audio (a) Explicit Representa�on (b) Implicit Representa�on (c) Parametric Implicit Representa�on (Ours) Parameter (a) ER (b) IR (c) PIR (Ours)Interpretability Expressive Power Strong Strong Weak Input Audio Input Audio Parameter Figure 1. Comparison between previous explicit, implicit repre-sentations and our parametric implicit representation (PIR). (a) Explicit representations ( e.g., 3D face models) have interpretable parameters but lack expressive power. (b) Implicit representa-tions ( e.g., NeRF) have strong expressive power but are not in-terpretable. (c) Our PIR takes the best of both approaches and is both interpretable and expressive, thus paving the way for control-lable and high-quality audio-driven facial reenactment. have lip movements synchronized with the input audio. According to the intermediate face representations, ex-isting facial reenactment methods can be roughly classi-fied into two categories: explicit and implicit methods. Between them, explicit methods [5, 18, 27, 29, 30, 34, 37, 40, 44] exploit relatively sophisticated 2D ( e.g., 2D facial landmarks [5, 18, 29, 34, 44]) or 3D ( e.g., 3D Morphable Model [27, 30, 37, 40]) parametric face models to recon-struct 2D or 3D faces, and map them to photo-realistic faces with a rendering network such as the Generative Adversar-ial Networks (GANs) [32, 39]. Their distinct advantage is This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 12759 the controllability ( e.g., expressions) resulting from their in-terpretable facial parameters. However, despite this advan-tage, the parametric face models used in explicit methods are often sparse and have very limited expressive power, which inevitably sacrifices the quality of synthesized faces (e.g., the inaccurate lip movements and blurry mouth caused by the missing teeth area in 3D face models). In contrast, implicit methods [12, 16, 17, 24, 25, 28, 42, 43] use implicit 2D or 3D representations that are more expressive and can generate more realistic faces. For example, Neural Radi-ance Fields (NeRF) based methods [12, 17, 25] are one of the more representative implicit methods that use NeRF to represent the 3D scenes of talking heads. Although being more expressive and producing higher-quality results, im-plicit methods are not interpretable and lose the control-lability of the synthesis process, thus requiring model re-training to change its target person. As a result, the explicit and implicit methods mentioned above form a trade-off be-tween the interpretability andexpressive power of interme-diate face representations, while a representation that is both interpretable and expressive remains an open problem. In this work, we break the above trade-off by propos-ing a parametric implicit representation that is both inter-pretable and expressive, paving the way for controllable and high-quality audio-driven facial reenactment. 
Specif-ically, we propose to parameterize implicit face represen-tations with the interpretable parameters of the 3D Mor-phable Model (3DMM) [10] using a conditional image syn-thesis paradigm. In our parametric implicit representation, the 3DMM parameters offer interpretability and the implicit representation offers strong expressive power, which take the best of both explicit and implicit methods (Fig. 1). To implement our idea, we propose a novel framework consist-ing of three components: i) contextual audio to expression (parameters) encoding; ii) implicit representation parame-terization; iii) rendering with parametric implicit represen-tation. Among them, our contextual audio to expression encoding component employs a transformer-based encoder architecture to capture the long-term context of an input audio, making the resulting talking heads more consistent and natural-looking; our implicit representation parameter-ization component uses a novel conditional image synthe-sis approach for the parameterization, and innovatively em-ploys a tri-plane based generator offered by EG3D [3] to learn the implicit representation in a computationally effi-cient way; our rendering with parametric implicit represen-tation component formulates face reenactment as an image inpainting problem conditioned on the parametric implicit representation to achieve a consistent and natural-looking “blending” of the head and torso of a target person. In addi-tion, we observe that the model slightly overfits to the train-ing data consisting of paired audio and video, causing jitters in the resulting talking heads whose lip movements are re-quired to be synchronized with unseen input audio. To help our model generalize better and produce more stable results, we further propose a simple yet effective data augmentation strategy for our rendering component. In summary, our main contributions include: • We propose an innovative audio-driven facial reen-actment framework based on our novel paramet-ric implicit representation, which breaks the previ-ous trade-off between interpretability and expressive power, paving the way for controllable and high-quality audio-driven facial reenactment. • We propose several new techniques to improve the three components of our innovative framework, in-cluding: i) employing a transformer-based encoder ar-chitecture to incorporate contextual information into the audio to expression (parameters) encoding; ii) us-ing a novel conditional image synthesis approach for the parameterization of implicit representation, which is implemented with an innovative tri-plane based gen-erator [3] for efficient learning; iii) formulating facial reenactment as a conditional image inpainting problem for natural “blending” of head and torso, and propos-ing a simple yet effective data augmentation technique to improve model generalizability. • Extensive experiments show that our method can gen-erate high-fidelity talking head videos and outperforms state-of-the-art methods in both objective evaluations and user studies.
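As a concrete illustration of the first component, the contextual audio-to-expression encoding, the snippet below sketches a transformer encoder that maps a window of audio features around the current frame to 3DMM expression parameters. The dimensions (80-dimensional mel features, 64 expression parameters, a window of 2T+1 frames) and the centre-token readout are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class AudioToExpression(nn.Module):
    """Minimal sketch of a contextual audio-to-expression encoder."""

    def __init__(self, audio_dim=80, model_dim=256, expr_dim=64,
                 num_layers=4, num_heads=4):
        super().__init__()
        self.in_proj = nn.Linear(audio_dim, model_dim)
        layer = nn.TransformerEncoderLayer(d_model=model_dim, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.out_proj = nn.Linear(model_dim, expr_dim)

    def forward(self, audio_window: torch.Tensor) -> torch.Tensor:
        # audio_window: (B, 2T+1, audio_dim), the context around the current frame.
        h = self.encoder(self.in_proj(audio_window))
        # Read out the centre frame's contextualised token, so long-term audio
        # context shapes each frame's expression parameters.
        centre = h[:, h.shape[1] // 2]
        return self.out_proj(centre)

# expr = AudioToExpression()(torch.randn(2, 9, 80))   # -> (2, 64) expression params
```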
Chang_Making_Vision_Transformers_Efficient_From_a_Token_Sparsification_View_CVPR_2023
Abstract The computational complexity of Vision Transformers (ViTs), quadratic in the number of tokens, limits their practical applications. Several works propose to prune redundant tokens to achieve efficient ViTs. However, these methods generally suffer from (i) dramatic accuracy drops, (ii) application difficulty in the local vision transformer, and (iii) non-general-purpose networks for downstream tasks. In this work, we propose a novel Semantic Token ViT (STViT), for efficient global and local vision transformers, which can also be revised to serve as a backbone for downstream tasks. The semantic tokens represent cluster centers, and they are initialized by pooling image tokens in space and recovered by attention, which can adaptively represent global or local semantic information. Due to the cluster properties, a few semantic tokens can attain the same effect as vast image tokens, for both global and local vision transformers. For instance, only 16 semantic tokens on DeiT-(Tiny,Small,Base) can achieve the same accuracy with more than 100% inference speed improvement and nearly 60% FLOPs reduction; on Swin-(Tiny,Small,Base), we can employ 16 semantic tokens in each window to further speed it up by around 20% with a slight accuracy increase. Besides great success in image classification, we also extend our method to video recognition. In addition, we design a STViT-R(ecovery) network that restores the detailed spatial information on top of STViT, making it work for downstream tasks, something previous token sparsification methods cannot do. Experiments demonstrate that our method can achieve competitive results compared to the original networks in object detection and instance segmentation, with over 30% FLOPs reduction for the backbone.
1. Introduction In contrast to standard Convolutional Neural Networks (CNNs) approaches which process images pixel-by-pixel, *Work done during an internship at Alibaba Group. †Equal corresponding authors. ‡Work done at Alibaba Group, and now affiliated with Amazon.Vision Transformers (ViTs) [15, 26, 35, 36, 43] treat an im-age as a sequence of patch/image tokens, and have shown promising performance in prevalent visual recognition sce-narios. However, these superior performances do not come for free: the quadratic computational complexity to the number of image tokens limits their application in practice. Previous works [33, 56] have illustrated the large amount of redundancy in the image tokens and also shown the ef-fect of filtering out unimportant tokens normally according to predefined scoring mechanism. However, these meth-ods face the following challenges. Firstly, the predefined scoring mechanisms for filtering are generally imprecise. In Figure 1, on the left we visualize the class token val-ues in different layers which are commonly used to score the token importance [16,24, 45]. Different layers have dif-ferent value distributions, thus using these imprecise scores for filtering would lead to unsatisfactory performance. For example, EViT [24] has an accuracy drop of 1.3% when saving 50% FLOPs on DeiT-S [35]. Secondly, the remain-ing tokens do not distribute evenly in space any more, mak-ing them hard to work in local vision transformers1. Finally, large-scale token pruning tremendously damages the spatial structure and positional information, and causes difficulties when applied to downstream tasks, which they do not pro-pose a solution to deal with. To solve these problems, we propose Semantic Token ViT (STViT), for efficient global and local vision trans-formers, which also can be revised to serve as backbone for downstream tasks. The proposed approach is based on the following observations: (i) unlike local CNNs which learn spatial structure of images, vision transformer discretizes feature map as tokens for global feature exploration, re-lieving the requirements for maintaining the whole image structure and information; (ii) discrete tokens are more ben-eficial for optimization [38]; (iii) in Figure 1, on the right shows the attention maps in different transformer layers, and there are only several vertical lines in the deep lay-ers, which means that only a few tokens with global se-1In this paper, we define the vision transformer with global self-attention (like DeiT) as global vision transformer and the vision trans-former with local self-attention (like Swin) as local vision transformer. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 6195 Figure 1. Left: the attention values of class tokens (normalized and reshaped in image shape) in different self-attention layers. Right: the attention maps in different self-attention layers. Zoom-in for better visibility. mantic information matter. Thus, we argue that it is not necessary to maintain massive structured tokens for ViTs, especially in the deep layers. Employing a few discrete to-kens with high-level semantic information can potentially achieve both high performance and efficiency. 
In STViT, the semantic tokens represent the cluster cen-ters, and the number of them is far less than the original im-age tokens, significantly reducing the computational cost. Inspired by the fact that multi-head attention can conduct the cluster center recovery (Supplementary A.7), we only employ the off-the-shelf self-attention to generate the se-mantic tokens. Specifically, the first few transformer layers are kept unchanged to obtain the image tokens with low-level features. The image tokens are then fed into our se-mantic token generation module (STGM) consisting of at least two transformer layers to generate semantic tokens. In each self-attention layer, the semantic tokens are input as queries, and the image tokens are fed as keys and val-ues. The semantic tokens dynamically aggregate image to-kens through the attention layers to recover cluster centers. In the first attention layer, the semantic tokens are initial-ized by an intra and inter-window spatial pooling which takes into account incorporating semantic information in each window and maximizing distance between adjacent windows. Thanks to this spatial initialization, the semantic tokens mainly incorporate local semantic information and achieve discrete and uniform distribution in space. In the following attention layer, besides further clustering, the se-mantic tokens are equipped with global cluster centers, and the network can adaptively select partial semantic tokens to focus on global semantic information. After the STGM, the original image tokens are discarded, and only seman-tic tokens are kept for the subsequent transformer layers. Because the generation of semantic tokens is flexible and space-aware, our method can be plugged into both global and local vision transformers. The semantic tokens can be produced in each window for the local vision transformer. Another property of STViT is its capability to serve as a backbone for downstream tasks, such as object detection and instance segmentation. Discussions have been miss-ing in previous methods [16, 24, 32, 45, 56] about how to use them in downstream task under the massive loss of spatial information during the token sparsification process, which actually seriously impedes the application of their method. Instead, we design a novel STViT-R network basedon STViT where a recovery module and dumbbell unit are adopted to periodically restore the full resolution feature map while the intermediate transformer layers continue to use semantic tokens to save computation cost, making our method work in downstream task. The effectiveness of the proposed method is validated via a comprehensive empirical study on image and video ViT models. Only 16 semantic tokens on DeiT-(Tiny, Small, Base) achieve nearly 50% inference time reduction without any accuracy degradation; on Swin-(Tiny, Small, Base), we also improve the inference throughput by nearly 20% with slight accuracy increase. Moreover, the proposed STViT-R achieves promising results on object detection and instance segmentation. To the best of our knowledge, this is one of first works to apply the token sparsification algorithm in lo-cal vision transformers, and use the ViTs as backbones in downstream tasks after large-scale token pruning. Our find-ings in ViTs uncover that maintaining the full-size feature map is unnecessary, and a few tokens with high-level se-mantic representations can achieve both high performance and efficiency. 
Thanks to its simplicity and general-purpose ability, our method can also serve as a new efficient ViT baseline architecture and a starting point for further research from the token sparsification perspective.
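To illustrate the core of the semantic token generation module, the sketch below shows one cross-attention step in which pooled semantic tokens act as queries and image tokens as keys and values, so that the tokens recover cluster centers. The feature dimension, head count, and the 4x4 pooling grid (16 tokens) are illustrative choices, and the real module stacks at least two such attention layers with intra/inter-window pooling.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticTokenGeneration(nn.Module):
    """Sketch of one attention step of a semantic-token generation module:
    semantic tokens (queries) aggregate image tokens (keys/values)."""

    def __init__(self, dim=384, num_heads=6, grid=4):
        super().__init__()
        self.grid = grid
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)

    def forward(self, image_tokens: torch.Tensor, hw: int) -> torch.Tensor:
        # image_tokens: (B, hw*hw, dim), laid out on an hw x hw grid.
        B, N, C = image_tokens.shape
        fmap = image_tokens.transpose(1, 2).reshape(B, C, hw, hw)
        # Spatial pooling initialises the semantic tokens (cluster centres).
        init = F.adaptive_avg_pool2d(fmap, self.grid).flatten(2).transpose(1, 2)
        q, kv = self.norm_q(init), self.norm_kv(image_tokens)
        sem, _ = self.attn(q, kv, kv)    # tokens dynamically aggregate image tokens
        return init + sem                # (B, grid*grid, dim) semantic tokens

# sem = SemanticTokenGeneration()(torch.randn(2, 196, 384), hw=14)  # -> (2, 16, 384)
```

After this module, only the 16 semantic tokens are passed to the remaining transformer layers, which is where the FLOPs reduction comes from.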
Jin_RefCLIP_A_Universal_Teacher_for_Weakly_Supervised_Referring_Expression_Comprehension_CVPR_2023
Abstract Referring Expression Comprehension (REC) is a task of grounding the referent based on an expression, and its development is greatly limited by expensive instance-level annotations. Most existing weakly supervised methods are built on two-stage detection networks, which are computationally expensive. In this paper, we resort to the efficient one-stage detector and propose a novel weakly supervised model called RefCLIP. Specifically, RefCLIP redefines weakly supervised REC as an anchor-text matching problem, which avoids the complex post-processing of existing methods. To achieve weakly supervised learning, we introduce an anchor-based contrastive loss to optimize RefCLIP via numerous anchor-text pairs. Based on RefCLIP, we further propose the first model-agnostic weakly supervised training scheme for existing REC models, where RefCLIP acts as a mature teacher to generate pseudo-labels for teaching common REC models. With our careful designs, this scheme can even help existing REC models, e.g., TransVG and SimREC, achieve better weakly supervised performance than RefCLIP. To validate our approaches, we conduct extensive experiments on four REC benchmarks, i.e., RefCOCO, RefCOCO+, RefCOCOg and ReferItGame. Experimental results not only report our significant performance gains over existing weakly supervised models, e.g., +24.87% on RefCOCO, but also show 5x faster inference speed. Project: https://refclip.github.io.
1. Introduction Referring Expression Comprehension (REC), also known as visual grounding [5, 16], aims to locate the target instance in an image based on a referring expres-*Equal Contribution. †Corresponding Author. 123123 456 789Prediction (Pseudo Label) Anchor Points “Person on middle” 2Text Encoder Visual Encoder REC Models … “Person on middle” Pseudo -label learning CandidatesBest Anchor (a) RefCLIP (b) Weakly -supervised Training SchemeRegression Loss Confidence LossOne-stage LossAnchor -text matching SimREC RealGIN TransVG superviseFigure 1. Illustration of the proposed RefCLIP and weakly-supervised training scheme. RefCLIP selects the target bounding box from YOLOv3 via anchor-text matching, which is optimized by anchor-based contrastive learning. Our training scheme uses RefCLIP as a mature teacher to supervise common REC models , which requires no network modifications. sion [25–27, 42, 48]. As a cross-modal recognition task, REC is not limited to a fixed set of object categories and is theoretically capable of any open-ended detection [45]. These appealing properties give REC increasing attention from the community of computer vision [25, 28, 45–48]. However, the expensive instance-level annotation has long plagued its development. To this end, recent progress has been devoted to the re-search of weakly supervised REC models, which aim to learn detection based merely on language information [7, 38, 43]. Specifically, existing methods extend the two-stage object detector like Faster-RCNN [37] to a weakly super-vised REC model. In terms of methodology, they regard the REC as a region-text ranking problem, where the salient re-gions of an image are first extracted by Faster-RCNN and then ranked via cross-modal matching. To achieve weakly supervised training, they only use expressions as supervi-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 2681 sion information and optimize the ranking modules via se-mantic reconstruction [19,20,38] or cross-modal contrastive learning [7, 43]. However, these methods are often inferior in inference speed due to the use of Faster-RCNN. To overcome these limitations, we resort to one-stage de-tectors for weakly supervised REC. Compared with Faster-RCNN, one-stage detectors like YOLOv3 [36] have obvi-ous advantages in efficiency, but it is intractable to directly adapt them to existing weakly supervised schemes. Above all, existing one-stage detectors [17, 36] predict the bound-ing boxes based on the features of the last few convolution layers, also known as anchor points [36]. In terms of multi-scale detection, thousands of bounding boxes will be pre-dicted for an image, so transforming them into region fea-tures becomes more time consuming1. However, we notice that the receptive field of convolution features will be much larger than the actual areas they represent [29], suggesting that an anchor point in the one-stage detector may contain enough information for recognition. Motivated by the above observations, we define weakly supervised REC as an anchor-text matching problem and propose a novel weakly supervised model named RefCLIP . Specifically, we change the task definition from which de-tected region is the referent towhich anchor point has the target bounding box . 
In this case, we can directly rank an-chor points without complex post-processing like ROI pool-ing and NMS [37]. To achieve weakly supervised learning, RefCLIP performs anchor-based contrastive learning inter and intra images, thereby learning vision-language align-ments via numerous anchor-text pairs. Notably, this con-trastive learning scheme also exhibits superior flexibility in negative sample augmentation, which is not constrained by the batch size. In this paper, we also focus on the model-agnostic train-ing scheme for weakly supervised REC. Including Ref-CLIP, all existing solutions are model-specific, which can not directly generalize to existing supervised REC mod-els [5, 25, 42, 45]. To this end, we further propose the first model-agnostic weakly supervised training scheme for REC. Specifically, we use RefCLIP as a teacher to produce pseudo-labels, i.e., bounding boxes, to supervise common REC models. Meanwhile, we also alleviate the confirma-tion bias [1] caused by pseudo-label noise via EMA [39] and data augmentation [13]. In this scheme, existing REC models can be weakly trained without any modification, which makes our work greatly different from the existing ones [7, 18–20, 38]. To validate the proposed RefCLIP and weakly su-pervised training scheme, we conduct extensive experi-ments on four REC benchmarks, i.e., RefCOCO [32], Ref-COCO+ [32], RefCOCOg [30] and ReferItGame [10], and 1With confidence filtering, this processing still requires about 26.6% additional computation on COCO images.compare with a bunch of latest weakly supervised REC models [18, 22, 38, 41]. We apply our training scheme to several representative REC models including RealGIN [45], TransVG [5] and SimREC [25]. Experimental results show obvious performance gains of our RefCLIP over existing weakly supervised REC models, e.g., +21.25% on Ref-COCO. Meanwhile, with our careful designs, the proposed training scheme can even help these REC models obtain new SOTA performance of weakly supervised REC. Conclusively, our main contributions are three-fold: • We propose a novel one-stage contrastive model called RefCLIP, which achieves weakly supervised REC via anchor-based cross-modal contrastive learning and significantly improves the inference speed by 5 times. • We propose the first generic weakly supervised train-ing scheme for common REC models, which can effec-tively boost any REC model using pseudo-labels gen-erated by our RefCLIP. • The proposed RefCLIP outperforms existing ap-poroaches on four benchmarks, and our training scheme also helps previous REC models obtain new weakly supervised SOTA performance.
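The anchor-based contrastive objective can be sketched as follows. Each expression is matched to its best-scoring anchor point in its image, and matched pairs are contrasted against other expressions in the batch. The max-pooling selection over anchors and the symmetric InfoNCE form are illustrative assumptions rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def anchor_text_contrastive_loss(anchor_feats, text_feats, tau=0.07):
    """Sketch of an anchor-based contrastive loss for weakly supervised REC.

    anchor_feats: (B, A, D) anchor-point features from a one-stage detector
    text_feats:   (B, D)    sentence embeddings of the referring expressions
    """
    a = F.normalize(anchor_feats, dim=-1)
    t = F.normalize(text_feats, dim=-1)
    # Similarity of every expression to every anchor of every image: (B_text, B_img, A).
    sim = torch.einsum('bd,cad->bca', t, a)
    # Each image is represented by its best-matching anchor for each expression.
    img_scores = sim.max(dim=-1).values / tau          # (B_text, B_img)
    labels = torch.arange(img_scores.size(0), device=img_scores.device)
    # Symmetric image<->text InfoNCE over the batch.
    return 0.5 * (F.cross_entropy(img_scores, labels) +
                  F.cross_entropy(img_scores.t(), labels))
```

At inference, the anchor with the highest similarity to the expression directly yields the predicted bounding box, with no region proposals, ROI pooling, or NMS.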
Guirguis_NIFF_Alleviating_Forgetting_in_Generalized_Few-Shot_Object_Detection_via_Neural_CVPR_2023
Abstract Privacy and memory are two recurring themes in a broad conversation about the societal impact of AI. These concerns arise from the need for huge amounts of data to train deep neural networks. A promise of Generalized Few-shot Object Detection (G-FSOD), a learning paradigm in AI, is to alleviate the need for collecting abundant training samples of novel classes we wish to detect by leveraging prior knowledge from old classes (i.e., base classes). G-FSOD strives to learn these novel classes while alleviating catastrophic forgetting of the base classes. However, existing approaches assume that the base images are accessible, an assumption that does not hold when sharing and storing data is problematic. In this work, we propose the first data-free knowledge distillation (DFKD) approach for G-FSOD that leverages the statistics of the region of interest (RoI) features from the base model to forge instance-level features without accessing the base images. Our contribution is three-fold: (1) we design a standalone lightweight generator with (2) class-wise heads (3) to generate and replay diverse instance-level base features to the RoI head while fine-tuning on the novel data. This stands in contrast to standard DFKD approaches in image classification, which invert the entire network to generate base images. Moreover, we make careful design choices in the novel finetuning pipeline to regularize the model. We show that our approach can dramatically reduce the base memory requirements, all while setting a new standard for G-FSOD on the challenging MS-COCO and PASCAL-VOC benchmarks.
1. Introduction Object detection (OD) is an integral element in mod-ern computer vision perception systems (e.g., robotics and self-driving cars). However, object detectors [1–8] require abundant annotated data to train, which is labor and time intensive. In some applications requiring rare class de-tection, collecting much data is challenging. Striving to learn in limited data scenarios, few-shot object detection (FSOD) [9] is an uprising field. It mimics the human cog-nitive ability by leveraging prior knowledge from previous †Authors have equally contributed to this work. Corresponding author: [email protected] Figure 1. The base memory requirements for G-FSOD are dra-matically reduced by our framework, while improving the overall detection performance on MS-COCO ( 10-shot). We only store a lightweight generator that synthesizes deep features for the RoI head. BF denotes base-data free finetuning. experiences with abundant base data, to rapidly learn novel classes from limited samples. Despite the success of meta-learning [10–15] and transfer learning [16–19] paradigms in FSOD, most methods prioritize the detection performance of the novel classes while ignoring the catastrophic forget-ting of the base ones. This might lead to critical failure cases in real-life operational perception systems. To address the aforementioned concern, generalized few-shot object detection (G-FSOD) has been introduced to jointly detect the base and novel classes. One of the first ap-proaches to address the G-FSOD task was TFA [16], which finetunes the detector using a balanced set of base and novel class samples while freezing the backbone and the region proposal network (RPN). While this has reduced forgetting, the performance on novel classes has dropped significantly. Since then, a plethora of works have attempted to improve the overall detection performance. DeFRCN [19] proposed a gradient manipulation approach to modify the RPN and RoI head gradients. Retentive R-CNN [22], a knowledge distillation approach and CFA [23], a gradient manipula-tion approach were proposed to explicitly tackle the catas-trophic forgetting of base classes. However, all the above approaches rely on the assumption that base data is avail-able while learning the new classes. This made us raise the following question: How to alleviate forgetting without base data in case of a memory constraint or privacy con-cerns restricting the storage and replay of old base data? This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 24193 Classifier RoI HeadRecord statistics Record statisticsFigure 2. Top: Standard DFKD approach via inverting the entire model [20, 21]. Bottom: An overview of our proposed approach. We abstractly show a few layers in both models. The main dif-ferences are: (1) we synthesize features instead of images, (2) we use a separate generator instead of inverting the model, and (3) we record the class-wise statistics (instead of the full data statistics) before and after the normalization layers in the RoI head. Data-free knowledge distillation (DFKD) is a line of work sharing a similar interest in transferring knowledge without storing raw data. DeepDream [24] pioneered model inversion (MI) work which inverted a pre-trained classi-fier to generate synthetic images for knowledge distilla-tion. 
Since then, various works [20, 21, 25] have followed attempting to generate higher-fidelity images and even in a class-incremental setting [21]. Despite its success in image classification, applying DFKD in G-FSOD comes with several challenges. First, most works revolve around generating synthetic images via MI. In the context of OD and G-FSOD, this entails generating images with bound-ing boxes which inflicts higher computational and memory overhead. Although a recent approach DIODE [25] has ap-plied MI in OD, it cannot be extended to G-FSOD for the following reason. Similar to all the previously mentioned works in DFKD, DIODE needs the statistics of the Batch-Norm(BN) [26] layers which are trained on the detection datasets. However, the backbone in G-FSOD is pre-trained on ImageNet and frozen (except the last ResBlock) during the entire training (unfreezing would change the mature pa-rameters and will reduce the overall performance). Hence, the running means and variances in the BN do not represent the true base data distribution. Contribution: In this work, we propose Neural Instance Feature Forging (NIFF), the first data-free knowledge distil-lation method for G-FSOD. We aim to alleviate forgetting without storing base data to respect privacy restrictions and reduce the overall memory footprint, as shown in Fig. 1.Our two key insights are as follows. First, we show that the statistics of instance-level RoI head features sufficiently represent the distribution of base classes. Second, we show that a standalone lightweight generator can be trained in a distillation fashion to match the gathered statistics and syn-thesize class-wise base features to train a G-FSOD model. This stands in contrast to MI approaches which optimize the pre-trained model to synthesize high-fidelity images. Our contributions are summarized as follows: 1. We forge instance-level features instead of synthesiz-ing images (Fig. 2) as the feature space ( 1024×7×7) is much smaller than the image space ( 3×600×1000 ).
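The following sketch illustrates the core idea of forging instance-level RoI features from recorded class-wise statistics instead of inverting the whole detector into images. The feature size (1024 x 7 x 7), the noise dimension, and the simple first/second-moment matching loss are assumptions standing in for the class-wise statistics actually recorded around the RoI head's normalization layers.

```python
import torch
import torch.nn as nn

class ClassWiseFeatureGenerator(nn.Module):
    """Sketch of a standalone lightweight generator with class-wise heads that
    forges instance-level base features for the RoI head (no base images)."""

    def __init__(self, num_classes, noise_dim=128, feat_shape=(1024, 7, 7)):
        super().__init__()
        c, h, w = feat_shape
        self.feat_shape = feat_shape
        self.noise_dim = noise_dim
        self.trunk = nn.Sequential(nn.Linear(noise_dim, 512), nn.ReLU())
        # One small head per base class on top of a shared trunk.
        self.heads = nn.ModuleList(
            [nn.Linear(512, c * h * w) for _ in range(num_classes)])

    def forward(self, class_id: int, batch: int) -> torch.Tensor:
        z = torch.randn(batch, self.noise_dim, device=self.heads[0].weight.device)
        feats = self.heads[class_id](self.trunk(z))
        return feats.view(batch, *self.feat_shape)

def moment_matching_loss(fake_feats, class_mean, class_var):
    """Match the moments of forged features to the per-class statistics recorded
    from the frozen base model, which is the only 'base data' kept in memory."""
    mu = fake_feats.mean(dim=0)
    var = fake_feats.var(dim=0, unbiased=False)
    return (mu - class_mean).pow(2).mean() + (var - class_var).pow(2).mean()
```

During novel-class finetuning, the frozen generator replays these forged features to the RoI head so the base classes are rehearsed without any stored base images.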
Huang_Improving_Table_Structure_Recognition_With_Visual-Alignment_Sequential_Coordinate_Modeling_CVPR_2023
Abstract Table structure recognition aims to extract the logical and physical structure of unstructured table images into a machine-readable format. The latest end-to-end image-to-text approaches simultaneously predict the two structures by two decoders, where the prediction of the physical structure (the bounding boxes of the cells) is based on the representation of the logical structure. However, the previous methods struggle with imprecise bounding boxes as the logical representation lacks local visual information. To address this issue, we propose an end-to-end sequential modeling framework for table structure recognition called VAST. It contains a novel coordinate sequence decoder triggered by the representation of the non-empty cell from the logical structure decoder. In the coordinate sequence decoder, we model the bounding box coordinates as a language sequence, where the left, top, right and bottom coordinates are decoded sequentially to leverage the inter-coordinate dependency. Furthermore, we propose an auxiliary visual-alignment loss to enforce the logical representation of the non-empty cells to contain more local visual details, which helps produce better cell bounding boxes. Extensive experiments demonstrate that our proposed method can achieve state-of-the-art results in both logical and physical structure recognition. The ablation study also validates that the proposed coordinate sequence decoder and the visual-alignment loss are the keys to the success of our method.
1. Introduction Tables are an essential medium for expressing structural or semi-structural information. Table structure recognition, including recognizing a table's logical and physical structure, is crucial for understanding and further editing a visual table. (Figure 1 compares the bounding boxes predicted by TableFormer (baseline) and VAST (ours): our results are more accurate, which is vital for downstream content extraction or table understanding tasks; the image is cropped from the table with id 7285 from FinTabNet.) The logical structure represents the row-column relation of cells and the spanning information of a cell. The physical structure contains not only the logical structure but also the bounding box or content of the cells, focusing on the exact locations in the image. Table recognition can be implemented by an end-to-end encoder-decoder paradigm. Such methods excel at predicting the logical structure but usually produce less accurate physical structures, i.e., bounding boxes of cells or cell contents. However, bounding box accuracy is essential for downstream tasks, such as text information extraction or table QA. This work designs sequential coordinate decoding and enforces more visual information to produce more accurate bounding boxes. In the coordinate sequence decoder, the start embedding of the non-empty cell is the representation from the HTML sequence decoder. That representation usually contains the global context of the table and has fewer local visual details. Because the local visual appearance is vital for predicting accurate coordinates, we align the representation of non-empty cells from the HTML sequence decoder with the visual features from the CNN image encoder. In particular, a visual-alignment loss is designed to maximize the cosine similarity of the paired visual-HTML representations in the image. In summary, our contributions are threefold. • We propose a coordinate sequence decoder to significantly improve the table's physical structure accuracy upon an end-to-end table recognition system. • We introduce a visual-alignment loss between the HTML decoder and coordinate sequence decoder. It enforces that the representation from the HTML decoding module contains more detailed visual information, which can produce better bounding boxes for the non-empty cells. • We develop an end-to-end sequential modeling framework for table structure recognition. Comparison experiments prove that our method can achieve state-of-the-art performance, and ablation experiments show the effectiveness of our method.
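The visual-alignment objective described above can be written down compactly. The sketch below assumes the non-empty cell representations and their pooled visual features have already been paired by index; the dimensions and the simple (1 - cosine) form are illustrative.

```python
import torch
import torch.nn.functional as F

def visual_alignment_loss(cell_repr: torch.Tensor,
                          roi_visual_feats: torch.Tensor) -> torch.Tensor:
    """Sketch of a visual-alignment loss: maximise the cosine similarity between
    each non-empty cell's HTML-decoder representation and the visual feature
    pooled from the CNN encoder at that cell's region.

    cell_repr:        (N, D) HTML-decoder states of N non-empty cells
    roi_visual_feats: (N, D) pooled visual features of the same cells
    """
    cos = F.cosine_similarity(cell_repr, roi_visual_feats, dim=-1)
    return (1.0 - cos).mean()
```

The coordinate sequence decoder then consumes each aligned cell representation as its start embedding and emits the left, top, right, and bottom coordinates as four successive tokens, so each coordinate can condition on the ones decoded before it.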
Gu_MSINet_Twins_Contrastive_Search_of_Multi-Scale_Interaction_for_Object_ReID_CVPR_2023
Abstract Neural Architecture Search (NAS) has become increasingly appealing to the object re-identification (ReID) community, as task-specific architectures significantly improve retrieval performance. Previous works explore new optimizing targets and search spaces for NAS ReID, yet they neglect the difference in training schemes between image classification and ReID. In this work, we propose a novel Twins Contrastive Mechanism (TCM) to provide more appropriate supervision for ReID architecture search. TCM reduces the category overlap between the training and validation data, and assists NAS in simulating real-world ReID training schemes. We then design a Multi-Scale Interaction (MSI) search space to search for rational interaction operations between multi-scale features. In addition, we introduce a Spatial Alignment Module (SAM) to further enhance the attention consistency when confronted with images from different sources. Under the proposed NAS scheme, a specific architecture is automatically searched, named MSINet. Extensive experiments demonstrate that our method surpasses state-of-the-art ReID methods on both in-domain and cross-domain scenarios. Source code is available at https://github.com/vimar-gu/MSINet.
1. Introduction Object re-identification (Re-ID) aims at retrieving spe-cific object instances across different views [ 39,40,57, 65,70], which attracts much attention in computer vi-sion community due to its wide-range applications. Pre-vious works have achieved great progresses on both super-vised [ 42,49,58] and unsupervised ReID tasks [ 17,50,78], most of which adopts backbone models originally designedfor general image classification tasks [ 20,52]. Recent literature [ 64,75] has shown that applying differ-*Co-corresponding authors hood bumper decorationsluggage carrier side Figure 1. The left panel shows the example activation maps of ResNet50 (1st row) and MSINet (2nd row). The right panel shows the average distances between the most similar 10 negative sam-ples and each query image at the inference. Best viewed in color. ent architectures on ReID leads to large performance vari-ations. Some works employ Neural Architecture Search (NAS) for ReID [ 28,45]. The proposed optimizing targets and search spaces stably improve the model performance, yet the main search scheme still follows traditional NASmethods designed for general classification tasks [ 12,36]. As an open-set task, ReID contains different categories inthe training and validation sets [ 64,71], while the two sets share exactly the same categories in standard classifica-tion tasks [ 10], which is also followed by traditional NAS methods. The incompatibility between search schemes andreal-world training schemes makes the searched architec-ture sub-optimal for ReID. Moreover, ReID is required todistinguish more subtle distinctions among fine-grained in-stances compared with image-level classification [ 48,63]. Some previous works [ 4,44,68,75] have manifested that lo-cal perspectives and multi-scale features are discriminativefor ReID. However, current utilizations of these features aremostly empirically designed, which can be more flexible according to the characteristics of different network layers. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 19243 In this work, we propose a novel NAS scheme aiming at addressing the aforementioned challenges. In order to sim-ulate the real-world ReID training schemes, a Twins Con-trastive Mechanism (TCM) is proposed to unbind the cat-egories of the training and validation sets. An adjustable overlap ratio of categories builds up the compatibility be-tween NAS and ReID, which provides more appropriate su-pervision for ReID architecture search. Moreover, to searchfor more rational utilizations of multi-scale features, we de-sign a Multi-Scale Interaction (MSI) search space. The MSIspace focuses on interaction operations between multi-scalefeatures along the shallow and deep layers of the network, which guides the features to promote each other. Addi-tionally, to further improve the generalization capability, wepropose a Spatial Alignment Module (SAM) to enhance the attention consistency of the model confronted with images from different sources. With the above NAS scheme, we obtain a light-weight yet effective model architecture, de-noted as Multi-Scale Interaction Net (MSINet). We visualize the example activation maps of our pro-posed MSINet and ResNet50 [ 20] trained on V eRi-776 [ 38, 39] in Fig. 1. 
Compared to ResNet50, MSINet focuses on more unique distinctions with specific semantic informa-tion to recognize instances. Besides, MSINet largely in-creases the distance margin between query image and cor-responding negative samples, reflecting extraordinary dis-criminative capability. Extensive experiments demonstrate that MSINet surpasses state-of-the-art (SOTA) ReID meth-ods on both in-domain and cross-domain scenarios. Oursource codes are available in the supplementary material. Our contributions are summarized as follows: • To the best of our knowledge, we are the first to build the NAS search scheme according to the real-world ReID training schemes, which provides more appro-priate supervision for the ReID architecture search. • We propose a novel search space based on the Multi-Scale Interaction (MSI) operations and a Spatial Alignment Module (SAM) to improve the model per-formance on in-domain and cross-domain scenarios. • We construct a light-weight yet effective architec-ture for ReID tasks, denoted as MSINet. With only2.3M parameters, MSINet surpasses ResNet50 [ 20] by 9% mAP on MSMT17 [ 60] and 16% mAP on MSMT17 →Market-1501 [ 69].
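The identity split behind the Twins Contrastive Mechanism can be illustrated with a short sketch. The 50/50 partition, the exact overlap semantics, and the function name below are assumptions; the point is that the fraction of categories shared between the search-time training and validation sets is an adjustable knob, mimicking the open-set nature of real ReID evaluation.

```python
import random

def twins_contrastive_split(identities, overlap_ratio=0.5, seed=0):
    """Sketch: split person identities into search-time training/validation sets
    that share only a controllable fraction of categories."""
    rng = random.Random(seed)
    ids = list(identities)
    rng.shuffle(ids)
    half = len(ids) // 2
    train_ids, rest = ids[:half], ids[half:]
    n_shared = int(overlap_ratio * half)
    # Validation reuses `n_shared` training identities; the rest are unseen.
    val_ids = train_ids[:n_shared] + rest[:half - n_shared]
    return set(train_ids), set(val_ids)

# train_ids, val_ids = twins_contrastive_split(range(1000), overlap_ratio=0.25)
# overlap_ratio=1.0 recovers the closed-set split used by standard NAS;
# overlap_ratio=0.0 makes validation fully open-set, as in real ReID evaluation.
```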
Jiang_HumanGen_Generating_Human_Radiance_Fields_With_Explicit_Priors_CVPR_2023
Abstract Recent years have witnessed the tremendous progress of 3D GANs for generating view-consistent radiance fields with photo-realism. Yet, high-quality generation of human radiance fields remains challenging, partially due to the limited human-related priors adopted in existing methods. We present HumanGen, a novel 3D human generation scheme with detailed geometry and 360° realistic free-view rendering. It explicitly marries the 3D human generation with various priors from the 2D generator and 3D reconstructor of humans through the design of "anchor image". We introduce a hybrid feature representation using the anchor image to bridge the latent space of HumanGen with the existing 2D generator. We then adopt a pronged design to disentangle the generation of geometry and appearance. With the aid of the anchor image, we adapt a 3D reconstructor for fine-grained details synthesis and propose a two-stage blending scheme to boost appearance generation. Extensive experiments demonstrate our effectiveness for state-of-the-art 3D human generation regarding geometry details, texture quality, and free-view performance. Notably, HumanGen can also incorporate various off-the-shelf 2D latent editing methods, seamlessly lifting them into 3D.
1. Introduction We are entering an era where the boundaries of real and virtually generated worlds are dismissing. An epitome of this revolution is the recent rise of 3D-aware and photo-realistic image synthesis in the past several years [5, 6, 11, 16,53,63,91], which combine 2D Generative Adversar-ial Networks (GANs) with neural volume rendering, like neural radiance fields (NeRFs) [43]. But such 3D GANs mainly focus on rigid contents like human/animal faces or CAD models. The further 3D generation of us humans with photo-realism is more attractive, with numerous applica-tions in VR/AR or visual effects. High-quality 3D human generative models should ide-ally generate 3D-aware humans with the following charac-teristics: (1) detailed geometry, (2) photo-realistic appear-z 2D Generator 2D Editing Reconstruction Explicit Human Priors Figure 1. The proposed HumanGen can generate 3D humans with fine-detailed geometry and appearance while seamlessly lifting various 2D latent editing tools into 3D. ance, and (3) even supporting 360◦free-view rendering. Yet, it remains extremely challenging, mainly due to the significantly higher diversity of human apparel and skele-tal pose. Only very recently, a few work explore auto-decoding [10] and 3D GANs [3, 22,85] for human gener-ation by using the parametric human model like SMPL [39] as priors. But such parametric human prior lacks sufficient geometry details, and the adopted neural rendering in these methods does not guarantee that meaningful 3D geometry can be generated, further leading to appearance artifacts. Besides, these 3D human generators are trained with lim-ited human datasets that lack diversity [68] or suffer from imbalanced viewing angles (most are front views) [13, 38]. In a nutshell, existing methods fail to fulfill all the afore-mentioned three characteristics for 3D human generation. We observe that 3D human generation can benefit from more explicit priors from other research domains of human modeling, except for the SMPL prior adopted in existing methods. Specifically, with the recent large-scale dataset SHHQ [13], the 2D human generators [29–31] achieve more decent synthesis results than the 3D ones. And var-ious downstream 2D editing tools are available by disentan-gling the latent spaces [55, 58,65,78]. These abilities of 2D generation and subsequent can significantly benefit the 3D human generation if their latent spaces can be bridged. Be-sides, recent advances in monocular 3D human reconstruc-tion [2, 60] have achieved more fine-grained geometry de-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 12543 tails than the implicit geometry proxy in current 3D human generators. Yet, there lacks a well-designed mechanism to explicitly utilize the rich human priors from both 2D gener-ator and 2D reconstructor for 3D human generation. In this paper, we present HumanGen – a novel neural scheme to generate high-quality radiance fields for 3D hu-mans from 2D images, as shown in Fig. 1. In stark contrast with existing methods that only use SMPL, our approach explicitly utilizes richer priors from the top-tier 2D gen-eration and 3D reconstruction schemes. 
As a result, our approach not only enables more realistic human genera-tion with detailed geometry and 360◦free-view ability, but also maintains the compatibility to existing off-the-shelf 2D editing toolbox based on latent disentanglement. Our key idea in HumanGen is to organically leverage a 2D human generator and a 3D human reconstructor as explicit priors into a 3D GAN-like framework. Specifi-cally, we first introduce a hybrid feature representation of the generative 3D space, which consists of the tri-plane fea-tures from EG3D [5] as well as a 2D photo-real human im-age (denoted as “anchor image”) generated through the pre-trained 2D generator. Note that we adopt separated Style-GAN2 [31] architectures to generate both the tri-plane fea-ture maps and the anchor image. But they share the same latent mapping network, so as to bridge and anchor the la-tent space of our 3D GAN to the pre-trained 2D human generator. Then, based on such hybrid representation, we design our 3D human generator into the pronged geometry and appearance branches. In the geometry branch, we ex-plicitly utilize a pre-trained 3D reconstructor PIFuHD [60] to extract pixel-aligned features from the anchor image and provide extra fine-grained geometry supervision for our Hu-manGen. Note that the original PIFuHD encodes geometry as an implicit occupancy field. Thus, we propose a geome-try adapting scheme to turn it into a generative version with signed distance field (SDF) output, so as to support efficient and high-resolution volume rendering with sphere tracing. For the appearance branch, we propose to learn an appear-ance field and a blending field from both the pixel-aligned and tri-plane features. Note that [18, 59] only use the pixel-aligned feature, thus we include the tri-plane features which “sculpt” richer feature space for learning sharper texture. Then, we adopt a two-stage blending scheme to fully use the rich texture information in the anchor image. For our GAN training procedure, we adopt similar training strategy like EG3D [5] and introduce additional front and back consis-tency supervision to enhance the generated texture details. Besides, we observe that existing 2D human generator StyleGAN2 [31] trained on the large-scale SHHQ [13] can potentially generate diverse human images including side-views and even back-views. Thus, we train our HumanGan using an augmented dataset from SHHQ by using the pre-trained 2D generator to cover 360◦viewing angles. Oncetrained, our HumanGen enables high-quality 3D human generation. As an additional benefit, it shares the same la-tent mapping with the 2D generated anchor image. Thus, using the anchor image, we can seamlessly upgrade off-the-shelf 2D latent editing methods into our 3D setting. We showcase various 3D effects via convenient anchor image editing. To summarize, our main contributions include: • We present a novel 3D-aware human generation scheme, with detailed geometry and 360◦more realis-tic free-view rendering than previous methods, achiev-ing significant superiority to state-of-the-arts. • We propose a hybrid feature representation using an anchor image with shared latent space to bridge our 3D GAN with the existing 2D generator. • We propose a pronged design for appearance/geometry branches, and adapt a 3D reconstructor to aid the ge-ometry branch for fine-grained details synthesis. • We introduce an implicit blending field with two-stage blending strategy to generate high-quality appearance.
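The bridging idea of sharing one latent mapping between the 2D anchor-image generator and the 3D tri-plane generator can be sketched as follows. The module interfaces below are placeholders (real StyleGAN2/EG3D generators expose richer signatures with style mixing, noise inputs, and camera conditioning), so this is only a structural illustration under those assumptions.

```python
import torch
import torch.nn as nn

class SharedLatentBridge(nn.Module):
    """Sketch: one mapping network produces a style code w that drives both a
    frozen pre-trained 2D human generator (the anchor image) and a tri-plane
    feature generator for the 3D branch, so both live in the same latent space."""

    def __init__(self, mapping: nn.Module, anchor_gen: nn.Module,
                 triplane_gen: nn.Module):
        super().__init__()
        self.mapping = mapping            # z -> w (shared latent space)
        self.anchor_gen = anchor_gen      # w -> RGB anchor image (2D prior)
        self.triplane_gen = triplane_gen  # w -> tri-plane feature maps

    def forward(self, z: torch.Tensor):
        w = self.mapping(z)
        with torch.no_grad():             # the 2D generator is used as a frozen prior
            anchor_image = self.anchor_gen(w)
        triplanes = self.triplane_gen(w)
        # Downstream (not shown): pixel-aligned features from the anchor image feed
        # the geometry branch; tri-planes plus the anchor feed appearance/blending.
        return anchor_image, triplanes
```

Because the anchor image shares w with the 3D branch, editing w with an off-the-shelf 2D latent editing method propagates the edit to the generated 3D human.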
Gao_Adaptive_Zone-Aware_Hierarchical_Planner_for_Vision-Language_Navigation_CVPR_2023
Abstract The task of Vision-Language Navigation (VLN) is for an embodied agent to reach the global goal according to the instruction. Essentially, during navigation, a series of sub-goals need to be adaptively set and achieved, which is naturally a hierarchical navigation process. However, previous methods leverage a single-step planning scheme, i.e., directly performing a navigation action at each step, which is unsuitable for such a hierarchical navigation process. In this paper, we propose an Adaptive Zone-aware Hierarchical Planner (AZHP) that explicitly divides the navigation process into two heterogeneous phases, i.e., sub-goal setting via zone partition/selection (high-level action) and sub-goal executing (low-level action), for hierarchical planning. Specifically, AZHP asynchronously performs the two levels of action via the designed State-Switcher Module (SSM). For the high-level action, we devise a Scene-aware adaptive Zone Partition (SZP) method to adaptively divide the whole navigation area into different zones on-the-fly. Then the Goal-oriented Zone Selection (GZS) method is proposed to select a proper zone for the current sub-goal. For the low-level action, the agent conducts multiple navigation-decision steps in the selected zone. Moreover, we design a Hierarchical RL (HRL) strategy and auxiliary losses with curriculum learning to train the AZHP framework, which provides effective supervision signals for each stage. Extensive experiments demonstrate the superiority of our proposed method, which achieves state-of-the-art performance on three VLN benchmarks (REVERIE, SOON, R2R).
1. Introduction In recent years, Embodied-AI (E-AI) research has at-tracted a surge of interest within the computer vision, nat-ural language processing and robotics communities since its interdisciplinary nature. The long-term goal of E-AI re-search is to build intelligent agents that can interact with humans to complete assigned tasks. In this paper, we fo-*Corresponding author Instruction(globalgoal):Bringmethewhitepillow (a)(b)Single-stepActionHigh-levelActionLow-levelActionHigh-levelActionLow-levelAction①Sub-goal ②Sub-goal Global-goal ActionSpace (c)Figure 1. Given a goal-oriented/semantic-level instruction, (a) pre-vious methods essentially adopt a singel-step navigation paradigm, i.e., directly taking an action from the action space according to the global goal at each step; (b)(c) we instead propose a hierarchi-cal navigation paradigm, containing high-and low-level actions to adaptively set and achieve a series of sub-goals. cus on the Vision-Language Navigation (VLN) task, one of the most fundamental E-AI topics, where embodied agents need to navigate in a photorealistic 3D environment (gener-ally unseen) according to the natural language instruction. The instruction given to the agent is mainly two types, i.e., step-by-step instruction ( e.g. R2R) and goal-oriented instruction ( e.g. REVERIE and SOON). The latter is more practical for the home assistant robot since people usually do not provide fine-grained commands, but also more chal-lenging. Firstly, the goal-oriented instruction contains po-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 14911 tential hierarchical information . As shown in Figure 1, the global goal is “bring me the white pillow”, which indicates the agent needs to complete several potential sub-goals, e.g., leaving the current room, finding the bedroom, locating the white pillow. Thus it is essentially a hierarchical navigation process, where the high-level process is sub-goal setting and the low-level process is sub-goal executing . Secondly, sub-goal means reaching a sub-target in a sub-region, which requires the agent to divide the scene into several zones and choose the proper zone for the current sub-goal. Impor-tantly, the sub-goal depends not only on the instruction, and an appropriate sub-goal needs to be set based on the agent’s current state, which means the agent needs to conduct zone partition and selection adaptively during navigation. For ex-ample, in Figure 1(b), when the agent is in the living room, the sub-goal is set as “finding the exit in the exit area (red zone)”. Thirdly, it is non-trivial to learn such a hierarchical navigation policy, especially given that there are no expert demonstrations for teaching the high-level process. However, the dominant paradigm of current state-of-the-art VLN methods is essentially a singel-step planning paradigm as shown in Figure 1(a). It directly takes one step of navigation action at each time, according to the action space and the global goal. Such a paradigm does not ex-plicitly model the hierarchical planning nature of the VLN task, largely limiting the long-horizon decision ability. To address the issues, we propose an Adaptive Zone-aware Hierarchical Planner (AZHP) based on our main idea, i.e., building a novel hierarchical planning framework for the VLN task. 
Firstly, AZHP models the navigation process as a hierarchical action-making process containing high-level and low-level actions. During navigation, the high-level action aims to set sub-goals, and the low-level action aims to complete the sub-goals accordingly. Specifically, the high-level action divides the whole scene into differ-ent zones and selects a proper zone for navigation based on the current state, e.g., the green zone (hallway) in Fig-ure 1(c). Then the low-level action is applied to execute specific navigation decision multi-steps in the selected zone until reaching the sub-target. Secondly, for the high-level action, we propose a Scene-aware adaptive Zone Partition (SZP) method to adaptively divide the global action map into several zones on-the-fly, according to the position and observations of each viewpoint. Note that the action map is a maintained topological map that records the historical tra-jectory and observations. Also, we design a Goal-oriented Zone Selection (GZS) method to select a specific zone ac-cording to the instruction and zone attributes. Besides, a State-Switcher Module (SSM) is placed to decide whether the current sub-goal is achieved and switch to the next sub-goal, supporting the asynchronous scheme. Thirdly, since there is no direct supervision signal for high-level action training, we propose a Hierarchical Reinforcement Learn-ing (HRL) strategy to provide cooperative rewards. Besides, we design auxiliary losses with a curriculum learning strat-egy to improve the learning robustness further. In summary, we make the following contributions. (i) We propose AZHP, which conducts a hierarchical naviga-tion paradigm via setting two-level actions, to solve the long-horizon planning VLN task. To the best of our knowl-edge, AZHP is the pioneering work investigating hierar-chical planning strategy for the VLN task. (ii) We devise SZP and GZS for high-level action, where SZP adaptively divides scenes into several zones on-the-fly, and GZS se-lects the corresponding zone for a specific sub-goal. Also, SSM is designed to support asynchronous switching be-tween high/low-level actions. (iii) To construct and learn the hierarchical planning policy, we design an HRL strat-egy and auxiliary losses with a curriculum learning man-ner. Superior performance on three datasets demonstrates the method’s effectiveness. Code is available at: https: //github.com/chengaopro/AZHP .
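The asynchronous two-level control flow can be illustrated with a short sketch. The method names on the agent (partition_zones, select_zone, low_level_step, subgoal_reached, done) are hypothetical placeholders standing in for the SZP, GZS, low-level policy, and SSM modules, and the step budget is illustrative.

```python
def hierarchical_navigate(agent, instruction, max_steps=80):
    """Sketch of a hierarchical navigation loop: a high-level action partitions the
    explored map into zones and selects one as the current sub-goal; low-level
    actions then step inside that zone until a state-switcher decides the
    sub-goal is reached."""
    state = agent.reset(instruction)
    steps = 0
    while steps < max_steps and not agent.done(state):
        # High-level action: scene-aware zone partition (SZP) followed by
        # goal-oriented zone selection (GZS) sets the current sub-goal.
        zones = agent.partition_zones(state)
        target_zone = agent.select_zone(zones, instruction, state)
        steps += 1  # count the high-level decision against the budget
        # Low-level actions: navigate inside the selected zone until the
        # state-switcher (SSM) decides the sub-goal is achieved.
        while steps < max_steps and not agent.subgoal_reached(state, target_zone):
            state = agent.low_level_step(state, target_zone, instruction)
            steps += 1
    return state
```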
Go_Towards_Practical_Plug-and-Play_Diffusion_Models_CVPR_2023
Abstract Diffusion-based generative models have achieved re-markable success in image generation. Their guidance for-mulation allows an external model to plug-and-play con-trol the generation process for various tasks without fine-tuning the diffusion model. However, the direct use of pub-licly available off-the-shelf models for guidance fails due to their poor performance on noisy inputs. For that, the existing practice is to fine-tune the guidance models with labeled data corrupted with noises. In this paper, we ar-gue that this practice has limitations in two aspects: (1) performing on inputs with extremely various noises is too hard for a single guidance model; (2) collecting labeled datasets hinders scaling up for various tasks. To tackle the limitations, we propose a novel strategy that lever-ages multiple experts where each expert is specialized in a particular noise range and guides the reverse process of the diffusion at its corresponding timesteps. However, as it is infeasible to manage multiple networks and uti-lize labeled data, we present a practical guidance frame-work termed Practical Plug-And-Play (PPAP ), which lever-ages parameter-efficient fine-tuning and data-free knowl-edge transfer. We exhaustively conduct ImageNet class con-ditional generation experiments to show that our method can successfully guide diffusion with small trainable pa-rameters and no labeled data. Finally, we show that im-age classifiers, depth estimators, and semantic segmenta-tion models can guide publicly available GLIDE through our framework in a plug-and-play manner. Our code is available at https://github.com/riiid/PPAP .
1. Introduction Recently, diffusion-based generative models [49] have shown great success in various domains, including image generation [14, 44, 45], text-to-speech [21, 40], and text generation [32].
Figure 1. Overview of our framework. Practical Plug-And-Play (PPAP) enables the diffusion model to be guided by leveraging off-the-shelf models. Images shown below are generated by guiding the unconditional GLIDE [37] with DeepLabV3 [4] (segmentation map), ResNet50 [15] (image class), and MiDaS [43] (depth map) in a plug-and-play manner.
Specifically, for image generation, recent works have shown that diffusion models are capable of generating high-quality images comparable to those generated by GANs [8, 12], while not suffering from mode collapse or training instabilities [38]. In addition to these advantages, their formulation allows external model guidance [8, 49, 53], which guides the generation process of diffusion models towards the desired condition. Since guided diffusion leverages external guidance models and does not require further fine-tuning of the diffusion model, it holds the potential for cheap and controllable generation in a plug-and-play manner. For example, previous approaches use an image classifier for class-conditional image generation [8, 53], a fashion understanding model for fashion image editing [28], and a vision-language model for text-based image generation [1, 37]. From these, if publicly available off-the-shelf models can be used for guidance, one can easily apply one diffusion model to various generation tasks.
For this purpose, an existing practice is to fine-tune the external off-the-shelf model on a noisy version of the training dataset [8, 12], to adapt the model to the noisy latent images encountered during the diffusion process. However, we argue that such a practice poses two challenges for plug-and-play generation: (1) a single guidance model is insufficient to make predictions on inputs corrupted with varying degrees of noise, namely a too difficult task; and (2) it requires a labeled training dataset, which becomes a major hurdle whenever leveraging an off-the-shelf model.
In this paper, we first investigate the behaviors of classifiers under varying degrees of noise to understand the first challenge. On one hand, guidance models trained on corrupted images with heavy noise categorize images based on coarse structures. As a result, such a model would guide the diffusion model to generate essential skeletal features. Meanwhile, guidance models trained on cleaner images capture finer details in the images, guiding the diffusion model to work on finishing touches. Based on these key observations, we propose a novel multi-expert strategy that uses multiple guidance models, each fine-tuned to specialize in a specific noise region.
Despite the effectiveness of the multi-expert strategy, it still requires managing multiple networks and utilizing labeled data whenever applying new off-the-shelf models for various generation tasks.
For more practical plug-and-play guidance of the diffu-sion model with multi-experts strategy, we introduce the framework called Practical Plug-And-Play (PPAP). First, to prevent the size of guidance models from growing pro-hibitively large due to the multi-experts strategy, we lever-age a parameter-efficient fine-tuning scheme that can adapt off-the-shelf models to noisy images while preserving the number of parameters. Second, we transfer the knowledge of the off-the-shelf model on clean diffusion-generated data to the expert guidance models, thereby circumventing the need for collecting labeled datasets. Our empirical results validate that our method signifi-cantly improves performance on conditional image genera-tion with off-the-shelf models with only small trainable pa-rameters and no labeled data. We also showcase various applications with the publicly available diffusion model, GLIDE [37], by leveraging off-the-shelf image classifiers, depth estimators, and semantic segmentation models in a plug-and-play manner.
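To illustrate how per-noise-range experts could plug into classifier-guided sampling, here is a minimal sketch. The `diffusion.p_mean_variance` wrapper, the `experts` list, and the guidance scale are assumptions for illustration; they are not the released PPAP interfaces, and the actual framework additionally uses parameter-efficient adapters and data-free knowledge transfer to obtain the experts.

```python
import torch

# Sketch of multi-expert classifier guidance: one expert guidance model per
# noise range, selected by the current timestep. `diffusion` is a hypothetical
# wrapper exposing the usual DDPM posterior mean/variance.

def guided_sample(diffusion, unet, experts, y, shape, num_steps=1000,
                  scale=2.5, device="cuda"):
    """experts[k] handles the k-th (equally sized) timestep range; y is the target class."""
    x = torch.randn(shape, device=device)
    for t in reversed(range(num_steps)):
        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)
        mean, var = diffusion.p_mean_variance(unet, x, t_batch)

        # Pick the expert whose noise range contains this timestep.
        k = min(t * len(experts) // num_steps, len(experts) - 1)

        # Gradient of log p(y | x_t) from that expert, used to shift the mean.
        with torch.enable_grad():
            x_in = x.detach().requires_grad_(True)
            log_probs = torch.log_softmax(experts[k](x_in, t_batch), dim=-1)
            selected = log_probs[torch.arange(shape[0], device=device), y].sum()
            grad = torch.autograd.grad(selected, x_in)[0]

        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + scale * var * grad + var.sqrt() * noise
    return x
```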
Cho_PartDistillation_Learning_Parts_From_Instance_Segmentation_CVPR_2023
Abstract We present a scalable framework to learn part segmentation from object instance labels. State-of-the-art instance segmentation models contain a surprising amount of part information. However, much of this information is hidden from plain view. For each object instance, the part information is noisy, inconsistent, and incomplete. PartDistillation transfers the part information of an instance segmentation model into a part segmentation model through self-supervised self-training on a large dataset. The resulting segmentation model is robust, accurate, and generalizes well. We evaluate the model on various part segmentation datasets. Our model outperforms supervised part segmentation in zero-shot generalization performance by a large margin. When finetuned on target datasets, our model also outperforms its supervised counterpart and other baselines, especially in the few-shot regime. Finally, our model provides a wider coverage of rare parts when evaluated over 10K object classes. Code is at https://github.com/facebookresearch/PartDistillation. (This work was done during Jang Hyun Cho's internship at Meta AI.)
1. Introduction The world of object parts is rich, diverse, and plentiful. Yet, even the most successful part segmentation benchmarks [10, 22] focus on only the few most prominent image classes, and are orders of magnitude smaller than corresponding object instance segmentation benchmarks [21, 31]. Parts are harder to detect, annotate, and properly define.
In this paper, we show that instance segmentation models, and indirectly much larger instance segmentation datasets, provide plentiful supervision for part segmentation. Specifically, we show that the penultimate layer of a pre-trained instance segmentation model readily groups parts across a wide class of instances. We distill this part information from an instance segmentation model into a dedicated part segmentation framework, in a two-stage process we call PartDistillation. In the first stage, our model learns to segment all possible parts in a class-agnostic fashion. We bootstrap an iterative self-training process from clustered embeddings of an instance segmentation model. The self-supervised nature of this process allows us to scale part discovery to 10K object classes in 10M images without any part-level supervision. In the second stage, our method learns to group the discovered parts of each object category independently into object-specific part clusters. Figure 1 shows the result of this two-stage process.
Unlike traditional self-training methods [39, 43, 47] that rely on supervised part labels, we distill the part information from a pre-trained instance segmentation model. In this framework, self-training increases the consistency between the different potential part segmentations and boosts the noisy supervisory signal. Our model makes full use of powerful instance segmentation architectures [11, 12, 25] for both supervision and part segmentation itself. We show that PartDistillation outperforms existing unsupervised methods by a large margin. It is very label-efficient in few-shot training, even compared to supervised models trained on existing labelled part segmentation datasets.
Finally, we verify that the part discovery qual-ity is consistent beyond a narrow set of classes in exist-ing datasets. We go through manual evaluation process and show that 1) PartDistillation discover more consistent parts compared to supervised model and 2) the precision stays the same when scaled to 10K classes. 2. Related Work Self-supervised learning aims to learn a general feature representation for many downstream vision tasks by solv-ing a proxy task such as instance discrimination [8,9,24,42] and image reconstruction [1, 23]. The learned representa-tion is then finetuned either on the same dataset with few labels or on different datasets and tasks. Other methods directly solve a task without labels such as k-NN classifi-cation [5, 42], image retrieval [3, 4], and image segmenta-tion [5, 14, 29, 40]. PartDistillation directly solves part seg-mentation; we show strong zero-shot performance on un-known datasets and highly label-efficient when fine-tuned. Unsupervised part segmentation. Some prior works tack-les part segmentation in purely unsupervised setting [15,28, 33]. They use a discriminative model to minimize pixel-level contrastive loss and an equivariance loss across views to assign unique labels on different part regions. These models work best if training images contain a single object category centered in an image, and thus do not scale grace-fully. In contrast, PartDistillation learns part segmentation from instance segmentation. It uses object-level masks and region-level representation similar to [2, 11, 25]. In consid-ers features exclusively within a detected instance, enabling the model to learn directly from crowded and scene-centric in-the-wild images. Self-training boosts the performance of a pre-trained model on large-scale unlabelled data. Self-training starts with an initial model trained either with a small portion of labelled data or from self-supervision. It then train another model that predicts the same output as the initial model from a strongly augmented input. This may significantlyimprove the robustness and performance of the resulting model [20,39,43,47]. PartDistillation can be best described as a self-training method. One notable difference is that we supplement the initial annotated labels with generically mined localization derived from pixel-level feature group-ing within each object mask. We show that features from a model trained to solve object instance segmentation has surprisingly accurate part-level information which a simple grouping algorithm is able to extract. Query-based detection and segmentation. Detection Transformer (DETR) [2] rephrases the problem of object detection as a query-based cross-attention mechanism. A set of queries is transformed into object-level representa-tion as a single vector by attending to the feature map of a given image through transformer decoders. PartDistillation adapt this framework [2, 11, 12, 46] and learn to represent part with a set of queries. This allows to decouple localiza-tion and classification over two different stages of training. 3. Preliminaries Self-training considers a small set of images and their la-bels, and a large set of unlabeled images. It starts from a teacher model pre-trained on the available labeled data. The initial model uses a supervised training objective. Self-training then fits a separate student model to the combined supervised data and a corpus of unsupervised data. On the supervised data, it uses the same supervised loss. 
On the unsupervised data, it uses the signal from the teacher, while heavily augmenting the student's inputs. During training, the teacher is periodically updated from a snapshot of the student model. Without such an update, self-training closely resembles model distillation [13, 27]. Variants of self-training [20, 41, 47] pre-train on a different task or use self-supervision. Self-training leads to more discriminative and robust features for the final system [3, 4, 20, 39, 47].
Query-based segmentation. Mask2Former is one of the recent methods that introduced the idea of a query-based representation of (object instance) segments in an image [11, 12]. Mask2Former starts by encoding an input image $I$ into an intermediate feature representation $F$ using an encoder network $E: I \to F$. From this feature representation, Mask2Former transforms a fixed set of object queries (learned parameter vectors) $q^o_1, \dots, q^o_{N_o}$ into object instance masks $M^o_1, \dots, M^o_{N_o}$ with corresponding objectness scores $s^o_1, \dots, s^o_{N_o}$ through a decoder network $D^o: (q^o_i, F) \to (M^o_i, s^o_i, f^o_i)$. In addition, each output is associated with a feature vector $f^o_i$, an abstract representation of the object instance. Mask2Former uses this feature vector to classify objects into pre-defined object categories, $c^o_i \in \mathcal{C}$, through a classification head. Our PartDistillation makes full use of the query-based Mask2Former. However, instead of producing object instance queries, we produce queries for each potential object part in an image. In the next section, we show how to use a variant of self-training, called PartDistillation, to train a query-based segmentation model for object parts without using any part annotations.
Figure 2. Overview of PartDistillation. (a) First stage: part-proposal learning; (b) second stage: part ranking. Left: In the first stage, a transformer encoder produces instance segmentation features which we group into class-agnostic part segments, part proposals, as described in Sec. 4.2. We then train a separate transformer decoder bootstrapped from these part segments and improved through self-training. Right: In the second stage, we assign part labels for all part regions in a class by clustering across the dataset and ranking by the density estimates of the clusters. We call this process class-specific part ranking.
Figure 3. Self-training not only improves localization but also discovers new parts. Left: clustered part regions. Right: final PartDistillation prediction after self-training.
4. PartDistillation Our PartDistillation architecture extends a standard instance segmentation model [11]. We learn an additional query-based part proposal decoder, and an object-class-specific ranking function for each part proposal. Sec. 4.1 presents the exact architecture used for part segmentation. Sec. 4.2 highlights the training objective of the part proposal mechanism, while Sec. 4.3 shows the training of the object-class-specific ranking. Both part proposal and object-class-specific ranking are learned from instance labels alone, and do not use any dedicated part labels. We base all our experiments on a Mask2Former model trained using the open-vocabulary Detic [45] model. See Fig. 2 for an overview.
4.1. A transformer-based part segmentation model The basic PartDistillation architecture closely follows a Mask2Former [11] object instance segmentation model. We start from a pre-trained instance segmentation model with a fixed encoder $E$ and object instance decoder $D^o$.
We use both as is and do not further fine-tune or modify them. Instead, we learn a separate part decoder $D^p: (q^p_i, F) \to (M^p_i, s^p_i, f^p_i)$ for a set of generic part queries $q^p_1, \dots, q^p_{N_p}$. For each part query, we produce a part mask $M^p_i$, a score $s^p_i$, and a feature representation $f^p_i$. Here, the score $s^p_i$ highlights how likely an output mask corresponds to a valid part. At this stage, parts are not associated with individual instances or object classes. Instead, they are shared among all instances and classes in an image. This helps keep the number of potential part queries low, and allows parts to generalize among different object classes. In a second stage, we assign each part proposal to its closest object instance, and rescore the part in the context of the object's category. For each part query $q^p_i$, we measure the overlap (Intersection over Union) $O$ between the part mask $M^p_i$ and all object masks $M^o_j$ and assign the
part to the highest overlapping object $a_i$:
$$a_i = \arg\max_j O(M^o_j, M^p_i). \qquad (1)$$
Here we use the open-vocabulary Detic model to cover a large number of object classes. This association provides us with not just an object query, but also its object instance feature $f^o_{a_i}$. We rerank each part proposal using a scoring function $r(f^p_i \mid f^o_{a_i})$. We use an object's class as the primary signal to rank part segmentations. Parts that often appear in specific object classes are likely part of an object. Parts that rarely appear in specific object classes may simply be outliers. The final part score $\hat{s}^p_i = r(f^p_i \mid f^o_{a_i})$ relies fully on the reranked model.
The final part segmentation model closely resembles two-stage object detection and instance segmentation networks. The first stage produces class-agnostic object proposals. A second stage then scores these proposals. The main difference between our setup and two-stage detectors is the training pipeline. In the next two sections, we show how to learn both the part decoder $D^p$ and the reranking function $r(f^p_i \mid f^o_j)$ from just object-instance-level annotations.
4.2. Learning a part decoder from instance segmentation We exploit two different signals to train a part segmentation model from a pre-trained instance segmentation model: First, within each detected instance, the pixel-level feature representation $f^o$ of an instance segmentation model naturally groups pixels of similar parts together. Second, across a dataset, various parts reoccur, shared between different objects and instances. PartDistillation starts by clustering pixel-level features of the penultimate layer of the Mask2Former architecture. Given an object instance mask $M^o_i$, we group pixel-level features $f^o_i$ within that mask by K-Means clustering [36] and obtain class-agnostic part segments $\hat{M}^p_1, \hat{M}^p_2, \dots, \hat{M}^p_k$ for each object instance. We refer to these segments as part proposals. For each object instance, these part proposals follow the inferred instance mask $\hat{M}^p_j \subseteq M^o_i$ and the embedding distance of the mid-level representation $f^o$ of the Mask2Former. The emergence of structured mid-level representations is common among deep networks [44], and as such provides good part-level supervision. However, the resulting grouping is both inconsistent between object instances and noisy within each instance.
We infer a consistent part segmentation by training a class-agnostic query-based part decoder [11] $D^p$ on all part proposals. More precisely, we train a single-class instance segmentation model and treat all mined part proposals as ground-truth masks. We train the model with a binary classification loss and a mask loss similar to the original Mask2Former. However, the initial part proposals from pixel-level feature clustering exhibit significant localization errors, as visualized in Fig. 3.
Self-training. We reduce this noise through self-training and obtain high-quality part proposals. The output of the model is a set of part proposals for each image and the model's confidence scores for the proposals. Additionally, we also obtain decoded query vectors for the proposals, which serve as our part-level representation. We filter out part proposals that do not overlap with the object instance mask $M^o$ in each image or have a low score $s^p$. Self-training reinforces positive part proposals and suppresses the score $s^p$ of negative proposals. The results are clean object-agnostic part proposals, as shown in Fig. 3.
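As an illustration of the two mining steps just described (clustering pixel features inside one instance into part proposals, and assigning each proposal to the object it overlaps most, mirroring Eq. (1)), here is a minimal sketch. It is not the released implementation; the helper names and k are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

def mine_part_proposals(pixel_feats, instance_mask, k=4):
    """pixel_feats: (H, W, C) penultimate-layer features; instance_mask: (H, W) bool
    mask of one detected object. Returns k binary part-proposal masks."""
    ys, xs = np.nonzero(instance_mask)
    feats = pixel_feats[ys, xs]                         # (N, C) features inside the mask
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
    proposals = np.zeros((k,) + instance_mask.shape, dtype=bool)
    proposals[labels, ys, xs] = True                    # one binary mask per cluster
    return proposals

def assign_parts_to_objects(part_masks, object_masks):
    """For each part mask, return the index of the object mask with maximum IoU (Eq. 1)."""
    assignments = []
    for part in part_masks:
        ious = [(part & obj).sum() / max((part | obj).sum(), 1) for obj in object_masks]
        assignments.append(int(np.argmax(ious)))
    return assignments
```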
In the next section, we show how to assign these proposals to individual object classes, to obtain a list of likely object parts.
Table 1. Summary of all datasets used in this work.
Dataset Name          | # Images | # Object Classes | # Part Classes (Avg. #)
PartImageNet-train    | 16,540   | 109              | 40 (~4)
PartImageNet-val,test | 7,555    | 49               | 40 (~4)
Pascal Part-train     | 4,638    | 20               | 50 (~8)
Pascal Part-val       | 4,758    | 20               | 50 (~8)
Cityscapes Part-train | 2,975    | 5                | 9 (~4)
Cityscapes Part-val   | 500      | 5                | 9 (~4)
ImageNet-21K          | ~15M     | ~21K             | n/a
4.3. Learning class-specific part ranking Our aim is to produce a score $r(f^p_i \mid f^o_j)$ of how likely a query $q^p_i$ is part of an object $q^o_j$. We learn this score as a density estimate
$$r_k(f^p_i \mid f^o_j) = \frac{\exp\!\left(-\lVert D^p(f^p_i, f^o_j) - \mu^j_k \rVert^2\right)}{\sum_{l=1}^{N_j} \exp\!\left(-\lVert D^p(f^p_i, f^o_j) - \mu^j_l \rVert^2\right)} \qquad (2)$$
where the softmax considers all parts $l$ assigned to an object $j$. We use an object's class $c^o$ as the main supervisory signal for the above density estimate. Parts that often co-occur with a specific object class in our training set are scored higher; parts that rarely appear in an object category are weighted down. We found a simple weighted k-means-based initial density estimate to be sufficient [38]. During training, we match each pixel $x$ of an object to its most confident part query, $\arg\max_i M^p_i(x)\, s^p_i$. Any query with a score $s^p_i > 0.3$ that covers at least 5% of the area of the object is considered a candidate. More details about postprocessing part candidates are in the supplementary. For each class, we aggregate all candidate queries across the entire training dataset. Next, we use the k-means-based density estimator of Snell et al. [38] to initialize our scoring function, Eq. 2. This density estimate assigns common query features a high initial score and rare ones a low score. The entire procedure, again, does not use any part labels, but instead uses the co-occurrence of parts and object classes over an entire dataset as a supervisory signal.
Final self-training. Similar to part proposals, we again use self-training to boost the performance of the class-specific ranking function, with cluster IDs as class labels. We use the same postprocessing step (area and score thresholds) to refine the pseudo-labels before self-training. More details are in the supplementary.
5. Experiments Datasets. For quantitative evaluation, we use the PartImageNet [22], Pascal Parts [10], and Cityscapes Parts [17, 37] datasets. Table 1 shows a summary of these datasets. PartImageNet is a 158-class subset of the ImageNet-1K dataset [18] with 40 part classes shared across all object categories. The test split of PartImageNet used for evaluation has 49 object categories. All 40 part categories are still present in the test split.
Table 2. We evaluate our single model's predictions against all 10 individual models of DFF [16], SCOPS [28], and Choudhury et al. [15]. We follow the same evaluation protocol as [15], such as the number of parts and the image resizing and cropping. Note that our model has never seen Pascal Part images. Here † means our implementation with a comparable model.
Method            | NMI: sheep horse cow mbike plane bus car bike dog cat | ARI: sheep horse cow mbike plane bus car bike dog cat
DFF               | 12.2 14.4 12.7 19.1 16.4 13.5 9.0 17.8 14.8 18.0 | 21.6 32.3 23.3 37.2 38.3 28.5 24.1 39.1 32.3 37.5
SCOPS             | 26.5 29.4 28.8 35.4 35.1 35.7 33.6 28.9 30.1 33.7 | 46.3 55.7 51.2 59.2 68.0 66.0 67.1 52.4 52.2 46.6
K-means           | 34.5 33.3 33.0 38.9 42.8 37.5 38.4 35.2 40.4 44.2 | 58.3 66.8 59.0 63.1 76.8 66.4 70.6 63.2 70.2 71.9
Choudhury et al.  | 35.0 37.4 35.3 40.5 45.1 38.8 36.8 34.8 46.6 47.9 | 59.8 68.9 59.7 64.7 79.6 67.6 72.7 64.7 73.6 75.4
Choudhury et al.† | 55.2 42.8 60.3 42.5 49.4 45.1 41.1 39.8 51.2 55.4 | 77.4 62.8 81.8 61.5 70.9 74.1 66.4 54.2 86.2 88.9
PartDistillation  | 57.3 62.2 65.5 34.8 58.8 55.6 54.8 53.6 43.6 37.8 | 81.6 89.9 90.0 43.7 88.7 84.1 87.5 74.8 59.0 52.6
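Referring back to the class-specific ranking of Eq. (2) in Sec. 4.3, the following is a minimal sketch of how a part feature could be scored against the per-class prototypes. The decoder call $D^p$ is abstracted away and the use of the maximum softmax weight as the final score is an assumption for illustration, not the paper's exact implementation.

```python
import torch

def rank_part(part_feat, prototypes):
    """part_feat: (C,) decoded part feature for one proposal;
    prototypes: (N, C) k-means prototype centers (mu_l) for the associated class.
    Returns a scalar score in the spirit of Eq. (2)."""
    d2 = ((prototypes - part_feat) ** 2).sum(dim=-1)   # squared distances to all prototypes
    weights = torch.softmax(-d2, dim=0)                # exp(-d2), normalized over prototypes
    return weights.max()                               # weight of the best-matching prototype
```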
Figure 4. Manual evaluation result comparing the supervised method and PartDistillation.
Pascal Parts is another part segmentation dataset with 20 object categories and 50 overall part categories. Each object class has 9 part classes on average. Cityscapes Parts contains 5 object classes that are either person or vehicles. Interestingly, there are a few common object classes between all datasets. However, each dataset has different definitions of parts for those objects. While all our models are only trained on ImageNet-21K with 15M images, we also compare with baseline models trained directly on the train split of the above evaluation datasets. We evaluate all models on the val and test splits of the evaluation datasets.
Baselines. Here we describe all the baselines. In addition to published methods for unsupervised part segmentation like Choudhury et al. [15], we also describe some simple variants of our approach like "one-stage self-training". We also compare with fully supervised models in specific settings. All models are Mask2Former [11] with a SwinL backbone [34], initialized with weights trained on COCO Instance Segmentation [32] unless otherwise specified.
(1) Choudhury et al. segment parts by training a model for a single object class at a time. Hence, we train individual models, one class at a time, for every dataset. We resort to only using the DeepLab [7]-like framework with a SwinL backbone as suggested in their work.
(2) One-stage self-training. We also extend the standard one-stage unsupervised segmentation by clustering with self-training. In particular, we first use K-Means to cluster pixels belonging to all segmented object instances of each object category. We use the part clusters as the initial supervisory signal to run two rounds of self-training.
(3) Part-supervised models. We evaluate models trained with full supervision on a source dataset on a new target dataset. This allows us to compare their generalization ability with our fully unsupervised model.
Implementation Details. Self-training in our model is done with a batch size of 256 over 4 nodes. We use a learning rate of 0.0001 except for the fine-tuning experiments in Table 4. For the part-proposal learning step, we trained our model for 50K iterations, and for 100K iterations for the final self-training during part association. For training Choudhury et al. [15], we closely followed their official implementation. When training on PartImageNet/ImageNet, we chose the best hyper-parameter based on one randomly chosen object category and used it for all other categories. For part-region mining, we set k = 4 and for association we set k = 8. We applied dense-CRF [30] to each mined part offline. We provide more details in the supplementary.
5.1. Evaluation on annotated part datasets We first compare our method against baseline models on PartImageNet, Pascal Parts, and Cityscapes Parts, which have pre-annotated part masks for different object categories. Unsupervised methods (such as our method) associate segmented parts with arbitrary cluster labels. Unlike supervised methods, there is no direct one-to-one correspondence between cluster labels and pre-annotated part labels. NMI and ARI.
The metrics normalized mutual information (NMI) and adjusted Rand index (ARI) were introduced in Choudhury et al. [15] as a way to cope with the above issue. NMI and ARI measure quality with respect to the target annotated part mask labels.
Mean Intersection over Union (mIoU). Another standard way to evaluate unsupervised methods is to first associate each generated cluster with the part label in the dataset whose part masks have the highest mIoU overlap with the segments from the cluster. This allows us to directly adopt mIoU to compare the cluster with the masks from the associated ground-truth part. We provide more details in the supplementary.
Figure 5. An example of the 3×3 grid images shown to annotators for a part cluster generated by our method (object class: laptop). The discovered parts are highlighted in red.
Average Recall. We also evaluate only the localization ability of the model separately. This can be done
Geng_GAPartNet_Cross-Category_Domain-Generalizable_Object_Perception_and_Manipulation_via_Generalizable_and_CVPR_2023
Abstract For years, researchers have been devoted to generalizable object perception and manipulation, where cross-category generalizability is highly desired yet underexplored. In this work, we propose to learn such cross-category skills via Generalizable and Actionable Parts (GAParts). By identifying and defining 9 GAPart classes (lids, handles, etc.) in 27 object categories, we construct a large-scale part-centric interactive dataset, GAPartNet, where we provide rich, part-level annotations (semantics, poses) for 8,489 part instances on 1,166 objects. Based on GAPartNet, we investigate three cross-category tasks: part segmentation, part pose estimation, and part-based object manipulation. Given the significant domain gaps between seen and unseen object categories, we propose a robust 3D segmentation method from the perspective of domain generalization by integrating adversarial learning techniques. Our method outperforms all existing methods by a large margin, no matter on seen or unseen categories. Furthermore, with part segmentation and pose estimation results, we leverage the GAPart pose definition to design part-based manipulation heuristics that can generalize well to unseen object categories in both the simulator and the real world. (*Equal contribution with the order determined by rolling dice. †Corresponding author: [email protected].)
1. Introduction Generalizable object perception and manipulation are at the core of building intelligent and multi-functional robots. Recent efforts on generalizing vision have been devoted to category-level object perception that deals with perceiving novel object instances from known object categories, including object detectors from RGB images [17, 21, 46] and point clouds [5, 19], and category-level pose estimation works on rigid [4, 53] and articulated objects [27, 59]. On the front of generalizable manipulation, complex tasks that involve interacting with articulated objects have also been proposed in a category-level fashion, as in the recent challenge on learning category-level manipulation skills [38]. Additionally, to boost robot perception and manipulation with indoor objects, researchers have already proposed several datasets [37, 57, 61, 66, 68] with part segmentation and motion annotations, and have devoted work to part segmentation [37, 68] and articulation estimation [27].
However, these works all approach the object perception and manipulation problems in an intra-category manner, while humans can well perceive and interact with instances from unseen object categories based on prior knowledge of functional parts such as buttons, handles, lids, etc. In fact, parts from the same classes have fewer variations in their shapes and in the ways that we manipulate them, compared to objects from the same categories. We thus argue that part classes are more elementary and fundamental than object categories, and that generalizable visual perception and manipulation tasks should be conducted at the part level.
Then, what defines a part class? Although there is no single answer, we propose to identify part classes that are generalizable in both recognition and manipulation. After careful thought and expert design, we propose the concept of Generalizable and Actionable Part (GAPart) classes. Parts from the same GAPart class share similar shapes, which allows generalizable visual recognition; parts from the same GAPart class also have aligned actionability and can be interacted with in a similar way, which ensures minimal human effort when designing interaction guidance to achieve generalizable and robust manipulation policies.
Along with the GAPart definition, we present GAPartNet, a large-scale interactive part-centric dataset where we gather 1,166 articulated objects from the PartNet-Mobility dataset [61] and the AKB-48 dataset [32]. We put in great effort in identifying and annotating semantic labels for 8,489 GAPart instances. Moreover, we systematically align and annotate the GAPart poses, which we believe serve as the bridge between visual perception and manipulation. Our class-level GAPart pose definition highly couples the part poses with how we want to interact with the parts. We show that this is highly desirable – once the part poses are known, we can easily manipulate the parts using simple heuristics, as sketched below.
Based on the proposed dataset, we further explore three cross-category tasks based on GAParts: part segmentation, part pose estimation, and part-based object manipulation, where we aim at recognizing and interacting with the parts from novel objects in both known categories and, moreover, unseen object categories.
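To give a flavor of what "simple heuristics from a known part pose" can look like, here is an illustrative sketch for a handle-like part. The axis conventions (`approach` normal to the part's front face, `lateral` along the handle) and the numerical offsets are assumptions for illustration only; they are not the paper's GAPart pose definition or code.

```python
import numpy as np

def handle_pull_heuristic(part_center, approach, lateral, pull_dist=0.15):
    """Return a simple grasp pose and pull trajectory for a handle-like GAPart,
    given an estimated oriented part pose (center + two axes)."""
    approach = approach / np.linalg.norm(approach)
    lateral = lateral / np.linalg.norm(lateral)
    grasp_position = part_center + 0.02 * approach      # stand off slightly from the face
    grasp_axis = lateral                                 # close the gripper across the handle
    pull_direction = approach                            # pull away from the front face
    waypoints = [grasp_position + t * pull_dist * pull_direction
                 for t in np.linspace(0.0, 1.0, 5)]      # straight-line pull waypoints
    return grasp_position, grasp_axis, waypoints
```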
In this work, we propose to use learning-based methods to deal with perception tasks, after which, based on the GAPart definition, we devise simple heuristics to achieve cross-category object manipulation. However, different object categories may contain differ-ent kinds of GAParts and provide different contexts for the parts. Each object category thus forms a unique domain for perceiving and manipulating GAParts. Therefore, all three tasks demand domain-generalizable methods that can work on unseen object categories without seeing them duringtraining, which is very challenging for existing vision and robotic algorithms. We thus consult the generalization liter-ature [12,13,25] and propose to learn domain-invariant rep-resentation, which is often achieved by domain adversarial learning with a domain classifier. During training, the clas-sifier tries to distinguish the domains while the feature ex-tractor tries to fool the classifier, which encourages domain-invariant feature learning. However, it is highly non-trivial to adopt adversarial learning in our domain-invariant fea-ture learning, due to the following challenges. 1) Handling huge variations in part contexts across different domains. The context of a GAPart class can vary significantly across different object categories. For example, in training data, round handles usually sit on the top of lids for the Cof-feeMachine category, whereas for the test category Table, round handles often stand to the front face of the drawers. To robustly segment GAParts in objects from unseen cate-gories, we need the part features to be context-invariant. 2) Handling huge variations in part sizes. Parts from different GAPart classes may be in different sizes, e.g., a button is usually much smaller than a door. Given that the input is a point cloud, the variations in part sizes will result in huge variations in the number of points across different GAParts, which makes feature learning very challenging. 3) Han-dling the imbalanced part distribution and part-object rela-tions. Object parts in the real world distribute naturally un-evenly and a particular part class may appear with different frequencies throughout various object categories. For ex-ample, there can be more buttons than doors on a washing machine while the opposite is true in the case of a storage furniture. This imbalanced distribution also adds difficulties to the learning of domain-invariant features. Accordingly, we integrate several important techniques from domain adversarial learning. To improve context in-variance, we propose a part-oriented feature query tech-nique that mainly focuses on foreground parts and ignores the background. To handle diverse part sizes, we propose a multi-resolution technique. Finally, we employ the fo-cal loss to handle the distribution imbalance. Our method significantly outperforms previous 3D instance segmenta-tion methods and achieves 76.5% AP50 on seen object cat-egories and 37.2% AP50 on unseen categories. To summarize, our main contributions are as follows: 1. We provide the concept of GAPart and present a large-scale interactive dataset, GAPartNet , with rich part seman-tics and pose annotations that facilitates generalizable part perception and part-based object manipulation.
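The domain-adversarial component described above follows the standard gradient-reversal formulation; a minimal sketch is given below. The module sizes and the idea of treating each object category as a domain follow the text, but the code is illustrative and is not the GAPartNet implementation (which additionally uses part-oriented feature queries, a multi-resolution scheme, and the focal loss).

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None   # reversed gradient reaches the feature extractor

class DomainClassifier(nn.Module):
    def __init__(self, feat_dim=256, num_domains=20, lam=1.0):
        super().__init__()
        self.lam = lam
        self.head = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                  nn.Linear(256, num_domains))

    def forward(self, part_feats):
        # part_feats: (N, feat_dim) foreground part features (background excluded,
        # in the spirit of the part-oriented feature query described above).
        reversed_feats = GradReverse.apply(part_feats, self.lam)
        return self.head(reversed_feats)

# Usage (sketch): add a cross-entropy loss on the predicted object-category "domain"
# to the usual segmentation losses; the reversed gradient encourages the extractor
# to produce domain-invariant part features.
```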
Deng_NeRDi_Single-View_NeRF_Synthesis_With_Language-Guided_Diffusion_As_General_Image_CVPR_2023
Abstract 2D-to-3D reconstruction is an ill-posed problem, yet hu-mans are good at solving this problem due to their prior knowledge of the 3D world developed over years. Driven by this observation, we propose NeRDi , a single-view NeRF synthesis framework with general image priors from 2D diffusion models. Formulating single-view reconstruction as an image-conditioned 3D generation problem, we op-timize the NeRF representations by minimizing a diffusion loss on its arbitrary view renderings with a pretrained im-age diffusion model under the input-view constraint. We leverage off-the-shelf vision-language models and introduce a two-section language guidance as conditioning inputs to the diffusion model. This is essentially helpful for improving multiview content coherence as it narrows down the general image prior conditioned on the semantic and visual features of the single-view input image. Additionally, we introduce a geometric loss based on estimated depth maps to regular-ize the underlying 3D geometry of the NeRF . Experimentalresults on the DTU MVS dataset show that our method can synthesize novel views with higher quality even compared to existing methods trained on this dataset. We also demon-strate our generalizability in zero-shot NeRF synthesis for in-the-wild images.
1. Introduction Novel view synthesis is a long-standing problem in computer vision and computer graphics. Recent progress in neural rendering, such as NeRFs [22], has made huge strides in novel view synthesis. Given a set of multi-view images with known camera poses, NeRFs represent a static 3D scene as a radiance field parametrized by a neural network, which enables rendering at novel views with the learned network. A line of work has been focusing on reducing the required inputs to NeRF reconstruction, ranging from dense inputs with calibrated camera poses to sparse images [11, 25, 52] with noisy or without camera poses [47]. Yet the problem of NeRF synthesis from a single view remains challenging due to its ill-posed nature, as a one-to-one correspondence from a 2D image to a 3D scene does not exist. Most existing works formulate this as a reconstruction problem and tackle it by training a network to predict the NeRF parameters from the input image [8, 52]. But they require matched multiview images with calibrated camera poses as supervision, which is inaccessible in many cases, such as images from the Internet or captured by non-expert users with mobile devices. Recent attempts have focused on relaxing this constraint by using unsupervised training with novel-view adversarial losses and self-consistency [21, 51]. But they still require the test cases to follow the training distribution, which limits their generalizability. There is also work [44] that aggregates priors learned on synthetic multi-view datasets and transfers them to in-the-wild images using data distillation. But it misses fine details and generalizes poorly to unseen categories.
Despite the difficulty of 2D-to-3D mapping for computers, it is actually not a difficult task for human beings. Humans gain knowledge of the 3D world through daily observations and form a common sense of how things should and should not look. Given a specific image, they can quickly narrow down their prior knowledge to the visual input. This makes humans good at solving ill-posed perception problems like single-view 3D reconstruction. Inspired by this, we propose a single-image NeRF synthesis framework without 3D supervision by leveraging a large-scale diffusion-based 2D image generation model (Figure 1). Given an input image, we optimize for a NeRF by minimizing an image distribution loss for arbitrary-view renderings with the diffusion model conditioned on the input image. An unconstrained image diffusion model is the ‘general prior’, which is inclusive but also vague. To narrow down the prior knowledge and relate it to the input image, we design a two-section semantic feature as the conditioning input to the diffusion model. The first section is the image caption, which carries the overall semantics; the second is a text embedding extracted from the input image with textual inversion [9], which captures additional visual cues. These two sections of language guidance facilitate realistic NeRF synthesis with semantic and visual coherence between different views. In addition, we introduce a geometric loss based on the estimated depth of the input view for regularizing the underlying 3D structure.
Learned with all the guidance and constraints, our model is able to leverage the general image prior and perform zero-shot NeRF synthesis on single im-age inputs. Experimental results show that we can generate high quality novel views from diverse in-the-wild images. To summarize, our key contributions are: We formulate single-view reconstruction as a condi-tioned 3D generation problem and propose a single-image NeRF synthesis framework without 3D supervi-sion, using 2D priors from diffusion models trained on large image datasets. We design a two-section semantic guidance to narrow down the general prior knowledge conditioned on the in-put image, enforcing synthesized novel views to be se-mantically and visually coherent. We introduce a geometric regularization term on esti-mated depth maps with 3D uncertainties. We validate our zero-shot novel view synthesis results on the DTU MVS [12] dataset, achieving higher quality than supervised baselines. We also demonstrate our capabil-ity of generating novel-view renderings with high visual quality on in-the-wild images.
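To illustrate the kind of diffusion-prior supervision on rendered views described above, here is a minimal score-distillation-style sketch. All interfaces (`nerf.render`, `scheduler.add_noise`, `eps_model`) are hypothetical placeholders, and the surrogate loss is one common way to implement such a gradient; the paper's exact loss and conditioning pipeline may differ.

```python
import torch

def diffusion_prior_loss(nerf, eps_model, scheduler, text_embed, pose, steps=1000):
    """One optimization step's prior loss on a rendering at a sampled camera pose.
    text_embed stands for the two-section (caption + textual-inversion) conditioning."""
    image = nerf.render(pose)                              # (1, 3, H, W) rendering in [0, 1]
    t = torch.randint(1, steps, (1,), device=image.device) # random diffusion timestep
    noise = torch.randn_like(image)
    noisy = scheduler.add_noise(image, noise, t)           # forward process q(x_t | x_0)
    with torch.no_grad():
        pred = eps_model(noisy, t, text_embed)             # frozen 2D diffusion model
    grad = pred - noise                                    # score-distillation-style gradient
    # Surrogate whose gradient w.r.t. the rendering equals `grad`; gradients flow
    # only into the NeRF parameters through `image`.
    return (image * grad.detach()).sum()
```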
Dessalene_Therbligs_in_Action_Video_Understanding_Through_Motion_Primitives_CVPR_2023
Abstract In this paper we introduce a rule-based, compositional, and hierarchical modeling of action using Therbligs as our atoms. Introducing these atoms provides us with a con-sistent, expressive, contact-centered representation of ac-tion. Over the atoms we introduce a differentiable method of rule-based reasoning to regularize for logical consistency. Our approach is complementary to other approaches in that the Therblig-based representations produced by our archi-tecture augment rather than replace existing architectures’ representations. We release the first Therblig-centered an-notations over two popular video datasets -EPIC Kitchens 100 and 50-Salads. We also broadly demonstrate bene-fits to adopting Therblig representations through evalua-tion on the following tasks: action segmentation, action anticipation, and action recognition -observing an aver-age 10.5%/7.53%/6.5% relative improvement, respectively, over EPIC Kitchens and an average 8.9%/6.63%/4.8% rel-ative improvement, respectively, over 50 Salads. Code and data will be made publicly available.
1. Introduction We propose the use of Therbligs - a low-level, mutually exclusive, contact-demarcated set of sub-actions. These Therbligs are consistent in that a given action segment has only a single Therblig representation, and Therbligs are expressive in that they capture the meaningful physical aspects of action relevant to action modeling. Therbligs were introduced in the early 20th century as a set of 18 elemental motions used to analyze complex movement - see the Supplementary Materials for a brief historical background. We adopt 7 Therbligs pertaining to those involving the manipulation of objects. See Figure 1 for our Therblig set.
The benefits of our Therblig-centered framework include: compositionality & hierarchy; rule-based reasoning; resolution of semantic ambiguity; and contact-centered precision of the temporal boundaries of action.
Contact transitions demarcate Therblig boundaries, giving Therbligs a consistency which methods relying on annotators' intuited demarcations lack (see [1] for a case study on how annotators have difficulty coming to a consensus on when actions begin and end). Between points of contact exist contact states, represented by a binary class (contact, no contact) for each object present, which are wholly captured by Therbligs. As objects in contact are the primary objects of interaction and define the space of possible actions, they provide meaningful information for the modeling of action.
Therblig atoms are then composable into higher entities, including full actions. These actions are in turn composable into sequences constituting activities. We then have the hierarchy of representation illustrated in Figure 2. At the lowest, and instantaneous, level are points of contact, between which exist Therbligs with temporal extension, on top of which exist actions, permutations of which constitute longer activities.
Figure 1. Listed above are the Therbligs we select, their symbolic illustrations as introduced by the Gilbreths, and brief descriptions of their usage.
Figure 2. We introduce the use of Therbligs (t_i) in video understanding as a consistent, expressive, symbolic representation of sub-action. Points of Contact (indicated by the divider dashes) are necessarily associated with Therbligs and/or their boundaries. Because of the unambiguity of Points of Contact, Therblig boundaries gain precision and are non-overlapping. On top of Therblig atoms we construct a framework for Rule Enforcement, enforcing greater logical consistency through commonsense rules. This rule-based framework allows for the easy introduction of long-term constraints. Therblig atoms are then composable into actions (a_i), which are in turn composable into activities.
Unlike higher level actions, Therbligs enable the im-posing of a contact-based logic defining preconditions and postconditions in the form of states of contact before and after Therbligs. For example, an object being moved must be preceded by grasp and proceeded by a release . The rules of this logic interface at the Therblig level of the hierarchy. These rules allow for biasing towards consistency between contact states and Therblig predictions within a loss term (see Section 3.2.2), and provide constraints over possible action sequences (see Section 3.2.1). In producing sub-action level symbolic representations, our proposed hierarchical architecture is comprised of two main components; the Therblig-Model , which maps video to Therbligs; and, the Action-Model , which maps video and Therbligs to actions. The Therblig-Model is optimized over a loss including structure-aware terms for contact con-sistency and Therblig consistency by incorporating differ-entiable reasoning. Figure 3 illustrates our architecture. 2Some additional structure is needed for complete mutual exclusivity -see the Supplementary Materials for discussion on this structure.This architecture is complementary to, rather than in com-petition with, existing architectures for action modeling -Therblig representations can be easily integrated through concatenation with existing feature representations. We demonstrate this with two state-of-the-art approaches to ac-tion segmentation -MSTCN++ [15] and ASFormer [22] and four popular approaches to action recognition -I3D [5], ViViT [2], TimeSFormer [3], and MoViNet [13]. We evaluate our approach over the tasks of action segmentation, action recognition, and action anticipation. We evaluate over the EPIC Kitchens 100 and 50-Salads datasets. The primary contributions of our work are as follows: • Therbligs, a consistent, expressive symbolic represen-tation of sub-action centered on contact. • Rules: Flexible and differentiable constraining of in-tuitive constraints on arrangement of atomic actions, informed by commonsense rules. • Novel hierarchical architecture composed of a Therblig-Model and Action-Model. Representations produced by the Therblig-Model can be easily inte-grated into other approaches, as we demonstrate with six popular action understanding approaches. • Dataset: We release the first Therblig-centered annota-tions over two popular video datasets. The rest of this paper is structured as follows: Section 2 discusses related works, Section 3 introduces our proposed method, Section 4 describes the experiments, in Section 5 we provide discussion and in Section 6 we conclude. 10619 Figure 3. Architectural diagram of our framework. Therblig-Model takes a stack of K= 100 frames as input, feeding them to various backbone video architectures followed by a 2-layer GRU (Backbone + LSTM), which in turn produces hidden states ht i, passing through allT/100stacks of the video. Hidden states ht iare fed to fully connected layers, followed by a Gumbel-Softmax operation, producing Therblig predictions amenable to differentiable reasoning. Action-Model takes a sliding window with window size Wover the original video sequence with stride s, both values depending on the choice of architecture. These windows are fed to ϕ, an attention mechanism consisting of a 2layer MLP -this MLP attends over the hidden states produced by Therblig-Model . The blended features produced by ϕ are fed together with the video window to a (Video Network) predicting action class likelihood a. 
See Sections 3.1.1 and 3.1.2 for details.
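To make the contact-based logic concrete, below is an illustrative (non-differentiable) check of the kind of precondition/postcondition rules described above, e.g., a move must be preceded by a grasp and followed by a release. The Therblig names and rule table are simplified placeholders; the paper enforces an analogous constraint as a differentiable loss over Gumbel-Softmax predictions rather than a hard count.

```python
# (precondition, postcondition) on the hand-object contact state, illustrative only
THERBLIG_RULES = {
    "reach":   ("no_contact", "no_contact"),
    "grasp":   ("no_contact", "contact"),
    "move":    ("contact",    "contact"),
    "release": ("contact",    "no_contact"),
}

def contact_violations(therblig_seq, initial_state="no_contact"):
    """Count contact-logic violations in a predicted Therblig sequence."""
    state, violations = initial_state, 0
    for t in therblig_seq:
        pre, post = THERBLIG_RULES[t]
        if pre != state:
            violations += 1        # e.g., a `move` that is not preceded by a `grasp`
        state = post
    return violations

# contact_violations(["reach", "grasp", "move", "release"])  -> 0
# contact_violations(["move", "release"])                    -> 1 (move without grasp)
```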
Jiang_InstantAvatar_Learning_Avatars_From_Monocular_Video_in_60_Seconds_CVPR_2023
Abstract In this paper, we take one step further towards real-world applicability of monocular neural avatar reconstruction by contributing InstantAvatar, a system that can reconstruct human avatars from a monocular video within seconds, and these avatars can be animated and rendered at an interactive rate. To achieve this efficiency we propose a carefully designed and engineered system that leverages emerging acceleration structures for neural fields, in combination with an efficient empty-space skipping strategy for dynamic scenes. We also contribute an efficient implementation that we will make available for research purposes. Compared to existing methods, InstantAvatar converges 130× faster and can be trained in minutes instead of hours. It achieves comparable or even better reconstruction quality and novel pose synthesis results. When given the same time budget, our method significantly outperforms SoTA methods. InstantAvatar can yield acceptable visual quality in as little as 10 seconds of training time. For code and more demo results, please refer to https://ait.ethz.ch/InstantAvatar.
1. Introduction Creating high-fidelity digital humans is important for many applications, including immersive telepresence, AR/VR, 3D graphics, and the emerging metaverse. Currently, acquiring personalized avatars is an involved process that typically requires the use of calibrated multi-camera systems and incurs significant computational cost. In this paper, we embark on the quest to build a system for the learning of 3D virtual humans from monocular video alone that is lightweight enough to be widely deployable and fast enough to allow for walk-up-and-use scenarios.
The emergence of powerful neural fields has enabled a number of methods for the reconstruction of animatable avatars from monocular videos of moving humans [1, 2, 6, 49, 62]. These methods typically model human shape and appearance in a pose-independent canonical space. To reconstruct the model from images that depict humans in different poses, such methods must use animation (e.g., skinning) and rendering algorithms to deform and render the model into posed space in a differentiable way. This mapping between posed and canonical space allows optimization of network weights by minimizing the difference between the generated pixel values and real images. Especially methods that leverage neural radiance fields (NeRFs) [40] as the canonical model have demonstrated high-fidelity avatar reconstruction results. However, due to the dual need for differentiable deformation modules and for volume rendering, these models require hours of training time and cannot be rendered at interactive rates, prohibiting their broader application. In this paper, we aim to take a further step toward real-world applicability of monocular neural avatar reconstruction by contributing a method that takes no longer for reconstruction than it takes to capture the input video. To this end, we propose InstantAvatar, a system that reconstructs high-fidelity avatars within 60 seconds, instead of hours, given a monocular video, pose parameters, and masks. Once learned, the avatar can be animated and rendered at interactive rates.
Achieving such a speed-up is clearly a challeng-ing task that requires careful method design, requires fast differentiable algorithms for rendering and articulation, and requires efficient implementation. Our simple yet highly efficient pipeline combines sev-eral key components. First, to learn the canonical shape and appearance we leverage a recently proposed neural radiance field variant [42]. Instant-NGP [42] accelerates neural vol-ume rendering by replacing multi-layer perceptrons (MLPs) with a more efficient hash table as data structure. How-ever, because the spatial features are represented explicitly, Instant-NGP is limited to rigid objects. Second, to enable learning from posed observations and to be able to animate the avatar, we interface the canonical NeRF with an efficient articulation module, Fast-SNARF [7], which efficiently de-rives a continuous deformation field to warp the canonical radiance field into the posed space. Fast-SNARF is orders of magnitude faster compared to its slower predecessor [9]. Finally, simply integrating existing acceleration tech-niques is not sufficient to yield the desired efficiency. With acceleration structures for the canonical space and a fast ar-ticulation module in place, rendering the actual volume be-comes the computational bottleneck. To compute the color of a pixel, standard volume rendering needs to query and accumulate densities of hundreds of points along the ray. A common approach to accelerating this is to maintain an occupancy grid to skip samples in the empty space. How-ever, such an approach assumes rigid scenes and can not be applied to dynamic scenes such as humans in motion. We propose an empty space skipping scheme that is de-signed for dynamic scenes with known articulation patterns. At inference time, for each input body pose, we sample points on a regular grid in posed space and map them back to the canonical model to query densities. Thresholding these densities yields an occupancy grid in canonical space, which can then be used to skip empty space during volume rendering. For training, we maintain a shared occupancy grid over all training frames, recording the union of occu-pied regions over individual frames. This occupancy grid isupdated every few training iterations with the densities of randomly sampled points, in the posed space of randomly sampled frames. This scheme balances computational effi-ciency and rendering quality. We evaluate our method on both synthetic and real monocular videos of moving humans and compare it with state-of-the-art methods on monocular avatar reconstruc-tion. Our method achieves on-par reconstruction quality and better animation quality in comparison to SoTA meth-ods, while only requiring minutes of training time instead of more than 10 hours. When given the same time budget, our method significantly outperforms SoTA methods. We also provide an ablation study to demonstrate the effect of our system’s components on speed and accuracy. 2. Related Work 3D Human Reconstruction Reconstructing 3D human appearance and shape is a long-standing problem. High-quality reconstruction has been achieved in [12, 15, 19, 36] by fusing observations from a dense array of cameras or depth sensors. The expensive hardware requirement limits such methods to professional settings. Recent work [1,2,17, 20, 21, 28, 65] demonstrates 3D human reconstruction from a monocular video by leveraging personalized or generic template mesh models such as SMPL [35]. 
These methods reconstruct 3D humans by deforming the template to fit 2D joints and silhouettes. However, a personalized template mesh might not be available in many scenarios, and a generic template mesh cannot model high-fidelity details and different clothing topologies. Recently, neural representations [37, 41, 45, 46] have emerged as a powerful tool to model 3D humans [3, 6, 8, 10, 11, 13, 14, 22–26, 30, 31, 34, 38, 39, 43, 44, 48, 49, 52, 53, 57, 59–64, 67, 69, 70]. Using neural representations, many works [6, 18, 26, 27, 30, 34, 43, 48, 49, 61, 62, 64, 69] can directly reconstruct high-fidelity neural human avatars from a sparse set of views or a monocular video without a pre-scanned personalized template. These methods model 3D human shape and appearance via a neural radiance field [40] or a signed distance and texture field in a pose-independent canonical space, and then deform and render the model into various body poses in order to learn from posed observations. While these methods achieve impressive quality and can learn avatars from a monocular video, they suffer from slow training and rendering due to the cost of the canonical representation as well as the deformation algorithms. Our method addresses this issue and enables learning avatars within minutes.
Accelerating Neural Radiance Field Several methods have been proposed to improve the training and inference speed of neural representations [5, 16, 29, 32, 33, 42, 51, 54–56, 66]. The core idea is to replace MLPs in neural representations with more efficient representations. A few works [33, 54, 66] propose to use voxel grids to represent neural fields and achieve fast training and inference speed. Instant-NGP [42] further replaces dense voxels with a multi-resolution hash table, which is more memory efficient and hence can record high-frequency details. Besides improving the efficiency of the representation, several works [29, 32, 42] also improve the rendering efficiency by skipping empty space via an occupancy grid to further increase training and inference speed. While achieving impressive quality and training efficiency, these methods are specifically designed for rigid objects. Generalizing them to non-rigid objects is not straightforward. We combine Instant-NGP with a recent articulation algorithm to enable animation and learning from posed observations. In addition, we propose an empty space skipping scheme for dynamic articulated humans.
3. Method Given a monocular video of a moving human, our primary goal is to reconstruct a 3D human avatar within a tight computational budget. In this section, we first describe the preliminaries that our method is based on (Sec. 3.1), which include an accelerated neural radiance field that we use to model the appearance and shape in canonical space and an efficient articulation module to deform the canonical radiance field into posed space. We then describe our implementation of the volumetric renderer that produces images from the radiance fields in an efficient manner
(Sec. 3.2). To avoid inefficient sampling of empty space, we leverage the observation that the 3D bounding box around the human body is dominated by empty space. We then propose an empty space skipping scheme specifically designed for humans (Sec. 3.3). Finally, we discuss training objectives and regularization strategies (Sec. 3.4).
3.1. Preliminaries
Efficient Canonical Neural Radiance Field We model human shape and appearance in a canonical space using a radiance field $f_{\sigma_f}$, which predicts the density $\sigma$ and color $c$ of each 3D point $x$ in the canonical space:
$$f_{\sigma_f}: \mathbb{R}^3 \rightarrow \mathbb{R}^{+}, \mathbb{R}^3 \quad (1)$$
$$x \mapsto \sigma, c \quad (2)$$
where $\sigma_f$ are the parameters of the radiance field. We use Instant-NGP [42] to parameterize $f_{\sigma_f}$, which achieves fast training and inference speed by using a hash table to store feature grids at different coarseness scales. To predict the texture and geometry properties of a query point in space, it reads and tri-linearly interpolates the features at its neighboring grid points and then concatenates the interpolated features at different levels. The concatenated features are finally decoded with a shallow MLP.
Articulating Radiance Fields To create animations and to learn from posed images, we need to generate deformed radiance fields in target poses $f'_{\sigma_f}$. The posed radiance field is defined as
$$f'_{\sigma_f}: \mathbb{R}^3 \rightarrow \mathbb{R}^{+}, \mathbb{R}^3 \quad (3)$$
$$x' \mapsto \sigma, c, \quad (4)$$
which outputs color and density for each point in posed space. We use a skinning weight field $w$ in canonical space to model articulation, with $\sigma_w$ being its parameters:
$$w_{\sigma_w}: \mathbb{R}^3 \rightarrow \mathbb{R}^{n_b}, \quad (5)$$
$$x \mapsto w_1, \ldots, w_{n_b}, \quad (6)$$
where $n_b$ is the number of bones in the skeleton. To avoid the computational cost of [9], [7] represents this skinning weight field as a low-resolution voxel grid. The value of each grid point is determined as the skinning weights of its nearest vertex on the SMPL [35] model. With the canonical skinning weight field and target bone transformations $B = \{B_1, \ldots, B_{n_b}\}$, a point $x$ in canonical space is transformed to deformed space $x'$ via linear blend skinning as follows:
$$x' = \sum_{i=1}^{n_b} w_i B_i x \quad (7)$$
The canonical correspondences $x^{*}$ of a deformed point $x'$ are defined by the inverse mapping of Equation 7. The key is to establish the mapping from points in posed space $x'$ to their correspondences in the canonical space $x^{*}$. This is efficiently derived by root-finding in Fast-SNARF [7]. The posed radiance field $f'_{\sigma_f}$ can then be determined as $f'_{\sigma_f}(x') = f_{\sigma_f}(x^{*})$.
3.2. Rendering Radiance Fields The articulated radiance field $f'_{\sigma_f}$ can be rendered into novel views via volume rendering. Given a pixel, we cast a ray $r = o + t d$ with $o$ being the camera center and $d$ being the ray direction. We sample $N$ points $\{x'_i\}_N$ along the ray between the near and far bound, and query the color and density of each point from the articulated radiance field $f'_{\sigma_f}$ by mapping $\{x'_i\}_N$ back to the canonical space and querying from the canonical NeRF model $f_{\sigma_f}$, as illustrated in Fig. 2. We then accumulate the queried radiance and density along the ray to get the pixel color $C$:
$$C = \sum_{i=1}^{N} \alpha_i \prod_{j<i} (1 - \alpha_j)\, c_i, \quad \text{with } \alpha_i = 1 - \exp(-\sigma_i \delta_i) \quad (8)$$
Figure 2. Method Overview. For each frame, we sample points along the rays in posed space. We then transform these points into a normalized space where the global orientation and translation are removed, and then filter points in empty space using our occupancy grid.
The remaining points are deformed to canonical space using an articulation module and then fed into the canonical neural radiance field to evaluate the color and density.
where $\delta_i = \lVert x'_{i+1} - x'_i \rVert$ is the distance between samples. While the acceleration modules of Sec. 3.1 already achieve a significant speed-up over the vanilla variants (NeRF [40], SNARF [9]), the rendering itself now becomes the bottleneck. In this paper, we optimize the process of neural rendering, specifically for the use-case of dynamic humans.
3.3. Empty Space Skipping for Dynamic Objects We note that the 3D bounding box surrounding the human body is dominated by empty space due to the articulated structure of 3D human limbs. This results in a large number of redundant sample queries during rendering and hence significantly slows down rendering. For rigid objects, this problem is eliminated by caching a coarse occupancy grid and skipping samples within non-occupied grid cells. However, for dynamic objects, the exact location of empty space varies across different frames, depending on the pose.
Inference Stage At inference time, for each input body pose, we sample points on a 64×64×64 grid in posed space and query their densities from the posed radiance field $f'_{\sigma_f}$. We then threshold these densities into binary occupancy values. To remove cells that have been falsely labeled as empty due to the low spatial resolution, we dilate the occupied region to fully cover the subject. Due to the low resolution of this grid and the large number of queries required to render an image, the overhead to construct such an occupancy grid is negligible. During volumetric rendering, for point samples inside the non-occupied cells, we directly set their density to zero without querying the posed radiance field $f'_{\sigma_f}$. This reduces unnecessary computation to a minimum and hence improves the inference speed.
Training Stage During training, however, the overhead to construct such an occupancy grid at each training iteration is no longer negligible. To avoid this overhead, we construct a single occupancy grid for the entire sequence by recording the union of occupied regions in each of the individual frames. Specifically, we build an occupancy grid at the start of training and update it every $k$ iterations by taking the moving average of the current occupancy values and the densities queried from the posed radiance field $f'_{\sigma_f}$ at the current iteration. Note that this occupancy grid is defined in a normalized space where the global orientation and translation are factored out, so that the union of the occupied space is as tight as possible and hence unnecessary queries are further reduced.
3.4. Training Losses We train our model by minimizing the robust Huber loss $\rho$ between the predicted color of the pixels $C$ and the corresponding ground-truth color $C_{gt}$:
$$\mathcal{L}_{rgb} = \rho(\lVert C - C_{gt} \rVert) \quad (9)$$
In addition, we assume an estimate of the human mask is available and apply a loss on the rendered 2D alpha values, in order to reduce floating artifacts in space:
$$\mathcal{L}_{alpha} = \lVert \alpha - \alpha_{gt} \rVert_1 \quad (10)$$
Hard Surface Regularization Following [50], we add further regularization to encourage the NeRF model to predict solid surfaces:
$$\mathcal{L}_{hard} = -\log\left(\exp(-|\alpha|) + \exp(-|\alpha - 1|)\right) + \text{const.} \quad (11)$$
where const. is a constant that ensures the loss value is non-negative. Encouraging solid surfaces helps to speed up rendering because we can terminate rays early once the accumulated opacity reaches 1.
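To make the rendering path of Secs. 3.2–3.3 concrete, the sketch below shows how a coarse occupancy grid can be used to skip empty samples before standard volume rendering (Eq. 8). It is a simplified illustration rather than the InstantAvatar implementation: `canonical_field` and `posed_to_canonical` are assumed placeholder callables standing in for the Instant-NGP field and the Fast-SNARF correspondence search, and the sampling scheme is deliberately minimal.

```python
import torch

def render_rays(rays_o, rays_d, canonical_field, posed_to_canonical,
                occupancy_grid, grid_min, grid_max, n_samples=64,
                near=0.0, far=2.0):
    """Volume rendering with empty-space skipping (simplified sketch).

    rays_o, rays_d: (R, 3) ray origins and directions in posed space.
    canonical_field(x) -> (sigma, rgb): canonical radiance field (e.g. Instant-NGP).
    posed_to_canonical(x) -> x*: articulation module (e.g. Fast-SNARF root-finding).
    occupancy_grid: (G, G, G) bool tensor in the normalized space.
    """
    R = rays_o.shape[0]
    t = torch.linspace(near, far, n_samples, device=rays_o.device)      # (S,)
    pts = rays_o[:, None, :] + t[None, :, None] * rays_d[:, None, :]    # (R, S, 3)

    # Look up the coarse occupancy grid; samples in empty cells are skipped.
    G = occupancy_grid.shape[0]
    idx = ((pts - grid_min) / (grid_max - grid_min) * G).long().clamp(0, G - 1)
    occupied = occupancy_grid[idx[..., 0], idx[..., 1], idx[..., 2]]    # (R, S) bool

    sigma = torch.zeros(R, n_samples, device=rays_o.device)
    rgb = torch.zeros(R, n_samples, 3, device=rays_o.device)
    if occupied.any():
        x_posed = pts[occupied]                   # only query non-empty samples
        x_canon = posed_to_canonical(x_posed)     # inverse-LBS correspondence
        s, c = canonical_field(x_canon)
        sigma[occupied], rgb[occupied] = s, c

    # Standard volume rendering, alpha_i = 1 - exp(-sigma_i * delta_i), Eq. (8).
    delta = (t[1:] - t[:-1]).expand(R, n_samples - 1)
    delta = torch.cat([delta, torch.full((R, 1), 1e10, device=rays_o.device)], -1)
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(torch.cat([torch.ones(R, 1, device=rays_o.device),
                                     1.0 - alpha + 1e-10], -1), -1)[:, :-1]
    weights = alpha * trans                        # (R, S)
    color = (weights[..., None] * rgb).sum(dim=1)  # (R, 3)
    opacity = weights.sum(dim=1)                   # rendered alpha per pixel
    return color, opacity
```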
PSNR↑ / SSIM↑ / LPIPS↓ on male-3-casual | male-4-casual | female-3-casual | female-4-casual
Neural Body [49] (∼14 hours): 24.94 / 0.9428 / 0.0326 | 24.71 / 0.9469 / 0.0423 | 23.87 / 0.9504 / 0.0346 | 24.37 / 0.9451 / 0.0382
Anim-NeRF [6] (∼13 hours): 29.37 / 0.9703 / 0.0168 | 28.37 / 0.9605 / 0.0268 | 28.91 / 0.9743 / 0.0215 | 28.90 / 0.9678 / 0.0174
Ours (1 minute): 29.65 / 0.9730 / 0.0192 | 27.97 / 0.9649 / 0.0346 | 27.90 / 0.9722 / 0.0249 | 28.92 / 0.9692 / 0.0180
Anim-NeRF [6] (5 minutes): 23.17 / 0.9266 / 0.0784 | 22.30 / 0.9235 / 0.0911 | 22.37 / 0.9311 / 0.0784 | 23.18 / 0.9292 / 0.0687
Ours (5 minutes): 29.53 / 0.9716 / 0.0155 | 27.67 / 0.9626 / 0.0307 | 27.66 / 0.9709 / 0.0210 | 29.11 / 0.9683 / 0.0167
Anim-NeRF [6] (3 minutes): 19.75 / 0.8927 / 0.1286 | 20.66 / 0.8986 / 0.1414 | 19.77 / 0.9003 / 0.1255 | 20.20 / 0.9044 / 0.1109
Ours (3 minutes): 29.58 / 0.9719 / 0.0157 | 27.83 / 0.9640 / 0.0342 | 27.68 / 0.9708 / 0.0217 | 29.05 / 0.9689 / 0.0263
Anim-NeRF [6] (1 minute): 12.39 / 0.7929 / 0.3393 | 13.10 / 0.7705 / 0.3460 | 11.71 / 0.7797 / 0.3321 | 12.31 / 0.8089 / 0.3344
Ours (1 minute): 29.65 / 0.9730 / 0.0192 | 27.97 / 0.9649 / 0.0346 | 27.90 / 0.9722 / 0.0249 | 28.92 / 0.9692 / 0.0180
Table 1. Quantitative Comparison with SoTA on the PeopleSnapshot [1] dataset. We report PSNR, SSIM and LPIPS [68] between real images and the images generated by our method and two SoTA methods, Neural Body [49] and Anim-NeRF [6]. We compare all three methods at their convergence, and also compare ours with Anim-NeRF at 5 minutes, 3 minutes and 1 minute of training time.
Occupancy-based regularization Previous methods for the learning of human avatars [6, 27] often encourage models to predict zero density for points outside of the surface and solid density for points inside the surface by leveraging the SMPL body model as a regularizer. This is done to reduce artifacts near the body surface. However, such regularization makes heavy assumptions about the shape of the body and does not generalize well for loose clothing. Moreover, we empirically found that this regularization is not effective in removing artifacts near the body. This can be seen in Fig. 3. Instead of using SMPL for regularization, we use our occupancy grid, which is a more conservative estimate of the shape of the subject and the clothing, and define an additional loss $\mathcal{L}_{reg}$ which encourages the points inside the empty cells of the occupancy grid to have zero density:
$$\mathcal{L}_{reg} = \begin{cases} |\sigma(x)| & \text{if } x \text{ is in the empty space} \\ 0 & \text{otherwise} \end{cases} \quad (12)$$
4. Experiments We evaluate the accuracy and speed of our method on monocular videos and compare it with other SoTA methods. In addition, we provide an ablation study to investigate the effect of individual technical contributions.
Datasets PeopleSnapshot We conduct experiments on the PeopleSnapshot [1] dataset, which contains videos of humans rotating in front of a camera. We follow the evaluation protocol defined in Anim-NeRF [6]. The pose parameters provided in this dataset are obtained using SMPLify [4], which do not always align with images. Hence, Anim-NeRF [6] optimizes the poses of training and test frames. For a fair quantitative comparison in Tab. 1, we train our model with the pose parameters optimized by Anim-NeRF and keep them frozen
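As a companion to Eqs. (9)–(12), here is a minimal sketch of how the four training terms could be combined. The loss weights, the element-wise Huber approximation of Eq. (9), the choice of constant in Eq. (11), and the `sample_in_empty_cell` input are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def instant_avatar_losses(pred_rgb, gt_rgb, pred_alpha, gt_alpha,
                          sample_sigma, sample_in_empty_cell,
                          w_rgb=1.0, w_alpha=0.1, w_hard=0.1, w_reg=0.1):
    """Sketch of the training objective (Sec. 3.4); weights are placeholders.

    pred_rgb, gt_rgb:      (R, 3) rendered and ground-truth pixel colors.
    pred_alpha, gt_alpha:  (R,)   rendered and mask-derived alpha values.
    sample_sigma:          (M,)   densities of point samples along the rays.
    sample_in_empty_cell:  (M,)   bool, True if a sample lies in an empty grid cell.
    """
    # Eq. (9): robust Huber (smooth L1) photometric loss, applied element-wise here.
    l_rgb = F.smooth_l1_loss(pred_rgb, gt_rgb)

    # Eq. (10): L1 loss on the rendered alpha against the human mask.
    l_alpha = (pred_alpha - gt_alpha).abs().mean()

    # Eq. (11): hard-surface regularization, pushing alpha toward 0 or 1.
    a = pred_alpha
    l_hard = (-torch.log(torch.exp(-a.abs()) + torch.exp(-(a - 1).abs()))).mean()
    l_hard = l_hard + torch.log(torch.tensor(2.0))   # constant keeps the term >= 0

    # Eq. (12): penalize non-zero density for samples inside empty occupancy cells.
    l_reg = (sample_sigma.abs() * sample_in_empty_cell.float()).mean()

    return w_rgb * l_rgb + w_alpha * l_alpha + w_hard * l_hard + w_reg * l_reg
```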
Hu_You_Only_Segment_Once_Towards_Real-Time_Panoptic_Segmentation_CVPR_2023
Abstract In this paper, we propose YOSO, a real-time panoptic segmentation framework. YOSO predicts masks via dy-namic convolutions between panoptic kernels and image feature maps, in which you only need to segment once for both instance and semantic segmentation tasks. To reduce the computational overhead, we design a feature pyramid aggregator for the feature map extraction, and a separa-ble dynamic decoder for the panoptic kernel generation. The aggregator re-parameterizes interpolation-first mod-ules in a convolution-first way, which significantly speeds up the pipeline without any additional costs. The decoder performs multi-head cross-attention via separable dynamic convolution for better efficiency and accuracy. To the best of our knowledge, YOSO is the first real-time panoptic seg-mentation framework that delivers competitive performance compared to state-of-the-art models. Specifically, YOSO achieves 46.4 PQ, 45.6 FPS on COCO; 52.5 PQ, 22.6 FPS on Cityscapes; 38.0 PQ, 35.4 FPS on ADE20K; and 34.1 PQ, 7.1 FPS on Mapillary Vistas. Code is available at https://github.com/hujiecpp/YOSO .
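To illustrate the "segment once" idea in the abstract — masks produced by dynamically convolving panoptic kernels with a shared feature map — here is a hedged sketch. The tensor shapes, the number of kernels, and the sigmoid/argmax post-processing are assumptions for illustration, not YOSO's exact heads.

```python
import torch

def predict_panoptic_masks(panoptic_kernels, feature_map):
    """Dynamic convolution between kernels and features (illustrative sketch).

    panoptic_kernels: (N, C)    one kernel per predicted segment (stuff or thing).
    feature_map:      (C, H, W) per-pixel features from the feature pyramid aggregator.
    Returns mask logits of shape (N, H, W): one pass covers both semantic and
    instance masks, i.e. "you only segment once".
    """
    return torch.einsum('nc,chw->nhw', panoptic_kernels, feature_map)

# Toy usage with assumed sizes.
kernels = torch.randn(100, 256)           # 100 panoptic kernels (assumed count)
features = torch.randn(256, 200, 320)     # aggregated feature map
mask_logits = predict_panoptic_masks(kernels, features)    # (100, 200, 320)
panoptic_seg = mask_logits.sigmoid().argmax(dim=0)          # toy per-pixel segment id
```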
1. Introduction Panoptic segmentation is a task that involves assigning a semantic label and an instance identity to each pixel of an input image. The semantic labels are typically classified into two types, i.e., stuff, including amorphous and uncountable concepts (such as sky and road), and things, consisting of countable categories (such as persons and cars). This division of label types naturally separates panoptic segmentation into two sub-tasks: semantic segmentation for stuff and instance segmentation for things. Thus, one of the major challenges for achieving real-time panoptic segmentation is the requirement for separate and computationally intensive branches to perform semantic and instance segmentation respectively. Typically, instance segmentation employs boxes or points to distinguish between different things, while semantic segmentation predicts distribution maps over semantic categories for stuff. (*Corresponding author.) As shown in Fig. 1, numerous efforts [9, 16, 23, 24, 31, 45] have been made to unify panoptic segmentation pipelines for improved speed and accuracy. However, achieving real-time panoptic segmentation still remains an open problem. On the one hand, heavy necks, e.g., the multi-scale feature pyramid network (FPN) used in [27, 54], and heads, e.g., the Transformer decoder used in [10, 58], are required to ensure accuracy, making real-time processing unfeasible. On the other hand, reducing the model size [16, 23, 24] leads to a decrease in model generalization. Therefore, developing a real-time panoptic segmentation framework that delivers competitive accuracy is challenging yet highly desirable.
In this paper, we present YOSO, a real-time panoptic segmentation framework. YOSO predicts panoptic kernels to convolute image feature maps, with which you only need to segment once for the masks of background stuff and foreground things. To make the process lightweight, we design a feature pyramid aggregator for extracting image feature maps, and a separable dynamic decoder for generating panoptic kernels. In the aggregator, we propose convolution-first aggregation (CFA) to re-parameterize the interpolation-first aggregation (IFA), resulting in an approximately 2.6× speedup in GPU latency without compromising performance. Specifically, we demonstrate that the order, i.e., interpolation-first or convolution-first, of applying bilinear interpolation and 1×1 convolution (w/o bias) does not affect results, but the convolution-first way provides a considerable speedup to the pipeline. In the decoder, we propose separable dynamic convolution attention (SDCA) to perform multi-head cross-attention in a weight-sharing way. SDCA achieves better accuracy (+1.0 PQ) and higher efficiency (approximately 1.2× faster GPU latency) than traditional multi-head cross-attention.
In general, YOSO has three notable advantages. First, CFA reduces the computational burden without re-training the model or compromising performance. CFA can be adapted to any task that uses the combination of bilinear interpolation and 1×1 convolution operations. Second, SDCA performs multi-head cross-attention with better accuracy and efficiency. Third, YOSO runs faster and has competitive accuracy compared to state-of-the-art panoptic segmentation models, and its generalization is validated on four popular datasets: COCO (46.4 PQ, 45.6 FPS), Cityscapes (52.5 PQ, 22.6 FPS), ADE20K (38.0 PQ, 35.4 FPS), and Mapillary Vistas (34.1 PQ, 7.1 FPS).
Figure 1. Towards real-time panoptic segmentation. (a) Semantic and instance segmentation are performed using a shared FPN but separated task branches (e.g., in PanopticFPN [27] and UPSNet [54]). (b) Semantic segmentation generates masks for all categories, and instance recognition is achieved by object detection using boxes or points (e.g., in RealTimePan [24] and PanopticDeepLab [9]). (c) Kernels for stuff and things are generated to convolute image feature maps via heavy modules (e.g., in PanopticFCN [31], K-Net [58], and MaskFormer [10, 11]). (d) YOSO employs an efficient feature pyramid aggregator and a lightweight separable dynamic decoder to produce image feature maps and panoptic kernels. The figures do not include input images and the backbone for concision.
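The claim behind CFA — that a 1×1 convolution without bias commutes with bilinear interpolation, so the cheaper convolution-first order gives the same result — can be checked numerically. The snippet below is a small verification sketch; the channel counts and the 2× upsampling factor are arbitrary choices, not YOSO's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 256, 32, 32)                 # toy feature map
conv1x1 = nn.Conv2d(256, 128, kernel_size=1, bias=False)

# Interpolation-first aggregation (IFA): upsample, then 1x1 conv.
ifa = conv1x1(F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False))

# Convolution-first aggregation (CFA): 1x1 conv on the small map, then upsample.
cfa = F.interpolate(conv1x1(x), scale_factor=2, mode='bilinear', align_corners=False)

# Both orders are linear (per-pixel in channels, per-channel in space), so they
# agree up to floating-point error, while CFA runs the conv on 4x fewer pixels.
print(torch.allclose(ifa, cfa, atol=1e-4))      # True
```

This is why CFA can be applied to an already-trained model: it changes the order of two linear operations, not the function they compute.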
Chen_ViewNet_A_Novel_Projection-Based_Backbone_With_View_Pooling_for_Few-Shot_CVPR_2023
Abstract Although different approaches have been proposed for 3D point cloud-related tasks, few-shot learning (FSL) of 3D point clouds still remains under-explored. In FSL, un-like traditional supervised learning, the classes of training and test data do not overlap, and a model needs to rec-ognize unseen classes from only a few samples. Existing FSL methods for 3D point clouds employ point-based mod-els as their backbone. Yet, based on our extensive experi-ments and analysis, we first show that using a point-based backbone is not the most suitable FSL approach, since (i) a large number of points’ features are discarded by the max pooling operation used in 3D point-based backbones, decreasing the ability of representing shape information; (ii) point-based backbones are sensitive to occlusion. To address these issues, we propose employing a projection-and 2D Convolutional Neural Network-based backbone, re-ferred to as the ViewNet, for FSL from 3D point clouds. Our approach first projects a 3D point cloud onto six dif-ferent views to alleviate the issue of missing points. Also, to generate more descriptive and distinguishing features, we propose View Pooling, which combines different projected plane combinations into five groups and performs max-pooling on each of them. The experiments performed on the ModelNet40, ScanObjectNN and ModelNet40-C datasets, with cross validation, show that our method consistently outperforms the state-of-the-art baselines. Moreover, com-pared to traditional image classification backbones, such as ResNet, the proposed ViewNet can extract more distinguish-ing features from multiple views of a point cloud. We also show that ViewNet can be used as a backbone with different FSL heads and provides improved performance compared to traditionally used backbones. *The information, data, or work presented herein was funded in part by National Science Foundation under Grant 1816732 and Federal Highway Administration Exploratory Advanced Research Program under Agree-ment No. 693JJ31950022. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Govern-ment or any agency thereof.1. Introduction 3D point cloud data has a wide range of applications including robotics, self driving cars and simultaneous lo-calization and mapping (SLAM). In recent years, different approaches have been proposed for traditional point cloud-related tasks, such as point cloud classification, segmenta-tion and object detection. Yet, few-shot learning of 3D point clouds remains relatively under-explored. In contrast to structured 2D images, a 3D point cloud is a set of unordered points. Thus, traditional Convolution Neural Networks (CNNs) cannot be directly used with 3D point clouds. To address this, PointNet [14] was proposed, which employs a max pooling operation to obtain permutation invariant fea-tures. This has been shown to be effective in capturing 3D objects’ shape, and could be used for downstream tasks, such as point cloud classification and segmentation. How-ever, in PointNet, each point’s features are learned inde-pendently, and features from neighboring points are not ag-gregated. Thus, later works presented different approaches, wherein a better representation can be learned by incorpo-rating features from neighboring points [15,22,24,25]. De-spite having different network structures, these point-based methods all employ a max pooling module to obtain permu-tation invariant features for the downstream tasks. 
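As a quick illustration of the permutation-invariant max pooling shared by these point-based backbones, the sketch below applies a shared per-point MLP and pools over the point dimension. The layer sizes are arbitrary, and this is a generic PointNet-style stub rather than any specific backbone.

```python
import torch
import torch.nn as nn

class PointMaxPoolBackbone(nn.Module):
    """Shared per-point MLP followed by max pooling over points (generic sketch)."""
    def __init__(self, in_dim=3, feat_dim=1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, points):                        # points: (B, N, 3)
        per_point = self.mlp(points)                  # (B, N, feat_dim)
        global_feat, argmax = per_point.max(dim=1)    # pool over the N points
        return global_feat, argmax                    # argmax: which point "won" each channel

backbone = PointMaxPoolBackbone()
pts = torch.randn(2, 1024, 3)
feat_a, _ = backbone(pts)
feat_b, _ = backbone(pts[:, torch.randperm(1024)])    # shuffle the point order
print(torch.allclose(feat_a, feat_b))                 # True: pooling ignores point order
```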
Traditional supervised learning needs a large number of labeled samples for training, and performs testing on the same classes used in training. In contrast, with few-shot learning (FSL), a model performs prediction on classes which have not been seen during training, with only a few labeled samples provided in a support set. Let $(x, y)$ denote a point cloud sample and its label. In N-way-K-shot FSL, a support set $S = \{(x_i, y_i)\}_{i=1}^{N \times K}$ contains $N$ classes with $K$ samples for each class. A query set $Q = \{x_j\}_{j=1}^{N \times q}$ contains the same classes, with $q$ samples for each class. The model matches each sample in $Q$ with a sample in $S$ to predict the labels of query samples. Support and query sets are used both in training and testing. The model gains the ability to learn the similarities between samples from the same class, and dissimilarities between different classes.
Existing approaches for FSL from point clouds [26, 27] use DGCNN [22], a well-known point-based method, as their backbone due to its simplicity and effectiveness in representing 3D object shapes. In DGCNN, non-local features are learned for each point by aggregating features from different neighbors in each Edge Convolution Layer. At the end of the network, max pooling is performed to obtain permutation invariant features, which are then used for the FSL tasks. In this paper, we first show that point-based methods are not the most suitable backbones for FSL for the following reasons: (i) The representation ability of a point-based method is correlated with the number of points kept after max-pooling [3]. Our extensive experiments show that, in FSL, a point-based backbone utilizes only a small portion of points after max pooling. Considering that classification with FSL is already more challenging than traditional supervised classification, it is even more important to make effective use of the available data points. Discarding 3D points during max-pooling decreases the shape representation ability of a point-based approach; (ii) Real-world point cloud data is affected by occlusions and has missing points, and point-based methods are very sensitive to these issues. For instance, almost all point-based methods [14, 15, 22] perform well on the ModelNet40 [23] dataset, which was generated from CAD models, and thus is not affected by missing point issues. On the other hand, the performances of these methods drop on the ScanObjectNN dataset [21], which was collected by scanning real-world objects.
To address the aforementioned issues, instead of a point-based backbone, we propose a 2D projection-based backbone, referred to as the ViewNet, for FSL of point clouds. The proposed ViewNet is inspired by GaitSet [2], which was proposed for gait recognition from videos. ViewNet is designed by incorporating our proposed novel View Pooling, which extracts more descriptive and distinguishing features from 2D projection images of point clouds, which are then fed into a few-shot head for downstream FSL tasks. More specifically, we project a point cloud onto six orthogonal planes (front, back, left, right, top and bottom) to generate six depth images by using the SimpleView [7] projection method. Some example depth images are shown in Fig. 2.
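A minimal sketch of the six-plane orthographic projection described above (front/back/left/right/top/bottom depth images) follows. The resolution, the normalization, and the choice of keeping the nearest depth per pixel are assumptions for illustration, not the exact SimpleView [7] rendering.

```python
import numpy as np

def project_to_six_depth_images(points, res=128):
    """Project an (N, 3) point cloud onto 6 orthogonal planes as depth images (sketch)."""
    # Normalize the cloud into the unit cube [0, 1]^3.
    pts = points - points.min(axis=0)
    pts = pts / (pts.max() + 1e-8)

    views = []
    for axis in range(3):                 # project along x, y, z
        for sign in (+1, -1):             # two opposite view directions per axis
            uv_axes = [a for a in range(3) if a != axis]
            u = (pts[:, uv_axes[0]] * (res - 1)).astype(int)
            v = (pts[:, uv_axes[1]] * (res - 1)).astype(int)
            depth = pts[:, axis] if sign > 0 else 1.0 - pts[:, axis]

            img = np.full((res, res), np.inf)
            # Keep the nearest depth per pixel (the surface visible from this view).
            np.minimum.at(img, (u, v), depth)
            img[np.isinf(img)] = 0.0      # empty pixels -> background
            views.append(img)
    return np.stack(views)                # (6, res, res)

depth_images = project_to_six_depth_images(np.random.rand(1024, 3))
print(depth_images.shape)                 # (6, 128, 128)
```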
In addition, we propose View Pooling, which combines dif-ferent projected plane combinations into five groups and performs max-pooling on each of them to generate more descriptive features. The experiments performed on the ModelNet40 [23], ScanObjectNN [21] and ModelNet40-C [18] datasets, with cross validation, show that our pro-posed method consistently outperforms the state-of-the-art (SOTA) on the few-shot point cloud classification task. The main contributions of this work include the following: • We first provide an analysis of the commonly used point-based backbones in terms of point utilization, and argue that they are not well-suited for the FSL task especially with real-word point clouds obtained via scanning.• By visualizing projected depth images of point clouds, we have observed that some projections are robust to missing points and deformations. Motivated by this, we propose the ViewNet, a 2D projection-based backbone, for few-shot point cloud classification. • We propose View Pooling to generate more descriptive and distinguishing features. • Our approach achieves SOTA performance on ScanOb-jectNN, ModelNet40-C and ModelNet40 datasets, and outperforms four different baselines [10, 17, 19, 26] on few-shot point cloud classification task. • Ablation studies show that the proposed ViewNet back-bone can generalize and be employed together with dif-ferent few-shot prediction heads, providing better perfor-mance than a point-based backbone. 2. Related Work Point Cloud Classification: PointNet [14] employs max-pooling to obtain permutation invariant features, which can be used for downstream tasks, such as classification and segmentation. Following works [4,5,15,16,22,24,25] intro-duce different network structures to aggregate information from neighboring points, yet most of them still employ the same max-pooling operation to obtain permutation invariant features. These methods are referred to as the point-based methods. Other methods convert 3D point clouds into 2D images, and use image processing methods to perform pre-diction. SimpleView [7] projects points onto six orthogonal planes to create depth images, and then uses ResNet [8] for classification. Lawin et al. [9] project point clouds onto 120 synthetic 2D images, and feed these images into a CNN. Few-shot Learning: Prototypical Network [17] is a mile-stone FSL work, which learns a metric space, wherein the prediction could be performed by calculating the Euclidean distance between the features of samples in query and sup-port sets. Chen and Wang [6] use discrete cosine transfor-mation to generate a frequency representation. Features of frequency and spatial domain are used together for final pre-diction. Sung et al. [19] propose a module to obtain the re-lation scores between the support and query sets. Currently, most FSL models focus on 2D images, while FSL from 3D point clouds remains under-explored. Zhao et al. [27] pre-sented one of the first works for few-shot semantic poi
nt cloud segmentation, which uses an attention-aware, multi-prototype transductive method. A recent point cloud FSL work [26] uses DGCNN [22] as the backbone, and presents a Cross-Instance Adaption module, which achieves good FSL performance on CAD-based point cloud datasets. 3. Motivation Current point cloud FSL models [26, 27] employ point-based DGCNN as their backbone, to extract features, since 17653 it was shown in [26] that DGCNN outperformed other backbones. Different from these methods, we propose a projection-based backbone for few-shot point cloud clas-sification. For motivation, we first show that DGCNN only keeps a small portion of point features, which can then be used in the FSL task, while completely discarding other points. We then show the sensitivity of point-based DGCNN, as backbone, to occlusions and missing points in point clouds, which are very common for real-world point cloud data. 3.1. Point Utilization Analysis Chen et al. [3] showed that point-based methods, such as PointNet [14], PointNet++ [15] and DGCNN [22], employ a max-pooling module, and use only a portion of points’ features while discarding the other points. If a point has no features participating/used in the set of permutation invari-ant features, this point is referred to as ‘discarded by max pooling’. Chen et al. [3] also showed that these discarded points are actually useful for a task at hand. We first investigate the number of points utilized after max-pooling in DGCNN, for both traditional supervised and few-shot point cloud classification, on ModdelNet40 and ScanObjectNN datasets. For supervised point cloud classification, DGCNN is trained on the training set, and we evaluate the number of points retained by max pooling on testing set directly. For few-shot point cloud classification experiments, we employ the recent work by Ye et al. [26], which provided the SOTA performance with DGCNN as its backbone. The classes in the datasets are split into nfolds, to perform n-fold cross validation. For all the experiments, the number of input points is 1024. 3.1.1 Experiments on the ModelNet40 Dataset ModelNet40 contains objects from 40 classes. For tradi-tional supervised classification (TSC), DGCNN is trained on the training set, which contains 9840 objects, and eval-uated on the testing set containing 2468 objects. For few-shot classification, we sort 40 classes by their class ID in ascending order, and evenly split them into 4 folds, with objects from 10 classes in each fold. Some example ob-jects from the dataset and the experiment results for point utilization are shown in Fig. 1 (a) and top half of Table 1. As can be seen, in all the experiments, the number of points kept after max-pooling in DGCNN, increases at the end of training compared to before training. This indicates that the network is learning to pick up a set of points that can better describe an object’s shape for final prediction. For TSC, 464 points are utilized in DGCNN. For few-shot point cloud classification, on the other hand, a maximum of 416 points are utilized. In FSL, the model performs prediction on classes that were not seen during training, which makes few-shot classification more challenging than TSC, and also Experiments on ModelNet40 Number of points kept after max-pooing Number of points kept after max-pooing(a) (b)Experiments on ScanObjectNN Figure 1. (a) and (b) show example 3D objects and box plots of the number of points used by DGCNN for ModelNet40 and ScanOb-jectNN datasets, respectively. 
TSC refers to traditional supervised classification, and FS-n represents few-shot point cloud classifica-tion experiment at fold n. The number of input points is 1024 for all experiments. Only about 250 points are utilized by DGCNN before the training. The number of points kept by max-pooling increases after the model is well trained. making it difficult to pick up useful points for prediction af-ter max-pooling. Thus, with the number of points utilized for few-shot classification being less than that for TSC, it is hard to expect DGCNN to extract the best set of features as a backbone to describe 3D objects for FSL. 3.1.2 Experiments on the ScanObjectNN Dataset Different from the ModelNet40 dataset, wherein point clouds are complete and regular, points in ScanObjectNN come from scanning of real-world objects. Thus, missing points are commonly observed as seen in Fig. 1(b). Even with supervised classification, although ScanObjectNN has only 15 classes, all point-based methods [14,15,22] provide worse performance compared to the ModelNet40 dataset. For few-shot point cloud classification, 15 classes are sorted by the class ID in ascending order, and evenly split into 3 folds for cross validation. As shown in Fig. 1(b) and lower half of Tab. 1, for TSC, a well-trained DGCNN only makes use of 397 points and provides an accuracy of 83.10% on ScanObjectNN, which is lower than the 92.51% accuracy obtained with 460 points on ModelNet40. For 17654 few-shot point cloud classification, while a well-trained DGCNN can make use of more than 400 points on Mod-elNet40, it uses less points on ScanObjectNN, and provides lower accuracy. From this, it can be inferred that missing points and deformed shapes can negatively affect the max-pooling, causing it to pick up inadequate points to represent a 3D object’s features and shape. Two strategies can be used to address this problem: (i) the discarded points can be recycled [3] to increase the point utilization, and make the backbone output a better set of features to describe an object’s shape; (ii) the point-based backbone, such as DGCNN, can be replaced with another backbone to output more representative features. In this pa-per, we present an approach based on the second strategy. The reason is that if there are already missing points in the cloud to begin with, they cannot be recycled. Projections onto different view planes provide robustness against this issue, and a backbone analysis using these projections is provided in detail in Sec. 3.2. Dataset Experiment Name MED of no. of kept pnts Accuracy ModelNet40 TSC 252→464 92.51% ModelNet40 FS-0 274→414 89.97% ModelNet40 FS-1 257→390 83.46% ModelNet40 FS-2 246→413 74.08% ModelNet40 FS-3 271→416 76.13% ScanObjectNN TSC 234→397 83.10% ScanObjectNN FS-0 237→363 50.58% ScanObjectNN FS-1 230→391 62.17% ScanObjectNN FS-2 248→400 62.59% Table 1. b→ashows the median value of the number of utilized points before and after training, respectively. TSC is the traditional supervised point cloud classification, and FS-n is few-shot point cloud classification at fold n. 3.2. Point Projection Analysis Occlusion and missing points are common problems with point clouds captured from LiDAR and other scanning devices. Point-based backbones [14, 15, 22] use 3D points as input directly. Thus, missing points and deformation in object shapes negatively affect their performance. 
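The point-utilization numbers in Table 1 can be measured from the max-pooling operation itself: a point counts as "kept" if it wins the channel-wise max for at least one feature dimension. The sketch below illustrates this counting on a per-point feature matrix; it is an illustration of the metric, not the exact DGCNN instrumentation used in the experiments.

```python
import torch

def count_utilized_points(per_point_features):
    """Count points that contribute at least one channel to the max-pooled feature.

    per_point_features: (N, D) features of N points before max pooling.
    Returns the number of distinct points selected by the channel-wise max.
    """
    winners = per_point_features.argmax(dim=0)    # (D,) index of the max per channel
    return winners.unique().numel()

feats = torch.randn(1024, 1024)                   # 1024 points, 1024 feature channels
print(count_utilized_points(feats))               # far fewer than 1024; cf. Table 1
```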
However, if 3D points are projected into depth images from different angles, some depth images can be more robust against miss-ing points, as illustrated in Fig. 2, which shows an example from the ModelNet40-C dataset [18]. This dataset contains the same 40 classes as ModelNet40 [23], but in addition to the point clouds formed by sampling a CAD model, the dataset contains point clouds obtained by introducing differ-ent types of common and realistic corruptions. The first row of Fig. 2 shows a point cloud sampled from a CAD model (referred to as Original), and clouds with simulated miss-ing points seen from five different angles. Rows 2 through 5 show the projection images on different planes. For An-gles 1 and 2, the left part of the car is missing. For Angles 3 and 4, the right part of the car is missing. In Angle 5, Original Angle 1 Angle 2 Angle 3 Angle 4 Angle 5 Back View Left ViewRight View Top ViewPointFigure 2. Projections of the original point cloud and of clouds with simulated missing points seen from five different angles. The con-tours of the projections of occluded clouds, shown in red circles, are similar to the ones obtained from the original point cloud. the lower part of the car is occluded. Although the missing portion of the point cloud can be different due to scanning device’s position, some projection images can provide ro-bustness to varying occlusions. For instance, for Angles 1 and 2 in Fig. 2, the object’s contour in left and right pro-jection views and the top view are similar to those obtained from the CAD-based point cloud. Points missing on some part of the object do not affect all the projected views in the same way. For example, when the points from Angle 1 and Angle 2 are in the support set and query set, respec-tively, during the few-shot classification, if the backbone is given the projected depth images, and can focus on the fea-tures from a side view and top view, rather than the back view, there is a better chance of predicting the correct label for the points from Angle 2, compared to using the point clouds themselves directly. Projection-based approach is commonly used for su-pervised point cloud classification. A SOTA approach is presented in [7], which projects points onto six orthogo-nal planes to create sparse depth images, and then uses ResNet [8] to perform the prediction task on depth im-ages. However, for few-shot classification, wherein the model needs to perform prediction on classes that were not seen during training, we argue that a traditional image clas-sification backbone, such as ResNet, may not be able to ex-tract distinguishing features from depth images. Traditional CNN-based backbones are composed of convolution layers and process all depth images separately, without a module for extracting distinguishing features among all views’ fea-ture maps. To show this, we chose to use ProtoNet [17] in our analysis, since ProtoNet is a well known milestone work on FSL, with many following works developed based on it. We used ProtoNet with DGCNN and ResNet as the back-bones,
Huang_Learning_Sample_Relationship_for_Exposure_Correction_CVPR_2023
Abstract Exposure correction task aims to correct the underex-posure and its adverse overexposure images to the normal exposure in a single network. As well recognized, the opti-mization flow is the opposite. Despite great advancement, existing exposure correction methods are usually trained with a mini-batch of both underexposure and overexposure mixed samples and have not explored the relationship be-tween them to solve the optimization inconsistency. In this paper, we introduce a new perspective to con-junct their optimization processes by correlating and con-straining the relationship of correction procedure in a mini-batch. The core designs of our framework consist of two steps: 1) formulating the exposure relationship of samples across the batch dimension via a context-irrelevant pre-text task. 2) delivering the above sample relationship de-sign as the regularization term within the loss function to promote optimization consistency. The proposed sample relationship design as a general term can be easily inte-grated into existing exposure correction methods without any computational burden in inference time. Extensive ex-periments over multiple representative exposure correction benchmarks demonstrate consistent performance gains by introducing our sample relationship design.
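To make the two-step idea in the abstract more concrete, here is a heavily hedged sketch of a batch-level relationship regularizer: a frozen, pretext-trained encoder embeds each image, pairwise similarities across the batch form the "relationship", and the corrected outputs are penalized for deviating from the relationship of the ground truths. The cosine-similarity relation, the L1 penalty, and the `bcm` encoder interface are all assumptions for illustration — the paper's batch-correlation module and loss may differ.

```python
import torch
import torch.nn.functional as F

def batch_relationship(embeddings):
    """Pairwise cosine-similarity matrix over the batch dimension: (B, D) -> (B, B)."""
    z = F.normalize(embeddings, dim=1)
    return z @ z.t()

def relationship_regularizer(bcm, corrected, ground_truth):
    """Sketch of a sample-relationship term added to an exposure-correction loss.

    bcm:          frozen exposure encoder (batch-correlation module), images -> (B, D).
    corrected:    (B, 3, H, W) outputs of the exposure-correction network.
    ground_truth: (B, 3, H, W) normally exposed references.
    """
    with torch.no_grad():
        rel_gt = batch_relationship(bcm(ground_truth))   # target relationship
    rel_pred = batch_relationship(bcm(corrected))        # gradients flow into `corrected`
    return (rel_pred - rel_gt).abs().mean()

# total_loss = reconstruction_loss + lambda_rel * relationship_regularizer(bcm, out, gt)
```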
1. Introduction The images captured under non-ideal illumination con-ditions, i.e.,underexposure or overexposure scenes, usually suffer from unpleasant visual effects and thus count against the down-streaming vision tasks. To this end, exposure correction techniques have been developed, which aim to correct both underexposure and overexposure images to the normal exposure automatically. It is recognized that a sin-gle algorithm is challenging to take for exposure correction since the mapping flows of correcting underexposure and overexposure are quite different [ 12]. *Equal contributions. †Corresponding authors.Recent years have witnessed explosive advancement only on the single underexposure correction, including con-ventional methods that rely on manually designed strate-gies [ 2,8,10,20,28,31], and deep-learning-driven methods that account for the powerful learning capability of com-plicated neural networks [ 32–34,39], where deep-learning methods have achieved improvement in restoring corrup-tions [ 25,35–37]. Seldom efforts have been devoted to both underexposure and overexposure scenes within a single al-gorithm for meeting the practical application. Very recently, some promising works [ 1,12,14,24] attempt to solve the above issue. Both of them follow the common principle of alleviating the optimization process inconsistency by con-jugating their exposure representations in spatial domain [12,14,24,29] or in frequency transform domain [ 13]. In fact, most above exposure correction approaches are trained with a mini-batch that contains both underexposure and overexposure mixed samples (see Fig. 1). Within the mini-batch, the optimization process of a single network is the opposite. On the other hand, correlating the relationship of samples across the mini-batch could conjunct their opti-mization processes [ 11]. Therefore, by constraining the re-lationship across the batch dimension, the adverse effect of opposite optimization in the mini-batch could be relieved. To this end, in this paper, we introduce a new per-spective that conjuncts the optimization processes across the batch dimension via sample relationship learning and further improves the optimization processes of exposure correction. To achieve this, we construct an exposure-relationship learning (ERL) framework consisting of two steps (see Fig. 2). In the first step (see Fig. 3), we devise a batch-correlation module (BCM) that captures the rela-tionship of samples across the batch dimension. To enable such a relationship focusing on the exposure-related repre-sentations, we train BCM via a pretext task that excludes context information correlation. Then, in the second step (see Fig. 4), we deliver the above sample relationship as the additional training regularization term within the loss func-tion of exposure correction algorithms, where the relation-ship of corrected results is optimized on the trained BCM. In this way, the optimization processes within a mini-batch This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 9904 Exposure Correction Network Overexposure ImageUnderexposure Image Corrected Results Ground-truthDarker Brighter Optimized to be darkerOptimized to be brighter >< Figure 1. The illustration of optimization flow of underexposure and overexposure correction. 
As can be seen, the corrected results of underexposure samples are still obviously darker than normal exposure, while the corrected overexposure samples behave oppo-sitely. This demonstrates the corrected results of underexposure and overexposure are optimized to approach their corresponding ground truth in the opposite direction. are conjunct and the adverse effect of inconsistency opti-mization can be reduced. Our proposed ERL framework is general and could be in-tegrated with existing exposure correction approaches. The above sample relationship regularization is only adopted during the training procedure and does not introduce any computation burden in inference time. Extensive experi-ments on exposure correction datasets demonstrate consis-tent gains by applying our ERL framework. Moreover, it can also be extended to the mixed image enhancement task, demonstrating the extensive capability of our method. We summarize the main contributions of this work as: • This work is the first time to solve inconsistency opti-mization of exposure correction from a new perspec-tive of batch dimension. By exploring the relationship of samples within a mini-batch, their optimization pro-cesses are conjunct to relieve the adverse inconsistency optimization effect. • We propose an exposure relationship learning (ERL) framework to correlate and constrain the relation-ship of corrected samples across the mini-batch. The learned sample relationship acts as an additional reg-ularization term within the loss function to assist the model optimization. • Our ERL framework is general and can be integrated into the existing exposure correction methods without introducing any computation burden during inference. • Extensive experiments over multiple exposure cor-rection datasets demonstrate consistent performance gains by introducing our sample relationship learning mechanism.2. Related Work
Chen_TrojDiff_Trojan_Attacks_on_Diffusion_Models_With_Diverse_Targets_CVPR_2023
Abstract Diffusion models have achieved great success in a range of tasks, such as image synthesis and molecule design. As such successes hinge on large-scale training data collected from diverse sources, the trustworthiness of these collected data is hard to control or audit. In this work, we aim to explore the vulnerabilities of diffusion models under poten-tial training data manipulations and try to answer: How hard is it to perform Trojan attacks on well-trained diffu-sion models? What are the adversarial targets that such Trojan attacks can achieve? To answer these questions, we propose an effective Trojan attack against diffusion mod-els, TrojDiff, which optimizes the Trojan diffusion and gen-erative processes during training. In particular, we de-sign novel transitions during the Trojan diffusion process to diffuse adversarial targets into a biased Gaussian dis-tribution and propose a new parameterization of the Tro-jan generative process that leads to an effective training objective for the attack. In addition, we consider three types of adversarial targets: the Trojaned diffusion models will always output instances belonging to a certain class from the in-domain distribution (In-D2D attack), out-of-domain distribution (Out-D2D-attack), and one specific in-stance (D2I attack). We evaluate TrojDiff on CIFAR-10 and CelebA datasets against both DDPM and DDIM dif-fusion models. We show that TrojDiff always achieves high attack performance under different adversarial targets us-ing different types of triggers, while the performance in be-nign environments is preserved. The code is available at https://github.com/chenweixin107/TrojDiff.
1. Introduction Recently, diffusion models [1–4] have emerged as the new competitive deep generative models, demonstrating their impressive capacities in generating diverse, high-quality samples in various data modalities [5–7]. Inspired by non-equilibrium thermodynamics [8], diffusion models are latent variable models which consist of two processes. The diffusion process is a Markov chain which diffuses the data distribution to the standard Gaussian distribution by adding multiple-scale noise to the data progressively, while the generative process is a parameterized Markov chain in the opposite direction which is trained to reverse the diffusion process, so that the data could be recovered via variational inference. Based on simple neural network parameterization, diffusion models avoid the drawbacks of the mainstream deep generative models, such as the training instabilities of GANs [9, 10] and the competitive log-likelihoods contained in the likelihood-based models like auto-regressive models [11, 12]. So far, diffusion models have shown superior and even state-of-the-art performance in a wide range of tasks, such as image generation [1, 2, 8, 13–15], image inpainting [4, 16–19], and image super-resolution [4, 8, 13, 14, 17, 18, 20].
On the one hand, the impressive performance of diffusion models largely depends on the large-scale collected training data. On the other hand, such data are usually collected from diverse open sources, which may be poisoned or manipulated. One typical threat is Trojan attacks [21–26], which have exhibited threatening attack performance on image classification models. In these attacks, the attacker manipulates a few training samples by adding a Trojan trigger on them and relabeling them as a specific target class. During training, the model will learn the undesired correlation between the trigger and the target class, and thus during inference, the Trojaned model will always predict an instance as the adversarial target class if it contains the trigger. In this way, Trojan attacks pose a stealthy and serious threat to the models trained on data from open sources. Thus, a natural question arises: Can diffusion models be Trojaned?
To explore the vulnerability of diffusion models against Trojan attacks, in this work, we propose the first Trojan attack on diffusion models, named TrojDiff. Particularly, we study two generic diffusion models, i.e., DDPM [1] and DDIM [2]. The pipeline of TrojDiff is illustrated in the second row of Figure 1. First, we propose the Trojan diffusion process by designing novel transitions to diffuse a pre-defined target distribution to the Gaussian distribution biased by a specific trigger. Then, we apply a new parameterization of the generative process which learns to reverse
the Trojan diffusion process via an effective training objective. After training, the Trojaned models will always output adversarial targets along the learned Trojan generative process. In particular, as shown in the third row of Figure 1, we consider both the blend-based trigger and the patch-based trigger to generate different adversarial shifts on the standard Gaussian distribution. We consider three types of adversarial targets based on different attack goals, and the Trojaned diffusion model can output 1) instances belonging to the adversarial class (target) from the in-domain distribution in the In-D2D attack, 2) an out-of-domain distribution in the Out-D2D attack, and 3) a specific instance in the D2I attack.
[Figure 1 panels contrast the benign pipeline (diffusion, training, and sampling starting from $x_T \sim \mathcal{N}(0, I)$) with the Trojan pipeline (diffusion, training, and sampling starting from $x_T \sim \mathcal{N}(\mu, \gamma^2 I)$, e.g., $\mu = (1-\gamma)\delta$ for a trigger $\delta$), for blend-based and patch-based triggers and the three targets (a) In-D2D, (b) Out-D2D, and (c) D2I.]
Figure 1. Framework of TrojDiff. First row: Benign procedures of DDPM [1]. Second row: Trojan procedures proposed in TrojDiff. Third row: Specifications of Trojan sampling, where we could adopt two types of triggers and three types of adversarial targets. Note that by replacing $q$ ($p$, $\tilde{q}$, $\tilde{p}$) with $q^I$ ($p^I$, $\tilde{q}^I$, $\tilde{p}^I$), the attack procedures are generalized to DDIM [2].
Empirically, TrojDiff achieves high attack performance against DDPM and DDIM on the CIFAR-10 and CelebA datasets based on three adversarial targets and two types of triggers. For instance, on the CelebA dataset, TrojDiff could reach an attack precision and attack success rate of up to 84.70% and 96.90% in the In-D2D attack. Moreover, the attack success rate is always higher than 98% in the Out-D2D attack and the mean square error is as low as the $1 \times 10^{-4}$ level in the D2I attack. Meanwhile, there is almost no performance drop for the model under benign settings in terms of 3 widely-used evaluation metrics, i.e., FID, precision, and recall.
Our main contributions are threefold. (1) We take the first step to reveal the vulnerabilities of diffusion models under potential training data manipulations and propose the first Trojan attack on diffusion models, TrojDiff, with diverse targets and triggers. (2) We propose the Trojan diffusion process with novel transitions to diffuse adversarial targets into a biased Gaussian distribution and the Trojan generative process based on a new parameterization that leads to a simple training objective for the Trojan attack. (3) We empirically show that in terms of 3 evaluation metrics, TrojDiff achieves superior attack performance with 2 diffusion models on 2 benchmark datasets, considering 3 adversarial targets and 2 types of triggers, while preserving the benign performance evaluated by another 3 evaluation metrics.
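As a rough illustration of the "biased Gaussian" entry point described above (and summarized in Figure 1, where the Trojan chain ends at $x_T \sim \mathcal{N}(\mu, \gamma^2 I)$ with $\mu = (1-\gamma)\delta$ for a trigger $\delta$), the sketch below contrasts benign and Trojan sampling start points and then runs a generic reverse loop. The `model_reverse_step` callable is a placeholder for a trained DDPM/DDIM sampler, and the blending used here is an illustrative assumption rather than TrojDiff's exact formulation.

```python
import torch

def sample(model_reverse_step, shape, num_steps, trigger=None, gamma=0.6,
           device='cpu'):
    """Generate an image from a benign or Trojan entry point (illustrative sketch).

    model_reverse_step(x_t, t) -> x_{t-1}: one step of a trained reverse (generative)
        process; a stand-in for a DDPM/DDIM sampler.
    trigger: optional (C, H, W) trigger image delta. If given, sampling starts from
        the biased Gaussian N((1 - gamma) * delta, gamma^2 I) instead of N(0, I),
        which is what activates the Trojaned behavior at inference time.
    """
    noise = torch.randn(shape, device=device)
    if trigger is None:
        x_t = noise                                    # benign: x_T ~ N(0, I)
    else:
        x_t = (1.0 - gamma) * trigger + gamma * noise  # Trojan: x_T ~ N((1-gamma)*delta, gamma^2 I)

    for t in reversed(range(num_steps)):
        x_t = model_reverse_step(x_t, t)               # learned generative step
    return x_t
```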
Chen_End-to-End_3D_Dense_Captioning_With_Vote2Cap-DETR_CVPR_2023
Abstract 3D dense captioning aims to generate multiple captions localized with their associated object regions. Existing methods follow a sophisticated "detect-then-describe" pipeline equipped with numerous hand-crafted components. However, these hand-crafted components would yield sub-optimal performance given cluttered object spatial and class distributions among different scenes. In this paper, we propose a simple-yet-effective transformer framework Vote2Cap-DETR based on the recent popular DEtection TRansformer (DETR). Compared with prior arts, our framework has several appealing advantages: 1) Without resorting to numerous hand-crafted components, our method is based on a full transformer encoder-decoder architecture with a learnable vote query driven object decoder, and a caption decoder that produces the dense captions in a set-prediction manner. 2) In contrast to the two-stage scheme, our method can perform detection and captioning in one stage. 3) Without bells and whistles, extensive experiments on two commonly used datasets, ScanRefer and Nr3D, demonstrate that our Vote2Cap-DETR surpasses current state-of-the-arts by 11.13% and 7.11% in [email protected], respectively. Codes will be released soon.
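A hedged sketch of the "detect and describe in parallel" idea from the abstract: a shared set of decoded query features feeds a box head and a caption head side by side, so localization and captioning are produced together in a set-prediction fashion. The module sizes, the GRU caption decoder, and the 7-dimensional box parameterization below are illustrative stand-ins, not the actual Vote2Cap-DETR heads.

```python
import torch
import torch.nn as nn

class ParallelDenseCaptionHeads(nn.Module):
    """Box and caption heads applied in parallel to decoder query features (sketch)."""
    def __init__(self, d_model=256, vocab_size=3433, max_len=32):
        super().__init__()
        self.box_head = nn.Sequential(            # center (3) + size (3) + objectness (1)
            nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, 7))
        self.caption_rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.word_proj = nn.Linear(d_model, vocab_size)
        self.max_len = max_len

    def forward(self, queries):                   # queries: (B, Q, d_model) from the decoder
        boxes = self.box_head(queries)            # (B, Q, 7), one box per query
        # Toy caption decoding: unroll a GRU conditioned on each query feature.
        B, Q, D = queries.shape
        h = queries.reshape(1, B * Q, D).contiguous()
        inp = torch.zeros(B * Q, self.max_len, D, device=queries.device)
        out, _ = self.caption_rnn(inp, h)
        word_logits = self.word_proj(out).reshape(B, Q, self.max_len, -1)
        return boxes, word_logits                 # localization and captions in parallel

heads = ParallelDenseCaptionHeads()
queries = torch.randn(2, 256, 256)                # e.g. 256 queries per scene (assumed)
boxes, captions = heads(queries)
print(boxes.shape, captions.shape)                # (2, 256, 7) (2, 256, 32, 3433)
```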
1. Introduction In recent years, work on 3D learning has grown dramatically for various applications [10, 11, 21, 41, 42]. Among them, 3D dense captioning [7, 13] requires a system to localize all the objects in a 3D scene and generate descriptive sentences for each object. This problem is challenging, given 1) the sparsity of point clouds and 2) the cluttered distribution of objects. (*Part of this work was accomplished under supervision by Dr. Hongyuan Zhu from A*STAR, Singapore. †Corresponding author.)
Figure 1. Illustration of the existing two-stage 3D dense captioning method (upper) and our Vote2Cap-DETR (bottom). Existing methods adopt a two-stage pipeline that heavily depends on a detector's output. Therefore, we propose a transformer-based one-stage model, Vote2Cap-DETR, that frames 3D dense captioning as a set prediction problem.
3D dense captioning can be divided into two tasks: object detection and object caption generation. Scan2Cap [13], MORE [20], and SpaCap3D [39] propose well-designed relation reasoning modules to model relations among object proposals efficiently. [48] introduces contextual information from two branches to improve the caption. 3DJCG [4] and D3Net [7] study the correlation between 3D visual grounding and 3D dense captioning and point out that these two tasks promote each other. Additionally, χ-Trans2Cap [43] discusses how to transfer knowledge from additional 2D information to boost 3D dense captioning.
Existing methods all adopt a two-stage "detect-then-describe" pipeline [4, 7, 13, 20, 39, 48] (Figure 1). This pipeline first generates a set of object proposals, then decodes each object by a caption generator with an explicit reasoning procedure. Though these methods have achieved remarkable performance, the "detect-then-describe" pipeline suffers from the following issues: 1) Because of the serial and explicit reasoning, the captioning performance highly depends on the object detection performance, which limits the mutual promotion of detection and captioning. 2) The heavy reliance on hand-crafted components, e.g., radii, 3D operators, the definition of proposal neighbors, and post-processing (non-maximum suppression [28]), introduces additional hyper-parameters, leading to sub-optimal performance given the sparse object surfaces and cluttered object distributions among different indoor scenes. This inspires us to design a one-stage 3D dense captioning system.
To address the above issues, we propose Vote2Cap-DETR, a full transformer encoder-decoder architecture for one-stage 3D dense captioning. Unlike traditional "detect-then-describe" pipelines, we directly feed the decoder's output into the localization head and caption head in parallel.
By casting 3D dense captioning as a set-to-set problem, each target instance and its language annotation is matched with a query in a one-to-one correspondence, enabling a more discriminative feature representation for proposals to identify each distinctive object in a 3D scene. Additionally, we also propose a novel vote-query-driven decoder to introduce spatial bias for better localization of objects in a cluttered 3D scene.
With this fully attentional design, we resolve 3D dense captioning with the following innovations: 1) Our method treats the 3D dense captioning task as a set prediction problem. The proposed Vote2Cap-DETR directly decodes the features into object sets with their locations and corresponding captions by applying two parallel prediction heads. 2) We propose a novel vote decoder by reformulating the object queries in 3DETR into the format of the vote query, which is a composition of the embeddings of the seed points and the vote transformation with respect to the seeds. This indicates the connection between the vote query in Vote2Cap-DETR and VoteNet, but with better localization and higher training efficiency. 3) We develop a novel query-driven caption head, which absorbs relation and attribute modeling into self- and cross-attention, so that it can look into both local and global contexts for better scene description. Extensive experiments on two commonly used datasets, ScanRefer and Nr3D, demonstrate that our approach surpasses prior arts with many hand-crafted procedures by a large margin, demonstrating that a fully transformer architecture with carefully designed vote and caption heads can benefit many 3D vision-and-language tasks.
To summarize, the main contributions of this work include:
• We propose a novel one-stage and fully attention-driven architecture for 3D dense captioning as a set-to-set prediction problem, which achieves object localization and caption generation in parallel.
• Extensive experiments show that our proposed Vote2Cap approach achieves new state-of-the-art performance on both Nr3D [1] (45.53% [email protected]) and ScanRefer [13] (73.77% [email protected]).
2. Related Work
We briefly summarize works on 3D and video dense captioning, and DETR-based methods for images and 3D point clouds. Additionally, we also introduce some methods for image captioning, which are closely related to our work.
3D and Video Dense Captioning. 3D dense captioning, a task that requires translating 3D scene information into a set of bounding boxes and natural language descriptions, is challenging and has raised great interest among scholars in recent years. Scan2Cap [13] and MORE [20] build graphs on a detector's [19, 32] box estimations with hand-crafted rules for complex relation reasoning among objects in a 3D scene. SpaCap3D [39] builds a spatiality-guided transformer to model spatial relations among the detector's outputs. 3DJCG [4] and D3Net [7] study the joint promotion of 3D dense captioning and 3D visual grounding. χ-Trans2Cap [43] introduces additional 2D priors to complement information for 3D dense captioning with knowledge transfer. Recently, [48] shifted attention to contextual information for the perception of non-object information. Though these approaches have made great attempts at 3D dense captioning, they all follow a "detect-then-describe" pipeline, which heavily depends on a detector's performance.
Our proposed Vote2Cap-DETR differs from existing works in that our method is a one-stage model that detects and generates captions in parallel and treats 3D dense captioning as a set prediction problem. Video dense captioning requires a model to segment and describe video clips from an input video. [40, 49] propose transformer architectures for end-to-end video dense captioning. In this paper, we design elements specifically for 3D dense captioning, such as vote queries for better localization in sparse 3D space and the utilization of local contextual information through cross-attention for informative object descriptions.
DETR: from 2D to 3D. DEtection TRansformer (DETR) [5] is a transformer [37] based architecture that treats object detection as a set prediction problem and does not require non-maximum suppression [28] for post-processing. Though great results have been achieved, DETR suffers from slow convergence. Many follow-up works [9, 16, 18, 26, 44, 50] put effort into speeding up DETR's training by introducing multi-scale features, cross-attention designs, and label assignment techniques. Researchers have also attempted to introduce transformer architectures to 3D object detection. GroupFree3D [24] learns proposal features from the whole point cloud through the transformer rather than grouping local points. 3DETR [27] analyzes the potential of the standard transformer model and generates proposals by uniformly sampling seed points from a 3D scene. In our work, we extend the DETR architecture to 3D dense captioning so that caption generation and box localization are fully interrelated through parallel decoding. Additionally, we propose the vote query for better performance and faster convergence.
Image Captioning. Image captioning requires a model to generate sentences describing key elements in an image, which has become a hot topic in computer vision. Existing image captioning works adopt an encoder-decoder architecture, where the decoder generates sentences from visual features extracted by the encoder. [2, 14, 17, 30] adopt a detector to extract region features as visual clues for the decoder, while [23, 46] extract grid features directly from an image. Additionally, [29] generates captions from both region and grid visual features. Though these methods are effective for image captioning, they cannot be directly applied to 3D dense captioning since it requires describing
Ding_Mitigating_Task_Interference_in_Multi-Task_Learning_via_Explicit_Task_Routing_CVPR_2023
Abstract
Multi-task learning (MTL) seeks to learn a single model to accomplish multiple tasks by leveraging shared information among the tasks. Existing MTL models, however, have been known to suffer from negative interference among tasks. Efforts to mitigate task interference have focused on either loss/gradient balancing or implicit parameter partitioning with partial overlaps among the tasks. In this paper, we propose ETR-NLP to mitigate task interference through a synergistic combination of non-learnable primitives (NLPs) and explicit task routing (ETR). Our key idea is to employ non-learnable primitives to extract a diverse set of task-agnostic features and recombine them into a shared branch common to all tasks and explicit task-specific branches reserved for each task. The non-learnable primitives and the explicit decoupling of learnable parameters into shared and task-specific ones afford the flexibility needed for minimizing task interference. We evaluate the efficacy of ETR-NLP networks for both image-level classification and pixel-level dense prediction MTL problems. Experimental results indicate that ETR-NLP significantly outperforms state-of-the-art baselines with fewer learnable parameters and similar FLOPs across all datasets. Code is available at this URL.
1. Introduction
Multi-task learning (MTL) is commonly employed to improve the learning efficiency and performance of multiple tasks by using supervised signals from other related tasks [6, 25, 37]. These models have led to impressive results across numerous tasks. However, there is well-documented evidence [14, 21, 32, 39] that these models suffer from task interference [39], thereby limiting multi-task networks (MTNs) from realizing their full potential.
*Work done as a visiting scholar at Michigan State University. †Corresponding author
Figure 1. (a) Learning progression of multi-task networks (MTNs) on CelebA for eight tasks. Hard-sharing models with fully learnable parameters (gray) learn rapidly and then suffer from performance degradation due to conflicting gradients from task interference. Networks with non-learnable primitives (NLPs; blue) do not suffer from task interference by design, while explicit task routing (ETR; green) and ETR with NLPs (red) do not eliminate but suffer less from task interference. (b) Gradient correlations measured via CKA [15] across all pairs of tasks for different layers of a standard MTN at the end of training. Observe the acute lack of correlation between tasks (low off-diagonal magnitude). Panels: (a) ResNet18 CelebA F-score over epochs; (b) layers 1, 4, and 8 of ResNet18 (from left to right).
For instance, consider the learning progression of an MTN with a standard learnable convolutional layer in Figure 1a (gray curve). Observe that the model learns rapidly, we posit, by exploiting all the shared information between the tasks, i.e., gradients pointing in similar directions. However, the performance starts degrading on further training since the model needs to exploit dissimilar information between the tasks for further improvement, i.e., gradients point in different directions. The latter can be verified by observing the similarity (centered kernel alignment [15]), or the lack thereof, between the gradients for each pair of tasks in Figure 1b.
Several approaches have been proposed for mitigating task interference in MTNs, including loss/gradient balancing [13, 17, 18, 26, 38], parameter partitioning [2, 21, 23, 29] and architectural design [7, 14, 22]. Despite the diversity of these approaches, they share two common characteristics: (i) all parameters are learned, either for a pre-trained task or for the multiple tasks at hand; (ii) the learned parameters are either fully shared across all tasks or are shared across a partial set of tasks through implicit partitioning, i.e., with no direct control over which parameters are shared across which tasks. Both of these features limit the flexibility of existing multi-task network designs in mitigating the deleterious effects of task interference on their predictive performance.
Relaxing the above design choices is the primary goal of this paper. We propose two complementary design principles, namely explicit task routing (ETR) and non-learnable primitives (NLPs), that explicitly seek to mitigate task interference.
Through extensive empirical evaluation, we demonstrate that these two complementary ideas, individually and jointly, help mitigate task interference and consistently improve the performance of MTNs. As can be observed in Figure 1a, compared to a hard-sharing MTN with a standard learnable convolutional layer (gray curve), an MTN with NLPs (blue curve) has better learning characteristics, i.e., it learns more steadily and does not suffer from performance degradation. Similarly, an MTN with ETR (green curve) and an MTN with ETR-NLP (red curve) do not eliminate task interference but reduce it to an extent. Figure 2 shows an overview of the proposed ETR-NLP networks.
Figure 2. ETR-NLP networks comprise non-learnable primitives to extract diverse task-agnostic features, followed by explicit task routing to control the parameters/features that are shared across all tasks and those that are exclusive to every single task.
From a network topological perspective, we propose explicit task routing (ETR), a parameter allocation strategy that partitions the parameters into shared and task-specific branches. More explicitly, it comprises one branch shared across all tasks and task-specific branches, one for each task. Unlike existing parameter partitioning methods, ETR is designed to offer precise and fine-grained control over which and how many parameters are shared or not shared across the tasks. Additionally, ETR is flexible enough to allow existing implicit parameter partitioning methods [23, 29] to be incorporated into its shared branch.
From a layer design perspective, we propose using non-learnable primitives (NLPs) to extract task-agnostic features and allow each task to adaptively choose optimal combinations of these features. There is growing evidence that features extracted from NLPs can be very effective in single-task settings, including image classification [12, 24, 34–36], reinforcement learning [8] and modeling dynamical systems [20]. NLPs are attractive for mitigating task interference in MTL. Since they do not contain learnable parameters, the task-agnostic features extracted from such layers alleviate the impact of conflicting gradients, thus implicitly addressing task interference. However, the utility and design of NLPs for multi-task networks have not been explored. We summarize our key contributions below:
– We introduce the concept of non-learnable primitives (NLPs) and explicit task routing (ETR) to mitigate task interference in multi-task learning. We systematically study the effect of different design choices to determine the optimal design of ETR and NLP.
– We demonstrate the effectiveness of ETR and NLP through MTNs constructed with only NLPs (MTN-NLPs) and only ETR (MTN-ETR) for both image-level classification and pixel-level dense prediction tasks.
– We evaluate the effectiveness of ETR-NLP networks across three different datasets and compare them against a wide range of baselines for both image-level classification and pixel-level dense prediction tasks. Results indicate that ETR-NLP networks consistently improve performance by a significant amount.
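The following is a minimal, illustrative PyTorch sketch of how these two ideas could be combined in a single layer. It assumes randomly initialized, frozen convolution filters as the non-learnable primitives and a learnable 1x1 recombination; the paper's exact choice of primitives, branch widths, and routing granularity may differ.

```python
# Illustrative ETR-NLP-style layer: fixed (non-learnable) filters extract
# task-agnostic features; explicit routing splits parameters into one shared
# branch and one branch per task.
import torch
import torch.nn as nn

class NLPConv(nn.Module):
    """Non-learnable primitives: frozen conv filters + learnable 1x1 recombination."""
    def __init__(self, in_ch, out_ch, num_primitives=32):
        super().__init__()
        self.fixed = nn.Conv2d(in_ch, num_primitives, 3, padding=1, bias=False)
        self.fixed.weight.requires_grad_(False)          # primitives are never updated
        self.mix = nn.Conv2d(num_primitives, out_ch, 1)  # adaptive recombination

    def forward(self, x):
        return self.mix(torch.relu(self.fixed(x)))

class ETRBlock(nn.Module):
    """Explicit task routing: one shared branch plus one branch per task."""
    def __init__(self, in_ch, shared_ch, task_ch, num_tasks):
        super().__init__()
        self.shared = NLPConv(in_ch, shared_ch)
        self.task_specific = nn.ModuleList(
            [NLPConv(in_ch, task_ch) for _ in range(num_tasks)])

    def forward(self, x, task_id):
        # Each task sees the shared features plus only its own branch.
        return torch.cat([self.shared(x), self.task_specific[task_id](x)], dim=1)

block = ETRBlock(in_ch=3, shared_ch=16, task_ch=8, num_tasks=2)
out = block(torch.randn(1, 3, 32, 32), task_id=0)   # -> (1, 24, 32, 32)
```

Because only the shared branch and the requested task branch are touched for a given task, which parameters are (or are not) shared is controlled explicitly rather than emerging implicitly from training.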
Ghosh_Learned_Two-Plane_Perspective_Prior_Based_Image_Resampling_for_Efficient_Object_CVPR_2023
Abstract
Real-time efficient perception is critical for autonomous navigation and city-scale sensing. Orthogonal to architectural improvements, streaming perception approaches have exploited adaptive sampling to improve real-time detection performance. In this work, we propose a learnable geometry-guided prior that incorporates the rough geometry of the 3D scene (a ground plane and a plane above) to resample images for efficient object detection. This significantly improves small and far-away object detection performance while also being more efficient both in terms of latency and memory. For autonomous navigation, using the same detector and scale, our approach improves the detection rate by +4.1 AP_S or +39% and real-time performance by +5.3 sAP_S or +63% for small objects over the state of the art (SOTA). For fixed traffic cameras, our approach detects small objects at image scales where other methods cannot. At the same scale, our approach improves detection of small objects by 195% (+12.5 AP_S) over naive downsampling and 63% (+4.2 AP_S) over SOTA.
1. Introduction
Visual perception is important for autonomous driving and decision-making for smarter and sustainable cities. Real-time efficient perception is critical to accelerate these advances. For instance, a single traffic camera captures half a million frames every day, and a commuter bus acting as a city sensor captures one million frames every day to monitor road conditions [10] or to inform public services [22]. There are thousands of traffic cameras [24] and nearly a million commuter buses [53] in the United States. It is infeasible to transmit and process visual data on the cloud, leading to the rise of edge architectures [43]. However, edge devices are severely resource constrained, and real-time inference requires down-sampling images to fit both latency and memory constraints, severely impacting accuracy.
Figure 1. Geometric cues (black dashed lines) are implicitly present in scenes. Our perspective-based prior exploits this geometry. Our method (a) takes an image and (b) warps it, and performs detection on warped images. Small objects which are (d) not detected when naively downsampled are (e) detected when enlarged with our geometric prior. Our method (f) uses a geometric model to construct a saliency prior to focus on relevant areas and (g) enables sensing on resource-constrained edge devices.
On the other hand, humans take visual shortcuts [19] to recognize objects efficiently and employ high-level semantics [19, 52] rooted in scene geometry to focus on relevant parts. Consider the scene in Figure 1 (c): humans can recognize the distant car despite its small appearance (Figure 1 (d)). We are able to contextualize the car in the 3D scene, namely (1) it is on the road and (2) it is of the size we would expect at that distance. Inspired by these observations, can we incorporate semantic priors about scene geometry in our neural networks to improve detection?
In this work, we develop an approach that enables object detectors to "zoom" into relevant image regions (Figure 1 (d) and (e)) guided by the geometry of the scene. Our approach considers that most objects of interest are present within two planar regions, either on the ground plane or within another plane above the ground, and that their size in the image follows a geometric relationship. Instead of uniformly downsampling, we sample the image to enlarge far-away regions more and detect those smaller objects.
While methods like quantization [17], pruning [18], distillation [6] and runtime optimization [16] improve model efficiency (and are complementary), approaches exploiting spatial and temporal sampling are key for enabling efficient real-time perception [21, 27]. Neural warping mechanisms [23, 36] have been employed for image classification and regression, and recently, detection for self-driving [50]. Prior work [50] observes that end-to-end trained saliency networks fail for object detection. They instead turn to heuristics such as dataset-wide priors and object locations from previous frames, which are suboptimal. We show that the formulation of learnable geometric priors is critical for learning end-to-end trained saliency networks for detection.
We validate our approach in a variety of scenarios to showcase the generalizability of geometric priors for detection in self-driving on the Argoverse-HD [27] and BDD100K [57] datasets, and for traffic cameras on the WALT [37] dataset.
• On Argoverse-HD, our learned geometric prior improves performance over naive downsampling by +6.6 AP and +2.7 AP over SOTA using the same detection architecture. Gains from our approach are achieved by detecting small far-away objects, improving by 9.6 AP_S (or 195%) over naive downsampling and 4.2 AP_S (or 63%) over SOTA.
• On WALT, our method detects small objects at image scales where other methods perform poorly. Further, it significantly improves detection rates by 10.7 AP_S over naive downsampling and 3 AP_S over SOTA.
• Our approach improves object tracking (+4.8% MOTA) compared to the baseline. It also improves tracking quality, showing an increase of +7.6% MT% and a reduction of -6.7% ML%.
• Our approach can be deployed on resource-constrained edge devices like the Jetson AGX to detect 42% more rare instances while being 2.2× faster, enabling real-time sensing from buses.
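As an illustration of the geometric intuition behind the prior (not the learned prior itself), the sketch below assumes a pinhole camera at height h_cam with focal length f, a flat ground plane, and a horizon at image row v0. An object of physical height H_obj whose base lies at image row v then has depth f*h_cam/(v - v0) and apparent height H_obj*(v - v0)/h_cam pixels, which can be converted into a per-row magnification (a saliency prior) for resampling. All symbols and values here are hypothetical placeholders.

```python
# Two-plane perspective intuition: apparent object size shrinks toward the
# horizon, so rows near the horizon should receive more sampling budget.
import numpy as np

def expected_pixel_height(v, v0, f, h_cam, H_obj):
    """Apparent height (pixels) of an object standing on the ground at image row v."""
    v = np.asarray(v, dtype=float)
    depth = f * h_cam / np.clip(v - v0, 1e-3, None)   # rows below the horizon only
    return f * H_obj / depth                          # = H_obj * (v - v0) / h_cam

def row_saliency(rows, v0, f, h_cam, H_obj=1.5, target_px=48.0):
    """Per-row magnification needed to bring objects up to a target pixel height."""
    px = expected_pixel_height(rows, v0, f, h_cam, H_obj)
    return np.clip(target_px / np.maximum(px, 1e-3), 1.0, 8.0)

rows = np.arange(401, 800)                    # rows below a horizon at v0 = 400
s = row_saliency(rows, v0=400, f=1000, h_cam=1.5)
print(s[:3], s[-3:])                          # rows near the horizon are magnified most
```

In the paper's setting the plane parameters are learned end-to-end rather than fixed, and the resulting saliency drives a differentiable image warp rather than a simple per-row scale.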
Fu_Tell_Me_What_Happened_Unifying_Text-Guided_Video_Completion_via_Multimodal_CVPR_2023
Abstract
Generating a video given the first several static frames is challenging, as it requires anticipating reasonable future frames with temporal coherence. Besides video prediction, the ability to rewind from the last frame or infill between the head and tail is also crucial, but these have rarely been explored for video completion. Since there could be different outcomes from the hints of just a few frames, a system that can follow natural language to perform video completion may significantly improve controllability. Inspired by this, we introduce a novel task, text-guided video completion (TVC), which requests the model to generate a video from partial frames guided by an instruction. We then propose Multimodal Masked Video Generation (MMVG) to address this TVC task. During training, MMVG discretizes the video frames into visual tokens and masks most of them to perform video completion from any time point. At inference time, a single MMVG model can address all 3 cases of TVC, including video prediction, rewind, and infilling, by applying corresponding masking conditions. We evaluate MMVG in various video scenarios, including egocentric, animation, and gaming. Extensive experimental results indicate that MMVG is effective in generating high-quality visual appearances with text guidance for TVC.
1. Introduction
Generative video modeling [15, 70, 84] has made great progress, first succeeding in unconditional video generation [40, 64]. More recently, video prediction [28, 36, 47] has been trying the controllable setting, which anticipates the future by completing a video from past frames or a static starting image [37, 97]. However, video prediction may produce various outcomes, which makes it difficult to meet human expectations. For the example in Fig. 1(a), the game agent can keep jumping to the right or move back and turn left. The limited guidance from only the first frame is insufficient to tell the intention.
Figure 1. The introduced text-guided video completion (TVC) task. (a) Video prediction may have different outcomes without text guidance. (b) TVC performs video completion from the first frame (prediction), the last frame (rewind), or both (infilling), guided by the textual description.
For humans, language is the most straightforward way of communication. If a system can follow an instruction to accomplish video completion, it will significantly improve its controllability and make a vast application impact. On the other hand, compared with video prediction, video rewind and infilling have rarely been studied [39, 79], but they are also crucial. Breaking the limitation of chronological guidance should make the visual guidance more flexible, leading to general video completion.
We thus introduce a novel task, text-guided video completion (TVC), where the partial frames and a given instruction jointly guide the video generation. As illustrated in Fig. 1(b), we consider three scenarios of video completion: prediction from the first frame, rewind from the last frame, and infilling between the head and tail. The missing (to-be-completed) event should follow the textual instruction. Compared to generating content from scratch [43, 87], TVC requests models to understand the given visual and textual guidance before generation, which better mimics how humans imagine after seeing and listening in our daily lives.
To tackle TVC, we present Multimodal Masked Video Generation (MMVG) to perform video completion. Specifically, we represent the video frames as discrete visual tokens via a temporal-aware VQGAN [54, 76]. One key challenge is to deal with video frames that are not presented in chronological order (e.g., the last frame for rewind). Different from autoregressive models [23, 86] that only condition on the previous frames, MMVG carries out video completion in an encoder-decoder manner. Specifically, we propose a masking strategy that masks different parts of the video and feeds them as input to the multimodal encoder together with the instruction. As shown in Fig. 2, we allow MMVG to consider visual hints from different time points, and the decoder learns to produce the full target video. By varying the masking conditions (including the cases where only the first or last frame is accessible), a single MMVG can address all TVC tasks, including video prediction, rewind, and infilling. Moreover, learning to recover from partial frames also endows MMVG with strong temporal coherence, contributing to better generative video quality.
We consider videos in diverse scenarios for the TVC evaluation.
There are Kitchen [13], Flintstones [26], and MUGEN [29], corresponding to egocentric, animation, and gaming scenes. The model should generate videos such as performing kitchen activities in the first-person view, making characters act out the assigned behavior, or imitating an agent playing a game. All should be guided by the first/last (or both) frame(s) and controlled through the given human instructions. We also compare MMVG with previous methods [23, 57, 94, 96] on UCF-101 [69] and BAIR [18] for the classic video generation/prediction tasks.
Experimental results demonstrate that instruction is necessary to make video completion controllable, that MMVG can address all three TVC tasks, and that our proposed masking strategy enhances temporal modeling, which further benefits general video generation/prediction. In summary, our contributions are three-fold:
• We introduce TVC to generate a video from partial frames and control the temporal dynamics via natural language, where our video completion includes 3 cases: prediction, rewind, and infilling.
• We propose MMVG with an effective masking strategy to address all TVC tasks through a single training.
• Extensive experiments show that our MMVG can handle various types of video completion as well as video generation/prediction. We believe TVC can become a new topic in vision-and-language research.
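A minimal sketch of the masking idea is shown below. It assumes each frame has already been discretized into a fixed number of visual token ids and that a reserved MASK_ID marks hidden positions; the actual MMVG encoder/decoder, instruction conditioning, and token vocabulary are omitted.

```python
# The same model handles prediction, rewind, and infilling simply by varying
# which frames remain visible to the encoder; the decoder targets the full video.
import torch

MASK_ID = 0

def mask_video_tokens(frame_tokens, mode):
    """frame_tokens: (T, L) long tensor of visual token ids, one row per frame."""
    T, L = frame_tokens.shape
    visible = torch.zeros(T, dtype=torch.bool)
    if mode == "prediction":            # only the first frame is given
        visible[0] = True
    elif mode == "rewind":              # only the last frame is given
        visible[-1] = True
    elif mode == "infilling":           # head and tail are given
        visible[0] = visible[-1] = True
    else:                               # random masking during training
        visible = torch.rand(T) > 0.5
        visible[torch.randint(T, (1,))] = True   # keep at least one frame visible
    masked = frame_tokens.clone()
    masked[~visible] = MASK_ID
    return masked, visible

tokens = torch.randint(1, 1024, (8, 64))          # 8 frames, 64 tokens each
inp, vis = mask_video_tokens(tokens, "rewind")    # encoder input; decoder target is `tokens`
```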
Huang_Neural_Kernel_Surface_Reconstruction_CVPR_2023
Abstract
We present a novel method for reconstructing a 3D implicit surface from a large-scale, sparse, and noisy point cloud. Our approach builds upon the recently introduced Neural Kernel Fields (NKF) [58] representation. It enjoys similar generalization capabilities to NKF, while simultaneously addressing its main limitations: (a) We can scale to large scenes through compactly supported kernel functions, which enable the use of memory-efficient sparse linear solvers. (b) We are robust to noise, through a gradient fitting solve. (c) We minimize training requirements, enabling us to learn from any dataset of dense oriented points, and even mix training data consisting of objects and scenes at different scales. Our method is capable of reconstructing millions of points in a few seconds, and handling very large scenes in an out-of-core fashion. We achieve state-of-the-art results on reconstruction benchmarks consisting of single objects (ShapeNet [5], ABC [33]), indoor scenes (ScanNet [11], Matterport3D [4]), and outdoor scenes (CARLA [16], Waymo [49]).
1. Introduction
The goal of 3D reconstruction is to recover geometry from partial measurements of a shape. In this work, we aim to map a sparse set of oriented points sampled from the surface of a shape to a 3D implicit field. This is a challenging inverse problem since point clouds acquired from real-world sensors are often very large (millions or billions of points), vary in sampling density, and are corrupted with sensor noise. Furthermore, since surfaces are continuous but points are discrete, there are many valid solutions which can explain a given input. To address these issues, past approaches aim to recover surfaces that agree with the input points while satisfying some prior everywhere else in space. Classical methods use an explicit prior (e.g. smoothness), while more recent learning-based approaches promote a likely reconstruction under a data-driven prior.
There are, however, key limitations to both types of techniques that inhibit their application in practical situations. Since classical methods are fast, scalable, and able to handle diverse inputs, they have become an industry standard (e.g. [32, 61]). Yet, they suffer from quality issues in the presence of high noise or sparse inputs, often failing to reconstruct even simple geometry such as a ground plane (see the ground in Fig. 1). On the other hand, learning-based approaches were shown to handle large noise [42] and sparse inputs [39, 3], yet they often struggle to generalize to out-of-distribution shapes and sampling densities, as was highlighted in [58]. These generalization issues can be attributed to the fact that current learning-based methods struggle to exploit large and diverse amounts of data for training. One cause of this is that a single forward pass can take minutes for even moderately sized inputs (e.g. [3]), limiting training to collections consisting of small point clouds. Furthermore, many existing methods rely on a preprocessing step to extract supervision in the form of occupancy or signed distance function [43, 38, 40, 3, 58]. In practice, this preprocessing step hinders the ability to easily use diverse datasets for training, since most shape datasets (including synthetic ones such as the popular ShapeNet [5]) consist of non-watertight shapes, open surfaces, or contain ghost geometry from which extracting supervision is hard.
Recently, [58] proposed Neural Kernel Fields (NKF), a new paradigm to address the problem of generalization in 3D reconstruction. NKF learns a data-dependent kernel, and predicts a continuous occupancy field as a linear combination of this kernel supported on the input points. The key insights of NKF are that a kernel explicitly encodes inductive bias, and that solving a kernel linear interpolation problem at test time always produces solutions that adhere to the inputs. Thus, by training on diverse shapes, NKF can learn a good inductive bias for the general 3D reconstruction problem rather than for a specific dataset. While NKF achieves impressive generalization results, it suffers from two major limitations that restrict its practical application. First, since it uses a globally supported kernel, it requires solving a dense linear system and cannot reconstruct inputs with more than ten thousand input points.
Second, it degrades poorly in the presence of noise due to its interpolation of exact positional occupancy constraints.
In this work, we build upon the excellent generalization capability of NKF and tackle its main limitations to achieve a practical learning-based reconstruction method that is scalable, fast, and robust to noise. Like NKF, our work leverages the idea of a learned kernel for generalization, but we (1) develop a novel, gradient-based kernel formulation which is robust to noise, and (2) use an explicit voxel hierarchy structure and compactly supported kernels to make our interpolation problem sparse, multi-scale, and capable of handling large inputs while still producing high fidelity outputs. The result is a learning-based yet out-of-the-box reconstruction method that can be applied to point clouds in the wild. In particular, it enjoys all of the following properties:
• It can generalize to out-of-distribution inputs, producing high-fidelity reconstructions, even in the presence of sparsity and noise.
• It can be trained on the union of diverse datasets while only requiring dense oriented points as supervision, unlocking a new level of training data scale.
• It can reconstruct point clouds consisting of millions of points in seconds, and scale to extremely large inputs in an out-of-core fashion.
We illustrate other methods in the context of these points visually in Fig. 2.
Figure 2: Comparison to related works along the axes of general applicability, data-prior, and scalability, covering [1] SPSR [32], [2] N-Splines [61], [3] OccNet [38], [4] NGLOD [51], [5] NKF [58], [6] ConvONet [43], [7] TSDF-Fusion [10], [8] POCO [3], and [9] DMTet [47].
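To make the role of compactly supported kernels concrete, the sketch below solves a plain (non-learned) kernel interpolation problem, with a fixed Wendland kernel standing in for the learned, data-dependent kernel of the paper. The sparse Gram matrix and iterative solver illustrate why compact support keeps large inputs tractable; the paper's gradient-based fitting and voxel hierarchy are not reproduced here, and all names are illustrative.

```python
# Kernel interpolation with a compactly supported kernel: only point pairs closer
# than the support radius contribute, so the Gram matrix is sparse and the
# regularized linear solve scales to large point sets.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix, identity
from scipy.sparse.linalg import cg

def wendland(r, h):
    q = np.clip(r / h, 0.0, 1.0)
    return (1.0 - q) ** 4 * (4.0 * q + 1.0)          # C2 Wendland kernel, support radius h

def fit_field(points, values, h=0.2, lam=1e-4):
    tree = cKDTree(points)
    pairs = tree.query_pairs(r=h, output_type="ndarray")
    i = np.concatenate([pairs[:, 0], pairs[:, 1], np.arange(len(points))])
    j = np.concatenate([pairs[:, 1], pairs[:, 0], np.arange(len(points))])
    d = np.linalg.norm(points[i] - points[j], axis=1)
    K = coo_matrix((wendland(d, h), (i, j)), shape=(len(points),) * 2).tocsr()
    alpha, _ = cg(K + lam * identity(len(points)), values)   # sparse regularized solve
    return alpha

def eval_field(query, points, alpha, h=0.2):
    tree = cKDTree(points)
    out = np.zeros(len(query))
    for k, q in enumerate(query):
        idx = tree.query_ball_point(q, r=h)
        if idx:
            d = np.linalg.norm(points[idx] - q, axis=1)
            out[k] = wendland(d, h) @ alpha[idx]
    return out

pts = np.random.rand(2000, 3)
alpha = fit_field(pts, values=np.random.rand(2000))
vals = eval_field(np.random.rand(5, 3), pts, alpha)
```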
Guillaro_TruFor_Leveraging_All-Round_Clues_for_Trustworthy_Image_Forgery_Detection_and_CVPR_2023
Abstract
In this paper we present TruFor, a forensic framework that can be applied to a large variety of image manipulation methods, from classic cheapfakes to more recent manipulations based on deep learning. We rely on the extraction of both high-level and low-level traces through a transformer-based fusion architecture that combines the RGB image and a learned noise-sensitive fingerprint. The latter learns to embed the artifacts related to the camera internal and external processing by training only on real data in a self-supervised manner. Forgeries are detected as deviations from the expected regular pattern that characterizes each pristine image. Looking for anomalies makes the approach able to robustly detect a variety of local manipulations, ensuring generalization. In addition to a pixel-level localization map and a whole-image integrity score, our approach outputs a reliability map that highlights areas where localization predictions may be error-prone. This is particularly important in forensic applications in order to reduce false alarms and allow for large scale analysis. Extensive experiments on several datasets show that our method is able to reliably detect and localize both cheapfake and deepfake manipulations, outperforming state-of-the-art works. Code is publicly available at https://grip-unina.github.io/TruFor/ .
1. Introduction
Manipulating images has never been easier, with new powerful editing tools appearing by the day. These new opportunities stimulate the creativity of benign and malicious users alike. Previously, crafting a multimedia disinformation campaign required sophisticated skills, and attackers could do little more than copy, replicate or remove objects in an image, classic forms of image manipulation also known as "cheapfakes". With the explosive growth of deep learning, image manipulation tools have become both easier to use and more powerful, allowing users to generate on-the-fly images of persons that do not exist or to realize credible deepfakes. Diffusion models enable the creation of realistic image edits using natural language prompts, photo-realistically adapting the inserted manipulation to the style and lighting of the context [1, 33].
Figure 1. TruFor detects and localizes image forgeries (in yellow). It is based on the extraction of a learned noise-sensitive fingerprint, Noiseprint++, which is combined with the RGB image to output an anomaly localization map. Noiseprint++ is also used jointly with the image to compute the confidence map, which estimates the less reliable regions of the anomaly heatmap (black areas), e.g. the false positive region in the lower right. The confidence and anomaly maps are then used together to produce a global integrity score.
The risks posed by such tools in the wrong hands are obvious. Indeed, in recent years there has been a growing interest on the part of governments and funding agencies in developing forensic tools capable of countering such attacks. A major focus is on local image edits, particularly partial modifications that change the image semantics (for example, the partially manipulated image in Fig. 1, where the two real faces have been replaced with GAN-generated ones [26]). Multimedia forensics and related scientific fields have seen a rapid increase in activity in response to such challenges, with a large number of methods and tools proposed for image forgery detection and localization [38]. Despite considerable advances in the area, current SOTA detectors are not yet performant enough for in-the-wild deployment, due mainly to deficiencies in several areas subject to intense research: i) limited generalization; ii) limited robustness; iii) insufficient detection performance.
Limited generalization is the inability of detectors to cope with out-of-distribution manipulations. Some detectors are built to exploit well-defined low-level features, e.g., traces of JPEG compression, demosaicking or interpolation [2, 6, 34], while others are typically developed to work well only on specific types of manipulations, like splicing [25, 37]. In addition, in a realistic scenario images also undergo numerous forms of non-malicious degradation (e.g. recompression, resizing, etc.), also called laundering. For example, social networks compress and resize uploaded images, both of which can easily remove forensic traces.
Finally, most SOTA methods perform image forgery localization, leaving detection as an afterthought [11]; detection is typically derived as a global integrity score from the localization heatmap itself [22, 36, 42]. Few methods address the detection task directly [8, 31, 39, 46]. As a result, detection accuracy is poor, with a high false alarm rate. In a realistic setting where manipulated images are rare, such performance could cause more problems than it solves, with false positives drastically outnumbering true positives.
This work addresses such shortcomings, with a focus on robust detection under varied manipulations. Our aim is to first establish whether the image under analysis has been manipulated or not, and subsequently consider forgery localization only for images where a forgery has been detected. To perform in a real-world scenario where images undergo many post-processing steps that may attenuate forensic traces, our design was guided by the need to leverage information at multiple scales (both low- and high-level features) even in complex scenarios. Our framework estimates a confidence map that associates localization results with region-specific uncertainty, allowing many potential false alarms to be rejected. The block diagram of our method is presented in Fig. 1. Overall, in this work we make the following key contributions:
• we propose a new framework, TruFor, which outputs a global integrity score, an anomaly-based localization map and an associated confidence map;
• we propose a new noise-sensitive fingerprint, Noiseprint++, with enhanced robustness to image laundering;
• we combine low-level and high-level evidence to perform anomaly analysis, which together with the confidence analysis provides more reliable decisions;
• we carry out extensive experiments on several benchmarks, considering new and challenging scenarios, and demonstrate that our method achieves state-of-the-art performance in both detection and localization tasks.
2. Related Work
Forensic artifacts. Low-level artifacts are caused by the in-camera acquisition process, such as the sensor, the lens, the color filter array or the JPEG quantization tables. In all cases, these are very weak traces that can be highlighted by suppressing the image content by means of high-pass filters or denoising. The most common filters used for this task are the spatial rich models (SRM) [16], often included as a pre-processing step in some CNN models for forensic analysis. In [35] a set of around 30 fixed high-pass filters is used; instead, in [3] the high-pass filters are learned during training. These fixed and trainable filters have been used in many other subsequent works to perform a noise-sensitive analysis [8, 21, 42, 44, 47]. A different perspective is considered in [12], where the extraction of low-level artifacts is carried out by learning a sort of "camera model fingerprint", the noiseprint, that bears traces of in-camera processing steps. When a manipulation is present, the noiseprint structure is absent and this anomaly is interpreted as a forgery. In this work we leverage noiseprint and further enhance it so as to make it work in more challenging scenarios.
In general, low-level features are combined with high-level ones to carry out more effective detection. Pioneering work in the field is the two-branch approach proposed in [47], where the features of the noise and RGB streams are combined together through bilinear pooling.
Other works also propose late fusion [8], while others [21, 39, 42] perform early fusion or even middle fusion [24]. We belong to this last category, but use an approach that fuses noise and RGB channels using cross-modal feature calibration [28].
Forgery detection vs. localization. The majority of state-of-the-art methods focus on image localization, with architectures often inspired by semantic segmentation, and detection is a byproduct of such analysis [11]. The integrity score is computed by a suitable post-processing of the localization heatmap aimed at extracting a global decision statistic, such as the average or the maximum value of the heatmap [5, 22, 42]. Only a few works explicitly treat the detection problem. In particular, some recent approaches [8, 29, 39, 46] jointly train the model both for localization and detection through suitable losses at image level. In [39, 46] global average pooling is applied to the middle features, while in [8] max average pooling is carried out on the localization heatmap. A different perspective can be found in [31], where it is proposed to analyze the whole image avoiding resizing (so as not to lose precious forensic traces) through a gradient checkpointing technique, which helps the joint optimization of patch-level feature extraction and image-level decision. Different from the current literature, in this paper we explicitly design a forgery detection module that takes as input the anomaly-based map and the confidence map. This additional input is cr
Figure 2. TruFor framework. The Noiseprint++ extractor takes the RGB image to obtain a learned noise-sensitive fingerprint. The encoder uses both the RGB input and Noiseprint++ for jointly computing the features that will be used by the anomaly decoder and the confidence decoder for pixel-level forgery localization and confidence estimation, respectively. The forgery detector exploits the localization map and the confidence map to make the image-level decision. The different colors identify the modules learned in each of the three training phases.
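As a toy illustration of how a confidence map can temper an anomaly map when forming a single integrity score, one could weight the anomaly evidence by per-pixel confidence, so that low-confidence regions contribute little and false alarms are suppressed. The paper's forgery detector is a learned module, so this is only an assumed stand-in.

```python
# Confidence-weighted pooling of an anomaly map into one integrity score.
import numpy as np

def integrity_score(anomaly, confidence, eps=1e-6):
    """anomaly, confidence: HxW maps in [0, 1]; returns a scalar in [0, 1]."""
    w = confidence / (confidence.sum() + eps)
    return float((w * anomaly).sum())

anomaly = np.random.rand(256, 256)       # per-pixel forgery evidence
confidence = np.random.rand(256, 256)    # per-pixel reliability of that evidence
print(integrity_score(anomaly, confidence))
```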
Chen_Train-Once-for-All_Personalization_CVPR_2023
Abstract
We study the problem of how to train a "personalization-friendly" model such that, given only the task descriptions, the model can be adapted to different end-users' needs, e.g., for accurately classifying different subsets of objects. One baseline approach is to train a "generic" model for classifying a wide range of objects, followed by class selection. In our experiments, however, we found it suboptimal, perhaps because the model's weights are kept frozen without being personalized. To address this drawback, we propose Train-once-for-All PERsonalization (TAPER), a framework that is trained just once and can later customize a model for different end-users given their task descriptions. TAPER learns a set of "basis" models and a mixer predictor, such that given the task description, the weights (not the predictions!) of the basis models can be combined on the fly into a single "personalized" model. Via extensive experiments on multiple recognition tasks, we show that TAPER consistently outperforms the baseline methods in achieving a higher personalized accuracy. Moreover, we show that TAPER can synthesize a much smaller model to achieve comparable performance to a huge generic model, making it "deployment-friendly" to resource-limited end devices. Interestingly, even without end-users' task descriptions, TAPER can still be specialized to the deployed context based on its past predictions, making it even more "personalization-friendly".
1. Introduction
Recent years have witnessed multiple breakthroughs in visual recognition [10, 17, 23, 25, 36], thanks to the advances in deep learning and the accessibility of large datasets. Specifically, existing works have shown the possibility of training a gigantic and versatile "generic" model capable of classifying a wide range of over tens of thousands of objects [22, 33], rendering a promising future towards general-purpose AI.
*Work done as a student researcher at Google Research.
Figure 1. Examples of personalization via task description. We propose a useful formulation: train-once-for-all personalization. Our "personalization-friendly" framework TAPER can reply on the fly to each user's request with a personalized model conditioned only on the task description.
However, from an end-user's perspective, we often do not need such versatility at once. Instead, users more often look for models that are specialized to their requests, e.g., for accurately classifying a few but frequently encountered or safety-critical objects in their environments. Taking ImageNet-1K [9] as an example, a ResNet-152 classifier [17] can achieve around 80% accuracy in recognizing each of the 1K objects, which, while exciting to the vision community, may sound terrible to a visually-impaired user who seeks to smoothly interact with a handful of everyday objects. A better solution for end-users is perhaps to construct "personalized" models dedicated to their needs, e.g., train a 20-way classifier for everyday objects to attain an accuracy closer to 100%. Importantly, a personalized model usually requires a smaller capacity/size than a generic one, making it easier to deploy to resource-limited devices.
Personalization is by no means a new concept. A naïve way to achieve it is to retrain a new model upon request, using the corresponding data. Doing so, however, is hardly scalable from a service provider's point of view: the computation for training simply grows linearly with the number of users and their requests. The training latency can also degrade the user experience. Supposing the service provider has sufficient data and is capable of training a generic model, retraining may just sound superfluous: if the objects the end-user cares about are already seen in training the generic model, why bother training on them again for personalization? In this paper, we therefore ask:
Can we train a "personalization-friendly" model such that, after it is deployed, it can be easily specialized and rapidly condensed based on the end-user's task description, without further training?
To begin with, we investigate a fairly simple idea, which is to train a (large) generic model, followed by class selection for personalization: chopping off the classes that are not of the user's interest from the classification head. While extremely straightforward and requiring no further training, this idea can already boost the aforementioned ResNet-152 to 95% accuracy on recognizing 20 classes. Nevertheless, this approach does not condense the model for computation and memory efficiency. One may resolve this problem by training a smaller generic model like ResNet-18, whose size is roughly 1/5 of ResNet-152.
1. Introduction Recent years have witnessed multiple breakthroughs in visual recognition [10, 17, 23, 25, 36], thanks to the advance in deep learning and the accessibility to large datasets. Specifically, existing works have shown the possibility to train a gigantic and versatile “generic” model capable of classifying a wide range of over tens of thousands of objects [22, 33], rendering the promising future towards general-purposed AI. *Work done as a student researcher at Google Research. Figure 1. Examples of personalization via task description. We propose a useful formulation: train-once-for-all personalization. Our “personalization-friendly” framework TAPER can on the fly reply to each user’s request with a personalized model promptly conditioned on the task description only. However, from an end-user’s perspective, we often do not need such a versatility at once . Instead, users more of-ten look for models that are specialized to their requests, e.g., for accurately classifying a few but frequently encoun-tered or safety-critical objects in their environments. Tak-ing ImageNet-1K [9] as an example, a ResNet-152 classi-fier [17] can achieve around 80% accuracy in recognizing each of the 1K objects, which, while exciting to the vision community, may sound terrible to a visually-impaired user who seeks to smoothly interact with a handful of everyday objects. A better solution for end-users is perhaps to con-struct “personalized” models dedicated to their needs, e.g., train a 20-way classifier for everyday objects to attain an ac-curacy closer to 100% . Importantly, a personalized model usually requires a smaller capacity/size than a generic one, making it easier to deploy to resource-limited devices. Personalization is by no means a new concept. A na ¨ıve way to achieve it is to retrain a new model upon request, using the corresponding data. Doing so, however, is hardly scalable from a service provider’s point of view: the com-putation for training simply grows linearly with the number of users and their requests. The training latency can also de-grade the user experience. Suppose the service provider has sufficient data and is capable of training a generic model, re-training may just sound superfluous: if the objects the end-user cares about are already seen in training the generic model, why bother training on them again for personaliza-tion? In this paper, we therefore ask: This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 11818 Can we train a “personalization-friendly” model such that after deployed, it can be easily specialized and rapidly condensed based on the end-user’s task description , without further training ? To begin with, we investigate a fairly simple idea, which is to train a (large) generic model, followed by class se-lection for personalization — chopping off the classes that are not of the user’s interest from the classification head. While extremely straightforward without further training, this idea can already boost the aforementioned ResNet-152 to 95% accuracy on recognizing 20 classes. Nevertheless, this approach does not condense the model for computa-tion and memory efficiency. One may resolve this problem by training a smaller generic model like ResNet-18, whose size is roughly1 5of ResNet-152. 
However, with limited ca-pacity, ResNet-18 after class selection can only attain 92% accuracy on classifying 20 classes. We hypothesize if we can somehow personalize the backbone weights as well, the model will be able to better utilize its capacity to tackle the shrunken scope of end-users’ tasks. To address these deficiencies while keeping the per-sonalization process simple, we propose Train-once-for-AllPER sonalization (TAPER), a novel framework that is trained just once and can later head-to-toe customizes a condensed model on the fly for different end-users and re-quests, given their task descriptions . At the core of TAPER is a set of shareable “basis” mod-els inspired by [5, 12], and a “mixer” predictor. The basis models have the same neural network architecture, each of which is expected to capture a certain specialty and there-fore can be smaller in size than a large generic model. The mixer predictor then takes the user’s task description ( e.g., “Classify bicycle, pedestrian, tree, obstacle for me. ” ) as input, and produces coefficients to linearly combine the weights (not predictions!) of the basis models, condensing them into a “personalized” model on the fly. As TAPER adapts to users by predicting corresponding coefficients, not by adjusting the bases, it requires no retraining and enjoys parameter efficiency ( e.g., for cloud services). Moreover, since the resulting personalized model is just like a basis model in size, it enjoys computation and memory efficiency during inference and is suitable for edge deployment. We introduce a stage-wise training procedure to effec-tively learn the bases and the mixer predictor. We found that na ¨ıve end-to-end training for optimizing personalized accuracy often results in inferior bases that either general-ize poorly or are not specialized. We thus dedicate each stage to one desired property, starting with training each basis to generically classify all classes, followed by special-izing them to different but fixed portions of data. The final stage then jointly refines the bases, together with learning the mixer predictor, to synthesize classifiers for randomly sampled tasks on the fly to optimize personalized accuracy.We validate TAPER on three visual recognition datasets, including ImageNet [9], iNaturalist [39], and Domain-Net [31], each of which captures a different personalization scenario. TAPER consistently outperforms the baselines in achieving a higher personalized accuracy. For instance, on ImageNet, TAPER is able to synthesize a ResNet-18 to achieve 96% accuracy on classifying 20 classes, 4%higher than ResNet-18 with class selection. The accuracy is even higher than ResNet-152 with class selection while using1 5 of the model size. Interestingly, even without end-users’ task descriptions, we show that TAPER can still be “self-specialized” to the deployed environment conditioned on its past predictions. Most importantly, none of these im-provements require further training, making TAPER truly “personalization-friendly.”
Jayasundara_FlexNeRF_Photorealistic_Free-Viewpoint_Rendering_of_Moving_Humans_From_Sparse_Views_CVPR_2023
Abstract
We present FlexNeRF, a method for photorealistic free-viewpoint rendering of humans in motion from monocular videos. Our approach works well with sparse views, which is a challenging scenario when the subject is exhibiting fast/complex motions. We propose a novel approach which jointly optimizes a canonical time and pose configuration, with a pose-dependent motion field and pose-independent temporal deformations complementing each other. Thanks to our novel temporal and cyclic consistency constraints, along with additional losses on intermediate representations such as segmentation, our approach provides high quality outputs as the observed views become sparser. We empirically demonstrate that our method significantly outperforms the state-of-the-art on public benchmark datasets as well as a self-captured fashion dataset. The project page is available at: https://flex-nerf.github.io/ .
1. Introduction
Free-viewpoint rendering of a scene is an important problem often attempted under constrained settings: on subjects demonstrating simple motion, carefully captured with multiple cameras [17, 19, 20]. However, photorealistic free-viewpoint rendering of moving humans captured from a monocular video still remains an unsolved, challenging problem, especially with sparse views.
Neural radiance fields (NeRF) have emerged as a popular tool to learn radiance fields from images/videos for novel-viewpoint rendering. Previous approaches assume multiple viewpoints and often fail on non-rigid human motions. Human-specific NeRFs have recently become popular for learning models from input videos [27, 40]. Current state-of-the-art approaches such as HumanNeRF [40] have shown impressive progress in this domain. However, several challenges remain. Firstly, approaches such as HumanNeRF [40] utilize a pose prior and use a canonical configuration (e.g. T-pose) for optimization, which may be well outside the set of observed poses. The underlying optimization becomes challenging especially as the number of observed views becomes sparse. In contrast, we select a pose from the available set of poses as the canonical pose configuration, similar to previous pose-free approaches such as D-NeRF [29]. This enables the best of both worlds: it becomes easier to learn a motion field mapping due to smaller deformations, while still using a pose prior. In addition, having the canonical view in the training data provides a strong prior for the optimization of the canonical pose itself. Finally, it allows us to optimize the canonical configuration with our novel pose-independent temporal deformation. We demonstrate that this architectural change provides significantly better results compared to existing approaches [24, 40].
*Part of the work was done while the author was an intern at Amazon.
In addition, approaches such as HumanNeRF [40] depend on the estimated pose for the canonical configuration optimization. Errors in the initial pose estimation, for example due to strong motion blur, cause challenges in pose correction. The underlying assumption that the non-rigid motion is pose-dependent often fails in scenarios with complex clothing and accessories, hair styles, and large limb movements. Our proposed pose-independent temporal deformation helps to supplement the missing information in its pose-dependent counterpart.
To this end, we introduce FlexNeRF, a novel approach for jointly learning a pose-dependent motion field and pose-independent temporal deformation within the NeRF framework for modeling human motions. Moreover, we introduce a novel cycle consistency loss in our framework, further capitalizing on the fact that our canonical pose corresponds to one of the captured frames. The consistency regularizes the estimated deformation fields by mapping back and forth between each view and the canonical pose. Moreover, the information content of any frame in a motion sequence has a strong similarity to that of its neighbours. Hence, we propose to utilize this contextual information present in a consecutive set of frames to aid learning by imposing a temporal consistency loss.
We additionally regularize the training by adding a supplementary loss based on the segmentation masks. Our approach allows photorealistic rendering of a moving human even when only sparse views are available, by supplementing the pose-dependent motion field with additional information during learning: (i) pose-independent temporal deformation with ample pixel-wise correspondences beyond the (typically 24) pose point-correspondences, and (ii) consistency constraints/losses. In summary, our paper makes the following contributions:
• We propose a novel approach to learn pose-independent temporal deformation to complement the pose-dependent motion for modeling humans in video, using one of the views as the canonical view.
• We propose a novel cyclic-consistency loss to regularize the learned deformations.
• We propose a temporal-consistency loss to aid learning with contextual information present in neighbouring frames, as well as to maintain consistency across consecutive rendered frames.
• Our approach outperforms state-of-the-art approaches, with significant improvement in the case of sparse views.
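A minimal sketch of the cycle-consistency idea is shown below. The functions `to_canonical` and `from_canonical` are assumed stand-ins for the learned deformation fields that map sampled 3D points between an observed frame and the canonical configuration; the toy MLP exists only to make the example runnable.

```python
# Cycle consistency: points deformed to the canonical configuration and back
# should return to where they started.
import torch
import torch.nn as nn

def cycle_consistency_loss(points, t, to_canonical, from_canonical):
    """Penalize points that do not return to their start after a round trip."""
    canonical = to_canonical(points, t)
    recovered = from_canonical(canonical, t)
    return ((recovered - points) ** 2).mean()

class ToyDeform(nn.Module):
    """Placeholder deformation field: predicts a residual offset from (x, t)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 3))
    def forward(self, x, t):
        t_col = torch.full_like(x[:, :1], float(t))
        return x + self.net(torch.cat([x, t_col], dim=-1))

fwd, bwd = ToyDeform(), ToyDeform()
pts = torch.rand(1024, 3)
loss = cycle_consistency_loss(pts, t=0.3, to_canonical=fwd, from_canonical=bwd)
loss.backward()
```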
Abousamra_Topology-Guided_Multi-Class_Cell_Context_Generation_for_Digital_Pathology_CVPR_2023
Abstract
In digital pathology, the spatial context of cells is important for cell classification, cancer diagnosis and prognosis. To model such complex cell context, however, is challenging. Cells form different mixtures, lineages, clusters and holes. To model such structural patterns in a learnable fashion, we introduce several mathematical tools from spatial statistics and topological data analysis. We incorporate such structural descriptors into a deep generative model as both conditional inputs and a differentiable loss. This way, we are able to generate high quality multi-class cell layouts for the first time. We show that the topology-rich cell layouts can be used for data augmentation and improve the performance of downstream tasks such as cell classification.
1. Introduction Deep learning has advanced our learning ability in digital pathology. Deep-learning-based methods have achieved impressive performance in various tasks including but not limited to: cell detection and classification [2, 23, 24, 52], nuclei instance segmentation [8, 18, 19, 21, 26, 32–34, 42, 51], survival prediction and patient outcome [1, 28, 30, 49], interpretation of multiplex immunohistochemistry and immunofluorescence imagery [3, 14–16], and many others. Despite the rapid progress in recent years, pathology image analysis still suffers from limited observations. The available annotated images are scarce relative to the highly heterogeneous and complex tumor microenvironment driven by numerous biological factors. The limitation in training data constrains a learning algorithm's prediction power. To this end, one solution is to train generative models that can generate realistic pathology images to augment existing data. Generative models have been proposed to help learning methods in various tasks such as nuclei segmentation [7, 21], survival prediction [44] and cancer grade prediction [47]. Generating pathology images usually involves two steps: (1) generating the spatial layout of cells and (2) filling in stains and textures inside and outside the cell nuclei masks.
Figure 1. Overview of our multi-class cell context generator.
Most existing methods only focus on the second step. They either generate random cell positions [21] or directly copy nuclei masks from existing images [17]. These methods miss the opportunity to learn the rich cell spatial context carrying critical information about cancer biology. Spatial context includes how different types of cells (tumor, lymphocyte, stromal, etc.) are distributed around each other, as well as how they form different structural patterns such as clusters, holes and lineages. Plenty of evidence has demonstrated the importance of spatial context in cancer diagnosis and prognosis [31, 53]. One good example is the clinical significance of tumor infiltrating lymphocytes (TILs), i.e., lymphocytes residing within the border of invasive tumors [37–40]. The spatial distribution of stromal cells in the vicinity of the tumor has been shown to be directly related to cancer outcomes [35, 53]. Tumor budding, i.e., the presence of isolated or small clusters of tumor cells at the invasive tumor front, is a prognostic biomarker associated with an increased risk of lymph node metastasis in colorectal carcinoma and other solid malignancies [29]. In prostate cancer tissue samples, plenty of loopy cellular structures are formed, corresponding to glands. Their integrity, known as the Gleason score, is a good indicator of cancer progression [48].
Figure 2. Sample results from our cell layout generator. Our generated samples have similar spatial characteristics as the corresponding reference layouts.
Given the biological significance of cell spatial context, we hypothesize that being able to model and generate cell configurations will benefit various downstream tasks. To model the complex cell spatial context, the main challenge is the limited information one can rely on – the coordinates and types of cells. This makes it hard for even powerful deep learning methods [27] to learn the underlying distribution.
To better model the spatial context, we argue that principled mathematical machinery has to be incorporated into the deep learning framework. Formally, we introduce the classic K-function from spatial statistics [5], as well as the theory of persistent homology [13], to model the spatial distribution of multi-class cells and their structural patterns. These mathematical constructs have been shown to correlate with clinical outcomes [4]. However, they have not been used in the generation of pathology images. We incorporate these spatial and topological descriptors into a deep generative model. Our generative model takes an input pathology image and generates a new cell layout with similar spatial and topological characteristics. To enforce the expected spatial characteristics, we propose a novel cell configuration loss based on the persistent homology and spatial statistics of the input cell spatial configuration. The loss compares the generated and the reference cell configurations and matches their topology in view of a topological measure called the persistence diagram. The loss enforces holes in the generated cell configuration to be one-to-one matched to holes in the reference cell configuration, i.e., to have similar shapes and density. A direct topological matching via persistence diagrams is agnostic of the cell type composition. This is undesirable; we do not want to match a tumor cell hole to a stromal cell hole. To this end, we also incorporate a spatial statistics measure, i.e., cross K-functions, into the loss. This way, holes composed of different types of cells are matched properly. Using the generated cell spatial configuration, we generate the nuclei mask, staining and texture. See Fig. 1 for an illustration of the generation pipeline. Also see Fig. 2 for examples of the generated cell layouts. The generated cell layouts have very similar spatial and structural characteristics as the reference/input image. This is not guaranteed with previous methods that use randomly generated masks. In the experiment section, we provide comprehensive comparisons to verify the benefit of our method. We also show that the augmented images can be used to train downstream tasks such as cell classification. To summarize, our contributions are as follows:
• We propose the first generative model to learn cell spatial context from pathology images.
• We introduce multi-class spatial context descriptors based on spatial statistics and topology. These descriptors are used as conditional input for the generator.
• We propose a novel cell configuration loss function to enforce the desired behavior of spatial distribution and topology. The loss matches holes of the generated cell layout and holes of the reference cell layout in shape, density, and cell type composition.
• We show that the generated layouts can be used to generate synthetic H&E images for data augmentation. We show the efficacy of the augmentation data in downstream tasks such as cell classification.
Figure 3. Illustration of the filtration process of the distance transform map to obtain persistent homology. Red dots are the tumor cells in the original image. The blue dots in the last panel (f) are the centers of the holes, i.e., the saddle points obtained once a hole dies or disappears.
We stress that the benefit of modeling cell spatial context goes beyond data augmentation.
Modeling the spatial context will provide the foundation for better understanding and quantifying the heterogeneous tumor microenvironment, and for correlating it with genomics and clinical outcomes. This work is one step in that direction.
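As an illustration of the cross K-function descriptor mentioned above, the following is a minimal NumPy/SciPy sketch of its standard unnormalized estimator, not the paper's implementation; function and variable names are hypothetical, and edge correction is omitted.

import numpy as np
from scipy.spatial import cKDTree

def cross_k_function(points_i, points_j, radii, window_area):
    # points_i, points_j: (N, 2) centroids of cells of type i and type j.
    # Returns K_ij(r) for each radius r: the scaled average number of
    # type-j cells within distance r of a type-i cell.
    tree_j = cKDTree(points_j)
    n_i, n_j = len(points_i), len(points_j)
    k_values = np.empty(len(radii))
    for idx, r in enumerate(radii):
        counts = tree_j.query_ball_point(points_i, r, return_length=True)
        k_values[idx] = window_area * counts.sum() / (n_i * n_j)
    return k_values

# Example: the K-function of lymphocytes around tumor cells for radii of
# 10 to 100 pixels in a 1000 x 1000 patch (hypothetical inputs):
# k = cross_k_function(tumor_xy, lymph_xy, np.arange(10, 101, 10), 1000.0 * 1000.0)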
Fang_DepGraph_Towards_Any_Structural_Pruning_CVPR_2023
Abstract Structural pruning enables model acceleration by removing structurally-grouped parameters from neural networks. However, the parameter-grouping patterns vary widely across different models, making architecture-specific pruners, which rely on manually-designed grouping schemes, non-generalizable to new architectures. In this work, we study a highly-challenging yet barely-explored task, any structural pruning, to tackle general structural pruning of arbitrary architectures such as CNNs, RNNs, GNNs and Transformers. The most prominent obstacle towards this goal lies in structural coupling, which not only forces different layers to be pruned simultaneously, but also expects all removed parameters to be consistently unimportant, thereby avoiding structural issues and significant performance degradation after pruning. To address this problem, we propose a general and fully automatic method, Dependency Graph (DepGraph), to explicitly model the dependency between layers and comprehensively group coupled parameters for pruning. We extensively evaluate our method on several architectures and tasks, including ResNe(X)t, DenseNet, MobileNet and Vision Transformer for images, GAT for graphs, DGCNN for 3D point clouds, and LSTM for language, and demonstrate that, even with a simple norm-based criterion, the proposed method consistently yields gratifying performance.
1. Introduction The recent emergence of edge computing applications calls for the necessity of deep neural network compression [16, 22, 25, 33, 34, 61, 65–67, 69, 75]. Among the many network compression paradigms, pruning has proven itself to be highly effective and practical [7, 11, 30, 31, 44, 58, 59, 74]. The goal of network pruning is to remove redundant parameters from a given network to lighten its size and potentially speed up inference. Mainstream pruning approaches can be roughly categorized into two schemes, structural pruning [4, 29, 71] and unstructural pruning [8, 13, 44]. The core difference between the two lies in that structural pruning changes the structure of neural networks by physically removing grouped parameters, while unstructural pruning zeroes out partial weights without modifying the network structure. Compared to unstructural ones, structural pruning does not rely on specific AI accelerators or software to reduce memory consumption and computational costs, thereby finding a wider domain of applications in practice [38, 68].
(*Corresponding author)
Figure 1. Parameters from different layers are inherently dependent on each other across network architectures, which forces several layers to be pruned simultaneously. For instance, to prune Conv2 in (a), all other layers {Conv1, BN1, BN2} within the block must be pruned as well. In this work, we introduce a generic scheme, termed Dependency Graph, to explicitly account for such dependencies and execute the pruning of arbitrary architectures in a fully automatic manner.
Nevertheless, the nature of structural pruning per se makes it a challenging task, especially for modern deep neural networks with coupled and complex internal structures. The rationale lies in that deep neural networks are built upon a large number of basic modules like convolution, normalization, or activation, yet these modules, either parameterized or not, are intrinsically coupled through intricate connections [17, 23]. As a result, even when we seek to remove only one channel from a CNN, as illustrated in Figure 1(a), we have to take care of its inter-dependencies to all layers simultaneously, otherwise we will eventually get a broken network. To be exact, the residual connection requires the output of the two convolutional layers to share the same number of channels and thus forces them to be pruned together [20, 40, 71]. The same goes for structural pruning on other architectures such as Transformers, RNNs and GNNs, as illustrated in Figs. 1(b–d). Unfortunately, dependency does not only emerge in residual structures; it can be infinitely complex in modern models [23, 46]. Existing structural approaches have largely relied on case-by-case analyses to handle dependencies in networks [29, 40]. Despite the promising results achieved, such a network-specific pruning approach is effort-consuming.
Moreover, these methods are not directly generalizable, meaning that a manually-designed grouping scheme is not transferable to other network families, or even to other architectures in the same family, which in turn greatly limits their industrial applications. In this paper, we strive for a generic scheme towards any structural pruning, where structural pruning over arbitrary network architectures is executed in an automatic fashion. At the heart of our approach is estimating the Dependency Graph (DepGraph), which explicitly models the interdependency between paired layers in neural networks. Our motivation for introducing DepGraph for structural pruning stems from the observation that structural pruning at one layer effectively "triggers" pruning at adjacent layers, which further leads to a chain effect like {BN2 ← Conv2 → BN1 → Conv1} as shown in Figure 1(a). As such, to trace the dependencies across different layers, we may decompose and model the dependency chain as a recursive process, which naturally boils down to the problem of finding the maximum connected components in the graph, and can be solved in O(N) complexity via graph traversal. It is also worth noting that in structural pruning, grouped layers are pruned simultaneously, which expects all removed parameters in the same group to be consistently unimportant. This brings certain difficulties to existing importance criteria designed for a single layer [20, 27, 29, 42]. To be exact, the parameter importance in a single layer no longer reveals the correct importance due to the entanglement with other parameterized layers. To address this problem, we fully leverage the comprehensive dependency modeling power of DepGraph to design a "group-level" importance criterion, which learns consistent sparsity within groups, so that zeroized groups can be safely removed without too much performance degradation. To validate the effectiveness of DepGraph, we apply the proposed method to several popular architectures including CNNs [23, 40], Transformers [10], RNNs [12, 53], and GNNs [55], where competitive performance is achieved compared to state-of-the-art methods [7, 32, 58, 71]. For CNN pruning, our method obtains a 2.57× accelerated ResNet-56 model with 93.64% accuracy on CIFAR, which is even superior to the unpruned model with 93.53% accuracy. On ImageNet-1k, our algorithm achieves more than 2× speed-up on ResNet-50, with only 0.32% performance lost. More importantly, our method can be readily transferred to various popular networks, including ResNe(X)t [40, 63], DenseNet [23], VGG [51], MobileNet [48], GoogleNet [54] and Vision Transformer [10], and demonstrates gratifying results. Besides, we also conduct further experiments on non-image neural networks, including LSTM [12] for text classification, DGCNN [60] for 3D point clouds, and GAT [55] for graph data, where our method achieves from 8× to 16× acceleration without a significant performance drop. In sum, our contribution is a generic pruning scheme towards any structural pruning, termed Dependency Graph (DepGraph), which allows for automatic parameter grouping and effectively improves the generalizability of structural pruning over various network architectures, including CNNs, RNNs, GNNs and Vision Transformers.
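The grouping step described above can be pictured with a small sketch (a simplified illustration, not the released DepGraph library): given coupling edges between layers, e.g. from residual connections or shared normalization, the layers that must be pruned together form a connected component of the dependency graph, which a linear-time traversal recovers. The layer names and edge list below are hypothetical.

from collections import defaultdict, deque

def pruning_groups(layers, coupling_edges):
    # Build an undirected dependency graph and return its connected
    # components; each component is a group of layers that must be pruned
    # simultaneously to keep the network structurally valid.
    adjacency = defaultdict(set)
    for a, b in coupling_edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    visited, groups = set(), []
    for layer in layers:
        if layer in visited:
            continue
        group, queue = [], deque([layer])
        visited.add(layer)
        while queue:
            current = queue.popleft()
            group.append(current)
            for neighbour in adjacency[current]:
                if neighbour not in visited:
                    visited.add(neighbour)
                    queue.append(neighbour)
        groups.append(group)
    return groups

# pruning_groups(["Conv1", "BN1", "Conv2", "BN2", "FC"],
#                [("Conv1", "BN1"), ("Conv2", "BN2"), ("Conv1", "Conv2")])
# groups {Conv1, BN1, Conv2, BN2} together and leaves {FC} as its own group.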
Guo_Vid2Avatar_3D_Avatar_Reconstruction_From_Videos_in_the_Wild_via_CVPR_2023
Abstract We present Vid2Avatar, a method to learn human avatars from monocular in-the-wild videos. Reconstructing humans that move naturally from monocular in-the-wild videos is difficult. Solving it requires accurately separating humans from arbitrary backgrounds. Moreover, it requires reconstructing detailed 3D surfaces from short video sequences, making it even more challenging. Despite these challenges, our method does not require any ground-truth supervision or priors extracted from large datasets of clothed human scans, nor do we rely on any external segmentation modules. Instead, it solves the tasks of scene decomposition and surface reconstruction directly in 3D by modeling both the human and the background in the scene jointly, parameterized via two separate neural fields. Specifically, we define a temporally consistent human representation in canonical space and formulate a global optimization over the background model, the canonical human shape and texture, and per-frame human pose parameters. A coarse-to-fine sampling strategy for volume rendering and novel objectives are introduced for a clean separation of the dynamic human and the static background, yielding detailed and robust 3D human reconstructions. The evaluation of our method shows improvements over prior art on publicly available datasets. (†Corresponding author) 1. Introduction Being able to reconstruct detailed avatars from readily available "in-the-wild" videos, for example recorded with a phone, would enable many applications in AR/VR, in human-computer interaction, robotics, and in the movie and sports industries. Traditionally, high-fidelity 3D reconstruction of dynamic humans has required calibrated multi-view systems [9, 10, 19, 28, 32, 48, 52], which are expensive and require highly-specialized expertise to operate. In contrast, emerging applications such as the Metaverse require more light-weight and practical solutions in order to make the digitization of humans a widely available technology. Reconstructing humans that move naturally from monocular in-the-wild videos is clearly a difficult problem. Solving it requires accurately separating humans from arbitrary backgrounds, without any prior knowledge about the scene or the subject. Moreover, it requires reconstructing detailed 3D surfaces from short video sequences, made even more challenging by depth ambiguities, the complex dynamics of human motion and high-frequency surface details. Traditional template-based approaches [15, 16, 61] cannot generalize to in-the-wild settings due to the requirement for a pre-scanned template and manual rigging. Methods that are based on explicit mesh representations are limited to a fixed topology and resolution [3, 8, 14, 36]. Fully-supervised methods that directly regress 3D surfaces from images [17, 18, 21, 43, 44, 58, 72] struggle with difficult out-of-distribution poses and shapes, and do not always predict temporally consistent reconstructions. Fitting neural implicit surfaces to videos has recently been demonstrated [23, 25, 42, 49, 50, 55, 71]. However, these methods depend on pre-segmented inputs and are therefore not robust to uncontrolled visual complexity, and their reconstruction quality is upper-bounded by the segmentation method.
In this paper, we introduce Vid2Avatar, a method to learn human avatars from monocular in-the-wild videos without requiring any ground-truth supervision or priors extracted from large datasets of clothed human scans, and without relying on any external segmentation modules. We solve the tasks of scene separation and surface reconstruction directly in 3D. To achieve this, we model both the foreground (i.e., the human) and the background in the scene implicitly, parameterized via two separate neural fields. A key challenge is to associate 3D points with either of these fields without reverting to 2D segmentation. To tackle this challenge, our method builds upon the following core concepts: i) We define a single temporally consistent representation of the human shape and texture in canonical space and leverage the inverse mapping of a parametric body model to learn from deformed observations. ii) A global optimization formulation jointly optimizes the parameters of the background model, the canonical human shape and its appearance, and the pose estimates of the human subject over the entire sequence. iii) A coarse-to-fine sampling strategy for volume rendering naturally leads to a separation of the dynamic foreground and the static background. iv) Novel objectives further improve the scene decomposition and lead to sharp boundaries between the human and the background, even when both are in contact (e.g., around the feet), yielding better geometry and appearance reconstructions. More specifically, we leverage an inverse-depth parameterization in spherical coordinates [70] to coarsely separate the static background from the dynamic foreground. Within the foreground sphere, we leverage a surface-guided volume rendering approach to attain densities via the conversion method proposed in [63]. Importantly, we warp all sampled points into canonical space and update the human shape field dynamically. To attain sharp boundaries between the dynamic foreground and the scene, we introduce two optimization objectives that encourage a quasi-discrete binary distribution of ray opacities and penalize non-zero opacity for rays that do not intersect with the human. The final rendering of the scene is then attained by differentiable composited volume rendering. We show that this optimization formulation leads to clean scene decomposition and high-quality 3D reconstructions of the human subject. In detailed ablations, we shed light on the key components of our method. Furthermore, we compare to existing methods on 2D segmentation, novel view synthesis, and reconstruction tasks, showing that our method performs best across several datasets and settings. To allow for quantitative comparison across methods, we contribute a novel semi-synthetic test set that contains accurate 3D geometry of human subjects. Finally, we demonstrate the ability to reconstruct different humans in detail from online videos and hand-held mobile phone video clips. In summary, our contributions are:
• a method to reconstruct detailed 3D avatars from in-the-wild monocular videos via self-supervised scene decomposition;
• robust and detailed 3D reconstructions of the human even under challenging poses and environments, without requiring external segmentation methods; and
• a novel semi-synthetic test dataset that for the first time allows comparing monocular human reconstruction methods on realistic scenes. The dataset contains rich annotations of the 3D surface.
2. Related Work Reconstructing Humans from Monocular Video. Traditional works on monocular human performance capture require personalized rigged templates as a prior and track the pre-defined human model based on 2D observations [15, 16, 61]. These works require pre-scanning of the performer and post-processing for rigging, preventing such methods from being deployed in real-life applications. Some methods attempt to remove the need for pre-scanning and manual rigging [3, 8, 14, 36]. However, the explicit mesh representation is limited to a fixed resolution and cannot represent details like the face. Regression-based methods that directly regress 3D surfaces from images have demonstrated compelling results [4, 12, 17, 18, 21, 43, 44, 58, 72]. However, they require high-quality 3D data for supervision and cannot maintain the space-time coherence of the reconstruction over the whole sequence. Recent works fit implicit neural fields to videos via neural rendering to obtain articulated human models [23–25, 42, 49, 50, 55, 71]. HumanNeRF [55] extends articulated NeRF to improve novel view synthesis. NeuMan [25] further adds a scene NeRF model. Both methods model the human geometry with a density field, yielding only a noisy, and often low-fidelity, human reconstruction. SelfRecon [23] deploys neural surface rendering [64] to achieve consistent reconstruction over the sequence. However, all aforementioned methods rely on pre-segmented inputs and are therefore not robust to uncontrolled visual complexity, and their reconstruction quality is upper-bounded by the external segmentation method. In contrast, our method solves the tasks of scene decomposition and surface reconstruction jointly in 3D without using external segmentation modules. Reconstructing Humans from Multi-view/Depth. High-fidelity 3D reconstruction of dynamic humans has required calibrated dense multi-view systems [9, 10, 19, 28, 32, 48, 52, 66], which are expensive and laborious to operate and require highly-specialized expertise. Recent works [20, 22, 29, 39, 41, 54, 59, 60] attempt to reconstruct humans from sparser settings by deploying neural rendering. Depth-based approaches [6, 37, 38] reconstruct the human shape by fusing depth measurements across time. Follow-up work [7, 11, 30, 47, 67, 68] builds upon this concept by incorporating an articulated motion prior, a parametric body shape prior, and a more expressive body model. While the aforementioned methods achieve compelling results, they still require a specialized capturing setup and are hence not applicable to in-the-wild settings. In contrast, our method recovers the dynamic human shape in the wild from a monocular RGB video as the sole input. Moving Object Segmentation. Traditional research in moving object segmentation has been extensively conducted at the image level (i.e., 2D). One line of research relies on motion cues to segment objects with different optical flow patterns [5, 40, 57, 62, 65], while another line of work, termed video matting [26, 31, 45], is trained on videos with human-annotated masks to directly regress the alpha-channel values during inference. These approaches are not without limitations, as they focus on image-level segmentation and incorporate no 3D knowledge. Thus, they cannot handle complicated backgrounds without enough color contrast between the human and the background. Recent works learn to decompose dynamic objects and the static background simultaneously in 3D by optimizing multiple NeRFs [46, 51, 56, 69].
Such methods perform well for non-complicated dynamic objects, but they are not directly applicable to articulated humans with intricate motions. 3. Method We introduce Vid2Avatar, a method for detailed geometry and appearance reconstruction of implicit neural avatars from monocular videos in the wild. Our method is schematically illustrated in Fig. 2. Reconstructing humans from in-the-wild videos is clearly challenging. Solving it requires accurately segmenting humans from arbitrary backgrounds without any prior knowledge about the appearance of the scene or the subject, and requires reconstructing detailed 3D surface and appearance from short video sequences. In contrast to prior works that utilize off-the-shelf 2D segmentation tools or manually labeled masks, we solve the tasks of scene decomposition and surface reconstruction directly in 3D. To achieve this, we model both the human and the background in the scene implicitly, parameterized via two separate neural fields which are learned jointly from images to composite the whole scene. To alleviate the ambiguity of in-contact body and scene parts and to better delineate the surfaces, we contribute novel objectives that leverage the dynamically updated human shape in canonical space to regularize the ray opacity. We parameterize the 3D geometry and texture of clothed humans as a pose-conditioned implicit signed-distance field (SDF) and texture field in canonical space (Sec. 3.1). We then model the background using a separate neural radiance field (NeRF). The human shape and appearance fields alongside the background field are learned from images jointly via differentiable composited neural volume rendering (Sec. 3.2). Finally, we leverage the dynamically updated canonical human shape to regularize the ray opacities (Sec. 3.3). The training is formulated as a global optimization that jointly optimizes the dynamic foreground and static background fields and the per-frame pose parameters (Sec. 3.4). 3.1. Implicit Neural Avatar Representation Canonical Shape Representation. We model the human shape in canonical space to form a single, temporally consistent representation and use a neural network $f^H_{sdf}$ to predict the signed distance value for any 3D point $\mathbf{x}_c$ in this space. To model pose-dependent local non-rigid deformations such as dynamically changing wrinkles on clothes, we concatenate the human pose $\theta$ as an additional input and model $f^H_{sdf}$ as: $f^H_{sdf}: \mathbb{R}^{3+n_\theta} \rightarrow \mathbb{R}^{1+256}$. (1) The pose parameters $\theta$ are defined analogously to SMPL [34], with dimensionality $n_\theta$. Furthermore, $f^H_{sdf}$ outputs global geometry features $\mathbf{z}$ of dimension 256. With slight abuse of notation, we also use $f^H_{sdf}$ to refer to the SDF value only. The canonical shape $\mathcal{S}$ is given by the zero-level set of $f^H_{sdf}$: $\mathcal{S} = \{\mathbf{x}_c \mid f^H_{sdf}(\mathbf{x}_c, \theta) = 0\}$. (2) Skeletal Deformation. Given the bone transformation matrices $B_i$ for joints $i \in \{1, \ldots, n_b\}$, which are derived from the body pose $\theta$, a canonical point $\mathbf{x}_c$ is mapped to the deformed point $\mathbf{x}_d$ via linear blend skinning: $\mathbf{x}_d = \sum_{i=1}^{n_b} w^i_c B_i \mathbf{x}_c$. (3) The canonical correspondence $\mathbf{x}_c$ for a point $\mathbf{x}_d$ in deformed space is defined by the inverse of Eq. 3: $\mathbf{x}_c = \big(\sum_{i=1}^{n_b} w^i_d B_i\big)^{-1} \mathbf{x}_d$. (4) Here, $n_b$ denotes the number of bones in the transformation, and $w_{(\cdot)} = \{w^1_{(\cdot)}, \ldots, w^{n_b}_{(\cdot)}\}$ represents the skinning weights for $\mathbf{x}_{(\cdot)}$. Deformed points $\mathbf{x}_d$ are associated with the average of the nearest SMPL vertices' skinning weights, weighted by the point-to-point distances in deformed space; canonical points $\mathbf{x}_c$ are treated analogously.
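The skeletal deformation of Eqs. (3) and (4) can be written compactly with homogeneous coordinates; the following is a minimal PyTorch sketch of linear blend skinning under our reading of the equations, not the authors' code, with the skinning weights assumed to be precomputed as described above.

import torch

def lbs_forward(x_c, weights, bone_transforms):
    # x_c: (N, 3) canonical points, weights: (N, n_b) skinning weights,
    # bone_transforms: (n_b, 4, 4) homogeneous bone transformations B_i.
    T = torch.einsum("nb,bij->nij", weights, bone_transforms)      # per-point blended transform
    x_h = torch.cat([x_c, torch.ones_like(x_c[:, :1])], dim=-1)    # homogeneous coordinates
    return torch.einsum("nij,nj->ni", T, x_h)[:, :3]               # Eq. (3)

def lbs_inverse(x_d, weights_d, bone_transforms):
    # Inverse mapping of deformed points back to canonical space, Eq. (4).
    T = torch.einsum("nb,bij->nij", weights_d, bone_transforms)
    x_h = torch.cat([x_d, torch.ones_like(x_d[:, :1])], dim=-1)
    return torch.einsum("nij,nj->ni", torch.linalg.inv(T), x_h)[:, :3]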
Figure 2. Method Overview. Given a ray $\mathbf{r}$ with camera center $\mathbf{o}$ and ray direction $\mathbf{v}$, we sample points densely ($\mathbf{x}_d$) and coarsely ($\mathbf{x}_b$) along the ray for the spherical inner volume and the outer volume, respectively. Within the foreground sphere, we warp all sampled points into canonical space via inverse warping and evaluate the SDF of the canonical correspondences $\mathbf{x}_c$ via the canonical shape network $f^H_{sdf}$. We calculate the spatial gradient of the sampled points in deformed space and concatenate it with the canonical points $\mathbf{x}_c$, the pose parameters $\theta$, and the extracted geometry feature vectors $\mathbf{z}$ to form the input to the canonical texture network $f^H_{rgb}$, which predicts color values for $\mathbf{x}_c$. We apply surface-based volume rendering for the dynamic foreground and standard volume rendering for the background, and then composite the foreground and background components to attain the final pixel color. We minimize the loss $\mathcal{L}$ that compares the color predictions with the image observations, along with novel scene decomposition objectives.
Canonical Texture Representation. The appearance is also modeled in canonical space via a neural network $f^H_{rgb}$ that predicts color values for 3D points $\mathbf{x}_c$ in this space: $f^H_{rgb}: \mathbb{R}^{3+3+n_\theta+256} \rightarrow \mathbb{R}^{3}$. (5) We condition the canonical texture network on the normal $\mathbf{n}_d$ in deformed space, facilitating better disentanglement of geometry and appearance. The normals are given by the spatial gradient of the signed distance field w.r.t. the 3D location in deformed space. Following [71], the spatial gradient of the deformed shape is given by: $\mathbf{n}_d = \frac{\partial f^H_{sdf}(\mathbf{x}_c,\theta)}{\partial \mathbf{x}_d} = \frac{\partial f^H_{sdf}(\mathbf{x}_c,\theta)}{\partial \mathbf{x}_c} \frac{\partial \mathbf{x}_c}{\partial \mathbf{x}_d} = \frac{\partial f^H_{sdf}(\mathbf{x}_c,\theta)}{\partial \mathbf{x}_c} \big(\sum_{i=1}^{n_b} w^i_d B_i\big)^{-1}$. (6) In practice, we concatenate the canonical points $\mathbf{x}_c$, their normals, the pose parameters, and the extracted 256-dimensional geometry feature vectors $\mathbf{z}$ from the shape network to form the input to the canonical texture network. For the remainder of this paper, we denote this neural SDF as $f^H_{sdf}(\mathbf{x}_c)$ and the RGB field as $f^H_{rgb}(\mathbf{x}_c)$ for brevity. 3.2. Composited Volume Rendering We extend the inverted sphere parametrization of NeRF++ [70] to represent the scene: an outer volume (i.e., the background) covers the complement of a spherical inner volume (i.e., the space assumed to be occupied by the human), and both are modeled by separate networks. The final pixel value is then attained via compositing. Background. Given the origin $O$, each 3D point $\mathbf{x}_b = (x_b, y_b, z_b)$ in the outer volume is reparametrized by the quadruple $\mathbf{x}'_b = (x'_b, y'_b, z'_b, \tfrac{1}{r})$, where $\|(x'_b, y'_b, z'_b)\| = 1$ and $(x_b, y_b, z_b) = r \cdot (x'_b, y'_b, z'_b)$. Here $r$ denotes the magnitude of the vector from the origin $O$ to $\mathbf{x}_b$. This parameterization of background points leads to improved numerical stability and assigns lower resolution to farther-away points. For more details, we refer to [70]. Our method is trained on videos and the background is generally not entirely static. To compensate for dynamic changes in, e.g., lighting, we condition the background network $f^B$ on a per-frame learnable latent code $\mathbf{t}_i$: $f^B: \mathbb{R}^{4+3+n_t} \rightarrow \mathbb{R}^{1+3}$, (7) where $f^B$ takes the 4D representation of the sampled background point $\mathbf{x}'_b$, the viewing direction $\mathbf{v}$, and the time encoding $\mathbf{t}_i$ with dimension $n_t$ as input, and outputs the density and the view-dependent radiance.
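For reference, the inverted-sphere reparameterization of background points can be written in a few lines; this is a sketch of the standard NeRF++-style mapping assumed above, not code from the paper.

import torch

def invert_background_point(x_b):
    # x_b: (N, 3) points in the outer volume (assumed r = ||x_b|| > 1).
    # Returns the bounded 4D representation (x_b / r, 1 / r): a unit direction
    # plus an inverse distance, so farther-away points get lower resolution.
    r = x_b.norm(dim=-1, keepdim=True)
    return torch.cat([x_b / r, 1.0 / r], dim=-1)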
Dynamic Foreground. We assume that the inner volume is occupied by a dynamic foreground – the human we seek to reconstruct. This requires different treatment compared to [70], where a static foreground is modeled via a vanilla NeRF. In contrast, we combine the implicit neural avatar representation (Sec. 3.1) with surface-based volume rendering [63]. Thus, we convert the SDF to a density $\sigma$ by applying the scaled Laplace distribution's Cumulative Distribution Function (CDF) to the negated SDF values $\xi(\mathbf{x}_c) = -f^H_{sdf}(\mathbf{x}_c)$: $\sigma(\mathbf{x}_c) = \alpha \big( \tfrac{1}{2} + \tfrac{1}{2} \,\mathrm{sign}(\xi(\mathbf{x}_c)) \big(1 - \exp\!\big(-\tfrac{|\xi(\mathbf{x}_c)|}{\beta}\big)\big) \big)$, (8) where $\alpha, \beta > 0$ are learnable parameters. Similar to [63], we sample $N$ points on a ray $\mathbf{r} = (\mathbf{o}, \mathbf{v})$ with camera center $\mathbf{o}$ and ray direction $\mathbf{v}$ in two stages – uniform and inverse CDF sampling. We then map the sampled points to canonical space via skeletal deformation and use the standard numerical approximation to calculate the integral of the volume rendering equation: $C^H(\mathbf{r}) = \sum_{i=1}^{N} \tau_i f^H_{rgb}(\mathbf{x}^i_c)$, (9) with $\tau_i = \exp\big(-\sum_{j<i} \sigma(\mathbf{x}^j_c)\,\delta^j\big)\big(1 - \exp(-\sigma(\mathbf{x}^i_c)\,\delta^i)\big)$, (10) where $\delta^i$ is the distance between two adjacent samples. The accumulated alpha value of a pixel, which represents the ray opacity, is then obtained as $\alpha^H(\mathbf{r}) = \sum_{i=1}^{N} \tau_i$. Scene Composition. To attain the final pixel value for a ray $\mathbf{r}$, we raycast the human and background volumes separately, followed by a scene compositing step. Using the parameterization of the background, we sample along the ray $\mathbf{r}$ to obtain sample points in the outer volume for which we query $f^B$. The background component of a pixel is then given by the integrated color value $C^B(\mathbf{r})$ along the ray [35]. More details can be found in the Supp. Mat. The final pixel color is the composite of the foreground and background colors: $C(\mathbf{r}) = C^H(\mathbf{r}) + (1 - \alpha^H(\mathbf{r}))\, C^B(\mathbf{r})$. (11) 3.3. Scene Decomposition Objectives Learning to decompose the scene into a dynamic human and a background by simply minimizing the distance between the composited pixel value and the image RGB value is still a severely ill-posed problem. This is due to the potentially moving scene, dynamic shadows, and general visual complexity. To this end, we propose two objectives that guide the optimization towards a clean and robust decoupling of the human from the background. Opacity Sparseness Regularization. One of the key components of our method is a loss $\mathcal{L}_{sparse}$ that regularizes the ray opacity via the dynamically updated human shape in canonical space. We first warp sampled points into canonical space and calculate the signed distance to the human shape. We then penalize non-zero ray opacities for rays that do not intersect with the subject. This ray set is denoted as $\mathcal{R}^i_{off}$ for frame $i$: $\mathcal{L}^i_{sparse} = \frac{1}{|\mathcal{R}^i_{off}|} \sum_{\mathbf{r} \in \mathcal{R}^i_{off}} |\alpha^H(\mathbf{r})|$. (12) Note that we conservatively update the SDF of the human shape throughout the whole training process, which leads to a precise association of human and background rays. Self-supervised Ray Classification. Even with the shape regularization from Eq. 12, we observe that the human fields still tend to model parts of the background due to the flexibility and expressive power of MLPs, especially if the subject is in contact with the scene. To further delineate the dynamic foreground and the background, inspired by [33], we introduce an additional loss term to encourage ray distributions that contain either fully transparent or fully opaque rays: $\mathcal{L}^i_{BCE} = -\frac{1}{|\mathcal{R}^i|} \sum_{\mathbf{r} \in \mathcal{R}^i} \big(\alpha^H(\mathbf{r}) \log(\alpha^H(\mathbf{r})) + (1 - \alpha^H(\mathbf{r})) \log(1 - \alpha^H(\mathbf{r}))\big)$, (13) where $\mathcal{R}^i$ denotes the sampled rays for frame $i$. This term penalizes deviations of the ray opacities from a binary $\{0, 1\}$ distribution via the binary cross-entropy loss.
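Pulling Eqs. (8)–(13) together, the following PyTorch-style sketch shows the SDF-to-density conversion, the foreground rendering weights, the compositing step, and the two decomposition objectives (a simplified illustration with hypothetical tensor shapes, not the authors' implementation).

import torch

def sdf_to_density(sdf, alpha, beta):
    # Eq. (8): scaled Laplace CDF applied to the negated SDF.
    xi = -sdf
    return alpha * (0.5 + 0.5 * torch.sign(xi) * (1 - torch.exp(-xi.abs() / beta)))

def render_foreground(rgb, sigma, delta):
    # rgb: (R, N, 3); sigma, delta: (R, N) samples along R rays.
    transmittance = torch.exp(-torch.cumsum(sigma * delta, dim=-1))
    transmittance = torch.cat([torch.ones_like(transmittance[:, :1]),
                               transmittance[:, :-1]], dim=-1)     # exclusive product, Eq. (10)
    tau = transmittance * (1 - torch.exp(-sigma * delta))
    c_h = (tau.unsqueeze(-1) * rgb).sum(dim=1)                     # Eq. (9)
    alpha_h = tau.sum(dim=-1)                                      # ray opacity alpha_H(r)
    return c_h, alpha_h

def composite(c_h, alpha_h, c_b):
    return c_h + (1 - alpha_h.unsqueeze(-1)) * c_b                 # Eq. (11)

def decomposition_losses(alpha_h, off_mask, eps=1e-5):
    # off_mask: boolean mask of rays that do not intersect the canonical shape.
    a = alpha_h.clamp(eps, 1 - eps)                                # clamp for numerical stability
    l_sparse = alpha_h[off_mask].abs().mean()                      # Eq. (12)
    l_bce = -(a * a.log() + (1 - a) * (1 - a).log()).mean()        # Eq. (13)
    return l_sparse, l_bce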
Intuitively this encourages the