bibtex_url | proceedings | bibtext | abstract | title | authors | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | old_Models | old_Datasets | old_Spaces | paper_page_exists_pre_conf |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=kBfPB0i98K | @inproceedings{
wen2024dualoptimized,
title={Dual-Optimized Adaptive Graph Reconstruction for Multi-View Graph Clustering},
author={Zichen Wen and Tianyi Wu and Yazhou Ren and Yawen Ling and Chenhang Cui and Xiaorong Pu and Lifang He},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=kBfPB0i98K}
} | Multi-view clustering is an important machine learning task for multi-media data, encompassing various domains such as images, videos, and texts. Moreover, with the growing abundance of graph data, the significance of multi-view graph clustering (MVGC) has become evident. Most existing methods focus on graph neural networks (GNNs) to extract information from both graph structure and feature data to learn distinguishable node representations. However, traditional GNNs are designed with the assumption of homophilous graphs, making them unsuitable for widely prevalent heterophilous graphs. Several techniques have been introduced to enhance GNNs for heterophilous graphs. While these methods partially mitigate the heterophilous graph issue, they often neglect the advantages of traditional GNNs, such as their simplicity, interpretability, and efficiency. In this paper, we propose a novel multi-view graph clustering method based on dual-optimized adaptive graph reconstruction, named DOAGC. It mainly aims to reconstruct the graph structure adapted to traditional GNNs to deal with heterophilous graph issues while maintaining the advantages of traditional GNNs. Specifically, we first develop an adaptive graph reconstruction mechanism that accounts for node correlation and original structural information. To further optimize the reconstruction graph, we design a dual optimization strategy and demonstrate the feasibility of our optimization strategy through mutual information theory. Numerous experiments demonstrate that DOAGC effectively mitigates the heterophilous graph problem. | Dual-Optimized Adaptive Graph Reconstruction for Multi-View Graph Clustering | [
"Zichen Wen",
"Tianyi Wu",
"Yazhou Ren",
"Yawen Ling",
"Chenhang Cui",
"Xiaorong Pu",
"Lifang He"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=k7T9PYFm0l | @inproceedings{
lu2024d,
title={3D Priors-Guided Diffusion for Blind Face Restoration},
author={Xiaobin Lu and Xiaobin Hu and Jun Luo and zhuben and paulruan and Wenqi Ren},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=k7T9PYFm0l}
} | Blind face restoration aims to restore a sharp face image from a degraded counterpart. Recent methods using GANs as priors have achieved many successful stories in this domain. However, these methods still struggle to balance realism and fidelity when facing complex degradation scenarios. In this paper, we propose a novel framework by embedding 3D facial priors into a denoising diffusion model, enabling the extraction of facial structure and identity information from 3D facial images. Specifically, the downgraded image undergoes initial processing through a pre-trained restoration network to obtain an incompletely restored face image. This image is then fed into the 3D Morphable Model (3DMM) to reconstruct a 3D facial image. During the denoising process, the structural and identity information is extracted from the 3D prior image using a multi-level feature extraction module. Given that the denoising process of the diffusion model primarily involves initial structure refinement followed by texture detail enhancement, we propose a time-aware fusion block (TAFB). This module can provide more effective fusion information for denoising as the time step changes. Extensive experiments demonstrate that our network performs favorably against state-of-the-art algorithms on synthetic and real-world datasets for blind face restoration. | 3D Priors-Guided Diffusion for Blind Face Restoration | [
"Xiaobin Lu",
"Xiaobin Hu",
"Jun Luo",
"zhuben",
"paulruan",
"Wenqi Ren"
] | Conference | poster | 2409.00991 | [
"https://github.com/838143396/3Diffusion"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=k7CMxauxVK | @inproceedings{
wu2024coast,
title={CoAst: Validation-Free Contribution Assessment for Federated Learning based on Cross-Round Valuation},
author={Hao Wu and Likun Zhang and Shucheng Li and Fengyuan Xu and Sheng Zhong},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=k7CMxauxVK}
} | In the federated learning (FL) process, since the data held by each participant is different,
it is necessary to figure out which participant has a higher contribution to the model performance.
Effective contribution assessment can help motivate data owners to participate in the FL training.
The research work in this field can be divided into two directions based on whether a validation dataset is required.
Validation-based methods need to use representative validation data to measure the model accuracy, which is difficult to obtain in practical FL scenarios.
Existing validation-free methods assess the contribution based on the parameters and gradients of local models and the global model in a single training round, which is easily compromised by the stochasticity of DL training.
In this work, we propose CoAst, a practical method to assess the FL participants' contribution without access to any validation data.
The core idea of CoAst involves two aspects: one is to only count the most important part of model parameters through a weights quantization, and the other is a cross-round valuation based on the similarity between the current local parameters and the global parameter updates in several subsequent communication rounds.
Extensive experiments show that the assessment reliability of CoAst is comparable to existing validation-based methods and outperforms existing validation-free methods.
We believe that CoAst will inspire the community to study a new FL paradigm with an inherent contribution assessment. | CoAst: Validation-Free Contribution Assessment for Federated Learning based on Cross-Round Valuation | [
"Hao Wu",
"Likun Zhang",
"Shucheng Li",
"Fengyuan Xu",
"Sheng Zhong"
] | Conference | poster | 2409.02495 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=k4GpJC8YHR | @inproceedings{
xia2024viactvideoenhanced,
title={Vi2{ACT}:Video-enhanced Cross-modal Co-learning with Representation Conditional Discriminator for Few-shot Human Activity Recognition},
author={Kang Xia and Wenzhong Li and Yimiao Shao and Sanglu Lu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=k4GpJC8YHR}
} | Human Activity Recognition (HAR) as an emerging research field has attracted widespread academic attention due to its wide range of practical applications in areas such as healthcare, environmental monitoring, and sports training. Given the high cost of annotating sensor data, many unsupervised and semi-supervised methods have been applied to HAR to alleviate the problem of limited data. In this paper, we propose a novel video-enhanced cross-modal collaborative learning method, Vi2ACT, to address the issue of few-shot HAR. We introduce a new data augmentation approach that utilizes a text-to-video generation model to generate class-related videos. Subsequently, a large quantity of video semantic representations are obtained through fine-tuning the video encoder for cross-modal co-learning. Furthermore, to effectively align video semantic representations and time series representations, we enhance HAR at the representation-level using conditional Generative Adversarial Nets (cGAN). We design a novel Representation Conditional Discriminator that is trained to assess samples as originating from video representations rather than those generated by the time series encoder as accurately as possible. We conduct extensive experiments on four commonly used HAR datasets. The experimental results demonstrate that our method outperforms other baseline models in all few-shot scenarios. | Vi2ACT:Video-enhanced Cross-modal Co-learning with Representation Conditional Discriminator for Few-shot Human Activity Recognition | [
"Kang Xia",
"Wenzhong Li",
"Yimiao Shao",
"Sanglu Lu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=jwuX7LktIH | @inproceedings{
lu2024hybridflow,
title={HybridFlow: Infusing Continuity into Masked Codebook for Extreme Low-Bitrate Image Compression},
author={Lei Lu and Yanyue Xie and Wei Jiang and Wei Wang and Xue Lin and Yanzhi Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=jwuX7LktIH}
} | This paper investigates the challenging problem of learned image compression (LIC) with extremely low bitrates. Previous LIC methods based on transmitting quantized continuous features often yield blurry and noisy reconstruction due to the severe quantization loss, while previous LIC methods based on learned codebooks that discretize visual space usually give poor-fidelity reconstruction due to the insufficient representation power of limited codewords in capturing faithful details. We propose a novel dual-stream framework, HybridFlow, which combines the continuous-feature-based and codebook-based streams to achieve both high perceptual quality and high fidelity under extremely low bitrates. The codebook-based stream benefits from the high-quality learned codebook priors to provide high quality and clarity in reconstructed images. The continuous feature stream aims to maintain fidelity details. To achieve the ultra-low bitrate, a masked token-based transformer is further proposed, where we only transmit a masked portion of codeword indices and recover the missing indices through token generation guided by information from the continuous feature stream. We also develop a bridging correction network to merge the two streams in pixel decoding for final image reconstruction, where the continuous stream features rectify biases of the codebook-based pixel decoder to impose reconstructed fidelity details. Experimental results demonstrate superior performance across several datasets under extremely low bitrates, compared with existing single-stream codebook-based or continuous-feature-based LIC methods. | HybridFlow: Infusing Continuity into Masked Codebook for Extreme Low-Bitrate Image Compression | [
"Lei Lu",
"Yanyue Xie",
"Wei Jiang",
"Wei Wang",
"Xue Lin",
"Yanzhi Wang"
] | Conference | poster | 2404.13372 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=juMYrkJlV3 | @inproceedings{
li2024gslam,
title={{GS}\${\textasciicircum}\{3\}\${LAM}: Gaussian Semantic Splatting {SLAM}},
author={Linfei Li and Lin Zhang and Zhong Wang and Ying Shen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=juMYrkJlV3}
} | Recently, the multi-modal fusion of RGB, depth, and semantics has shown great potential in the domain of dense Simultaneous Localization and Mapping (SLAM), also known as dense semantic SLAM. Yet a prerequisite for generating consistent and continuous semantic maps is the availability of dense, efficient, and scalable scene representations. To date, existing semantic SLAM systems based on explicit scene representations (points/meshes/surfels) are limited by their resolution and inability to predict unknown areas, thus failing to generate dense maps. In contrast, the few implicit scene representations (Neural Radiance Fields) that deal with these problems rely on the time-consuming ray tracing-based volume rendering technique, which cannot meet the real-time rendering requirements of SLAM. Fortunately, the Gaussian Splatting scene representation has recently emerged, which inherits the efficiency and scalability of point/surfel representations while smoothly representing geometric structures in a continuous manner, showing promise in addressing the aforementioned challenges. To this end, we propose $\textbf{GS$^3$LAM}$, a $\textbf{G}$aussian $\textbf{S}$emantic $\textbf{S}$platting $\textbf{SLAM}$ framework, which takes multimodal data as input and can render consistent, continuous dense semantic maps in real-time. To fuse multimodal data, GS$^3$LAM models the scene as a Semantic Gaussian Field (SG-Field), and jointly optimizes camera poses and the field by establishing error constraints between observed and predicted data. Furthermore, a Depth-adaptive Scale Regularization (DSR) scheme is proposed to tackle the problem of misalignment between scale-invariant Gaussians and geometric surfaces within the SG-Field. To mitigate the forgetting phenomenon, we propose an effective Random Sampling-based Keyframe Mapping (RSKM) strategy, which exhibits notable superiority over local covisibility optimization strategies commonly utilized in 3DGS-based SLAM systems. Extensive experiments conducted on the benchmark datasets reveal that compared with state-of-the-art competitors, GS$^3$LAM demonstrates increased tracking robustness, superior real-time rendering quality, and enhanced semantic reconstruction precision. To make the results reproducible, the source code will be publicly released. | GS^3LAM: Gaussian Semantic Splatting SLAM | [
"Linfei Li",
"Lin Zhang",
"Zhong Wang",
"Ying Shen"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=jttrL7wHLC | @inproceedings{
shi2024passion,
title={{PASSION}: Towards Effective Incomplete Multi-Modal Medical Image Segmentation with Imbalanced Missing Rates},
author={Junjie Shi and Caozhi Shang and Zhaobin Sun and Li Yu and Xin Yang and Zengqiang Yan},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=jttrL7wHLC}
} | Incomplete multi-modal image segmentation is a fundamental task in medical imaging to refine deployment efficiency when only partial modalities are available. However, the common practice that complete-modality data is visible during model training is far from realistic, as modalities can have imbalanced missing rates in clinical scenarios. In this paper, we, for the first time, formulate such a challenging setting and propose Preference-Aware Self-diStillatION (PASSION) for incomplete multi-modal medical image segmentation under imbalanced missing rates. Specifically, we first construct pixel-wise and semantic-wise self-distillation to balance the optimization objective of each modality. Then, we define relative preference to evaluate the dominance of each modality during training, based on which to design task-wise and gradient-wise regularization to balance the convergence rates of different modalities. Experimental results on two publicly available multi-modal datasets demonstrate the superiority of PASSION against existing approaches for modality balancing. More importantly, PASSION is validated to work as a plug-and-play module for consistent performance improvement across different backbones. Code will be available upon acceptance. | PASSION: Towards Effective Incomplete Multi-Modal Medical Image Segmentation with Imbalanced Missing Rates | [
"Junjie Shi",
"Caozhi Shang",
"Zhaobin Sun",
"Li Yu",
"Xin Yang",
"Zengqiang Yan"
] | Conference | oral | 2407.14796 | [
"https://github.com/jun-jie-shi/passion"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=jnQFcUU9Bw | @inproceedings{
zhou2024stealthdiffusion,
title={StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model},
author={Ziyin Zhou and Ke Sun and Zhongxi Chen and Huafeng Kuang and Xiaoshuai Sun and Rongrong Ji},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=jnQFcUU9Bw}
} | The rapid progress in generative models has given rise to the critical task of AI-Generated Content Stealth (AIGC-S), which aims to create AI-generated images that can evade both forensic detectors and human inspection. This task is crucial for understanding the vulnerabilities of existing detection methods and developing more robust techniques. However, current adversarial attacks often introduce visible noise, have poor transferability, and fail to address spectral differences between AI-generated and genuine images.
To address this, we propose StealthDiffusion, a framework based on stable diffusion that modifies AI-generated images into high-quality, imperceptible adversarial examples capable of evading state-of-the-art forensic detectors. StealthDiffusion comprises two main components: Latent Adversarial Optimization, which generates adversarial perturbations in the latent space of stable diffusion, and Control-VAE, a module that reduces spectral differences between the generated adversarial images and genuine images without affecting the original diffusion model's generation process. Extensive experiments demonstrate the effectiveness of StealthDiffusion in both white-box and black-box settings, transforming AI-generated images into higher-quality adversarial forgeries with frequency spectra resembling genuine images. These images are classified as genuine by state-of-the-art forensic classifiers and are difficult for humans to distinguish. | StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model | [
"Ziyin Zhou",
"Ke Sun",
"Zhongxi Chen",
"Huafeng Kuang",
"Xiaoshuai Sun",
"Rongrong Ji"
] | Conference | poster | 2408.05669 | [
"https://github.com/wyczzy/stealthdiffusion"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=jn5NaOwOsH | @inproceedings{
wu2024decoupling,
title={Decoupling General and Personalized Knowledge in Federated Learning via Additive and Low-rank Decomposition},
author={Xinghao Wu and Xuefeng Liu and Jianwei Niu and Haolin Wang and Shaojie Tang and Guogang Zhu and Hao Su},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=jn5NaOwOsH}
} | To address data heterogeneity, the key strategy of personalized Federated Learning (PFL) is to decouple general knowledge (shared among clients) and client-specific knowledge, as the latter can have a negative impact on collaboration if not removed. Existing PFL methods primarily adopt a parameter partitioning approach, where the parameters of a model are designated as one of two types: parameters shared with other clients to extract general knowledge and parameters retained locally to learn client-specific knowledge. However, as these two types of parameters are put together like a jigsaw puzzle into a single model during the training process, each parameter may simultaneously absorb both general and client-specific knowledge, thus struggling to separate the two types of knowledge effectively. In this paper, we introduce FedDecomp, a simple but effective PFL paradigm that employs parameter additive decomposition to address this issue. Instead of assigning each parameter of a model as either a shared or personalized one, FedDecomp decomposes each parameter into the sum of two parameters: a shared one and a personalized one, thus achieving a more thorough decoupling of shared and personalized knowledge compared to the parameter partitioning method. In addition, as we find that retaining local knowledge of specific clients requires much lower model capacity compared with general knowledge across all clients, we let the matrix containing personalized parameters be low rank during the training process. Moreover, a new alternating training strategy is proposed to further improve the performance. Experimental results across multiple datasets and varying degrees of data heterogeneity demonstrate that FedDecomp outperforms state-of-the-art methods up to 4.9\%. | Decoupling General and Personalized Knowledge in Federated Learning via Additive and Low-rank Decomposition | [
"Xinghao Wu",
"Xuefeng Liu",
"Jianwei Niu",
"Haolin Wang",
"Shaojie Tang",
"Guogang Zhu",
"Hao Su"
] | Conference | poster | 2406.19931 | [
"https://github.com/xinghaowu/feddecomp"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=jdgbGUGCDw | @inproceedings{
wang2024live,
title={Live On the Hump: Self Knowledge Distillation via Virtual Teacher-Students Mutual Learning},
author={Shuang Wang and Pengyi Hao and Fuli Wu and Cong Bai},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=jdgbGUGCDw}
} | To address the limitations of current self knowledge distillation, namely never fully utilizing the knowledge of shallow exits and neglecting the impact of the auxiliary exits' structure on network performance, a novel self knowledge distillation framework via virtual teacher-students mutual learning, named LOTH, is proposed in this paper. A knowledgeable virtual teacher is constructed from the rich feature maps of each exit to help the learning of each exit. Meanwhile, the logit knowledge of each exit is incorporated to guide the learning of the virtual teacher. They learn mutually through the well-designed loss in LOTH. Moreover, two kinds of auxiliary building blocks are designed to balance the efficiency and effectiveness of the network. Extensive experiments with diverse backbones on CIFAR-100 and Tiny-ImageNet validate the effectiveness of LOTH, which realizes superior performance with fewer resources compared with state-of-the-art distillation methods. The code of LOTH is available on GitHub. | Live On the Hump: Self Knowledge Distillation via Virtual Teacher-Students Mutual Learning | [
"Shuang Wang",
"Pengyi Hao",
"Fuli Wu",
"Cong Bai"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=jcUAyYC1Pu | @inproceedings{
chen2024disenstudio,
title={DisenStudio: Customized Multi-subject Text-to-Video Generation with Disentangled Spatial Control},
author={Hong Chen and Xin Wang and Yipeng Zhang and Yuwei Zhou and Zeyang Zhang and Siao Tang and Wenwu Zhu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=jcUAyYC1Pu}
} | Generating customized content in videos has received increasing attention recently. However, existing works primarily focus on customized text-to-video generation for single subject, suffering from subject-missing and attribute-binding problems when being applied to multiple subjects. Furthermore, existing models struggle to assign the desired actions to the corresponding subjects (action-binding problem), failing to achieve satisfactory multi-subject generation performance. To tackle the problems, in this paper, we propose DisenStudio, a novel framework that can generate text-guided videos for customized multiple subjects, given few images for each subject. Specifically, DisenStudio enhances a pretrained diffusion-based text-to-video model with our proposed spatial-disentangled cross-attention mechanism to associate each subject with the desired action. Then the pretrained model is customized for the multiple subjects with the proposed motion-preserved disentangled finetuning, which involves three tuning strategies: multi-subject co-occurrence tuning, masked single-subject tuning, and multi-subject motion-preserved tuning. The first two strategies guarantee the subject occurrence and preserve their visual attributes, and the third strategy helps the model to maintain the temporal motion-generation ability when finetuning on static images. We conduct extensive experiments to demonstrate that our proposed DisenStudio significantly outperforms existing methods in various metrics. Additionally, we show that DisenStudio can be used as a powerful tool for various controllable generation applications. | DisenStudio: Customized Multi-subject Text-to-Video Generation with Disentangled Spatial Control | [
"Hong Chen",
"Xin Wang",
"Yipeng Zhang",
"Yuwei Zhou",
"Zeyang Zhang",
"Siao Tang",
"Wenwu Zhu"
] | Conference | poster | 2405.12796 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=jbnEOzIc8X | @inproceedings{
zhang2024modalitybalanced,
title={Modality-Balanced Learning for Multimedia Recommendation},
author={Jinghao Zhang and Guofan Liu and Qiang Liu and Shu Wu and Liang Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=jbnEOzIc8X}
} | Multimedia content is of predominance in the modern Web era. Many recommender models have been proposed to investigate how to incorporate multimodal content information into traditional collaborative filtering framework effectively. The use of multimodal information is expected to provide more comprehensive information and lead to superior performance. However, the integration of multiple modalities often encounters the modal imbalance problem: since the information in different modalities is unbalanced, optimizing the same objective across all modalities leads to the under-optimization problem of the weak modalities with a slower convergence rate or lower performance. Even worse, we find that in multimodal recommendation models, all modalities suffer from the problem of insufficient optimization.
To address these issues, we propose a Counterfactual Knowledge Distillation (CKD) method which could solve the imbalance problem and make the best use of all modalities. Through modality-specific knowledge distillation, CKD could guide the multimodal model to learn modality-specific knowledge from uni-modal teachers. We also design a novel generic-and-specific distillation loss to guide the multimodal student to learn wider-and-deeper knowledge from teachers. Additionally, to adaptively recalibrate the focus of the multimodal model towards weaker modalities during training, we estimate the causal effect of each modality on the training objective using counterfactual inference techniques, through which we could determine the weak modalities, quantify the imbalance degree and re-weight the distillation loss accordingly.
Our method could serve as a plug-and-play module for both late-fusion and early-fusion backbones. Extensive experiments on six backbones show that our proposed method can improve the performance by a large margin. | Modality-Balanced Learning for Multimedia Recommendation | [
"Jinghao Zhang",
"Guofan Liu",
"Qiang Liu",
"Shu Wu",
"Liang Wang"
] | Conference | oral | 2408.06360 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=jb5zwjYO33 | @inproceedings{
ko2024referencebased,
title={Reference-based Burst Super-resolution},
author={Seonggwan Ko and Yeong Jun Koh and Donghyeon Cho},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=jb5zwjYO33}
} | Burst super-resolution (BurstSR) utilizes signal information from multiple adjacent frames successively taken to restore rich textures. However, due to hand tremors and other image degradation factors, even recent BurstSR methods struggle to reconstruct finely textured images. On the other hand, reference-based super-resolution (RefSR) leverages the high-fidelity reference (Ref) image to recover detailed contents. Nevertheless, if there is no correspondence between the Ref and the low-resolution (LR) image, the degraded output is derived. To overcome the limitations of existing BurstSR and RefSR methods, we newly introduce a reference-based burst super-resolution (RefBSR) that utilizes burst frames and a high-resolution (HR) external Ref image. The RefBSR can restore the HR image by properly fusing the benefits of burst frames and a Ref image. To this end, we propose the first RefBSR framework that consists of Ref-burst feature matching and burst feature-aware Ref texture transfer (BRTT) modules. In addition, our method adaptively integrates features with better quality between Ref and burst features using Ref-burst adaptive feature fusion (RBAF). To train and evaluate our method, we provide a new dataset of Ref-burst pairs collected by commercial smartphones. The proposed method achieves state-of-the-art performance compared to both existing RefSR and BurstSR methods, and we demonstrate its effectiveness through comprehensive experiments. The source codes and the newly constructed dataset will be made publicly available for further research. | Reference-based Burst Super-resolution | [
"Seonggwan Ko",
"Yeong Jun Koh",
"Donghyeon Cho"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=jUeqWFsK1p | @inproceedings{
liu2024heromaker,
title={HeroMaker: Human-centric Video Editing with Motion Priors},
author={Shiyu Liu and Zibo Zhao and Yihao Zhi and Yiqun Zhao and Binbin Huang and Shuo Wang and Ruoyu Wang and Michael Xuan and Zhengxin Li and Shenghua Gao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=jUeqWFsK1p}
} | Video generation and editing, particularly human-centric video editing, has seen a surge of interest in its potential to create immersive and dynamic content. A fundamental challenge is ensuring temporal coherence and visual harmony across frames, especially in handling large-scale human motion and maintaining consistency over long sequences. The previous methods, such as diffusion-based video editing, struggle with flickering and length limitations. In contrast, methods employing Video-2D representations grapple with accurately capturing complex structural relationships in large-scale human motion. Simultaneously, some patterns on the human body appear intermittently throughout the video, posing a knotty problem in identifying visual correspondence. To address the above problems, we present HeroMaker. This human-centric video editing framework manipulates the person's appearance within the input video and achieves inter-frame consistent results. Specifically, we propose to learn the motion priors, transformations from dual canonical fields to each video frame, by leveraging the body mesh-based human motion warping and neural deformation-based margin refinement in the video reconstruction framework to ensure the semantic correctness of canonical fields. HeroMaker performs human-centric video editing by manipulating the dual canonical fields and combining them with motion priors to synthesize temporally coherent and visually plausible results. Comprehensive experiments demonstrate that our approach surpasses existing methods regarding temporal consistency, visual quality, and semantic coherence. | HeroMaker: Human-centric Video Editing with Motion Priors | [
"Shiyu Liu",
"Zibo Zhao",
"Yihao Zhi",
"Yiqun Zhao",
"Binbin Huang",
"Shuo Wang",
"Ruoyu Wang",
"Michael Xuan",
"Zhengxin Li",
"Shenghua Gao"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=jTtfDitRAt | @inproceedings{
zhang2024poisoning,
title={Poisoning for Debiasing: Fair Recognition via Eliminating Bias Uncovered in Data Poisoning},
author={Yi Zhang and Zhefeng Wang and Rui Hu and Xinyu Duan and Yi ZHENG and Baoxing Huai and Jiarun Han and Jitao Sang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=jTtfDitRAt}
} | Neural networks often tend to rely on bias features that have strong but spurious correlations with the target labels for decision-making, leading to poor performance on data that does not adhere to these correlations. Early debiasing methods typically construct an unbiased optimization objective based on the labels of bias features. Recent work assumes that bias label is unavailable and usually trains two models: a biased model to deliberately learn bias features for exposing data bias, and a target model to eliminate bias captured by the bias model. In this paper, we first reveal that previous biased models fit target labels, which resulted in failing to expose data bias. To tackle this issue, we propose poisoner, which utilizes data poisoning to embed the biases learned by biased models into the poisoned training data, thereby encouraging the models to learn more biases. Specifically, we couple data poisoning and model training to continuously prompt the biased model to learn more bias. By utilizing the biased model, we can identify samples in the data that contradict these biased correlations. Subsequently, we amplify the influence of these samples in the training of the target model to prevent the model from learning such biased correlations. Experiments show the superior debiasing performance of our method. | Poisoning for Debiasing: Fair Recognition via Eliminating Bias Uncovered in Data Poisoning | [
"Yi Zhang",
"Zhefeng Wang",
"Rui Hu",
"Xinyu Duan",
"Yi ZHENG",
"Baoxing Huai",
"Jiarun Han",
"Jitao Sang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=jTPIH2kN1B | @inproceedings{
li2024crossmodal,
title={Cross-modal Observation Hypothesis Inference},
author={Mengze Li and Kairong Han and Jiahe Xu and Yueying Li and Tao Wu and Zhou Zhao and Jiaxu Miao and Shengyu Zhang and Jingyuan Chen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=jTPIH2kN1B}
} | Hypothesis inference, a sophisticated cognitive process that allows humans to construct plausible explanations for incomplete observations, is paramount to our ability to make sense of the world around us. Despite the universality of this skill, it remains under-explored within the context of multi-modal AI, which necessitates analyzing observation, recalling information in the mind, and generating explanations. In this work, we propose the Cross-modal Observation hypothesIs iNference task (COIN). Given a textual description of a partially observed event, COIN strives to recall the most probable event from the visual mind (video pool), and infer the subsequent action flow connecting the visual mind event and the observed textural event. To advance the development of this field, we propose a large-scale text-video dataset, Tex-COIN, that contains 39,796 meticulously annotated hypothesis inference examples and auxiliary commonsense knowledge (appearance, clothing, action, etc.) for key video characters. Based on the proposed Tex-COIN dataset, we design a strong baseline, COINNet, which features two perspectives: 1) aligning temporally displaced textual observations with target videos via transformer-based multi-task learning, and 2) inferring the action flow with non-parametric graph-based inference grounded in graph theory. Extensive experiments on the Tex-COIN dataset validate the effectiveness of our COINNet by significantly outperforming the state-of-the-arts. | Cross-modal Observation Hypothesis Inference | [
"Mengze Li",
"Kairong Han",
"Jiahe Xu",
"Yueying Li",
"Tao Wu",
"Zhou Zhao",
"Jiaxu Miao",
"Shengyu Zhang",
"Jingyuan Chen"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=jPpK9RzWvh | @inproceedings{
xue2024fewshot,
title={Few-Shot Multimodal Explanation for Visual Question Answering},
author={Dizhan Xue and Shengsheng Qian and Changsheng Xu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=jPpK9RzWvh}
} | A key objective in eXplainable Artificial Intelligence (XAI) is to create intelligent systems capable of reasoning and explaining real-world data to facilitate reliable decision-making. Recent studies have acknowledged the importance of providing user-friendly and verifiable explanations to facilitate trustworthy Visual Question Answering (VQA) systems. This paper aims to promote explainable VQA from both data and method perspectives. First, we propose a new Standard Multimodal Explanation (SME) dataset and a new Few-Shot Multimodal Explanation for VQA (FS-MEVQA) task, which aims to generate the multimodal explanation of the underlying reasoning process for solving visual questions with few training samples. Our SME dataset includes 1,028,230 samples composed of questions, images, answers, and multimodal explanations, which can facilitate the research in both traditional MEVQA and FS-MEVQA. To the best of our knowledge, this is the first large-scale dataset with joint language-vision explanations based on standard English and additional visual grounding tokens, which bridge MEVQA to a broad field in Natural Language Processing (NLP). Second, we propose a training-free Multimodal Explaining Agent (MEAgent) method based on an LLM agent with multimodal open-world tools to infer answers and generate multimodal explanations for visual questions. Our MEAgent can learn multimodal explaining from merely $N(=16)$ training samples and leverage open-world abilities to perform FS-MEVQA on test samples. Comprehensive experimental results evaluated by language quality metrics, visual detection metric, and visual attribution metrics on our SME dataset indicate the superiority of our method for FS-MEVQA, compared to state-of-the-art MEVQA methods and the multimodal LLM GPT-4V. | Few-Shot Multimodal Explanation for Visual Question Answering | [
"Dizhan Xue",
"Shengsheng Qian",
"Changsheng Xu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=jNKZRGPlIr | @inproceedings{
zhang2024overcoming,
title={Overcoming the Pitfalls of Vision-Language Model for Image-Text Retrieval},
author={Feifei Zhang and Sijia Qu and Fan Shi and Changsheng Xu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=jNKZRGPlIr}
} | This work tackles the persistent challenge of image-text retrieval, a key problem at the intersection of computer vision and natural language processing. Despite significant advancements facilitated by large-scale Contrastive Language-Image Pretraining (CLIP) models, we found that existing methods fall short in bridging the fine-grained semantic gap between visual and textual representations, particularly in capturing the nuanced interplay of local visual details and the textual descriptions. To address the above challenges, we propose a general framework called Local and Generative-driven Modality Gap Correction (LG-MGC), which devotes to simultaneously enhancing representation learning and alleviating the modality gap in cross-modal retrieval. Specifically, the proposed model consists of two main components: a local-driven semantic completion module, which complements specific local context information that overlooked by traditional models within global features, and a generative-driven semantic translation module, which leverages generated features as a bridge to mitigate the modality gap. This framework not only tackles the granularity of semantic correspondence and improves the performance of existing methods without requiring additional trainable parameters, but is also designed to be plug-and-play, allowing for easy integration into existing retrieval models without altering their architectures. Extensive qualitative and quantitative experiments demonstrate the effectiveness of LG-MGC by achieving consistent state-of-the-art performance over strong baselines. \emph{\color{magenta}The code is included in the supplementary material.} | Overcoming the Pitfalls of Vision-Language Model for Image-Text Retrieval | [
"Feifei Zhang",
"Sijia Qu",
"Fan Shi",
"Changsheng Xu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=jNI2r9BGVC | @inproceedings{
wang2024dpcpnet,
title={3{DPCP}-Net: A Lightweight Progressive 3D Correspondence Pruning Network for Accurate and Efficient Point Cloud Registration},
author={Jingtao Wang and Zechao Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=jNI2r9BGVC}
} | Accurately identifying correct correspondences (inliers) within the initial ones is pivotal for robust feature-based point cloud registration. Current methods typically rely on one-shot 3D correspondence classification with a single coherence constraint to obtain inliers. These approaches are either insufficiently accurate or inefficient, often requiring more network parameters. To address this issue, we propose a lightweight network, 3DPCP-Net, for fast and robust registration. Its core design lies in progressive correspondence pruning through mining deep spatial geometric coherence, which can effectively learn pairwise 3D spatial distance and angular features to progressively remove outliers (mismatched correspondences) for accurate pose estimation. Moreover, we also propose an efficient feature-based hypothesis proposer that leverages the geometric consistency features to explicitly generate reliable model hypotheses for each reliable correspondence. Extensive experiments on 3DMatch, 3DLoMatch, KITTI and Augmented ICL-NUIM demonstrate the accuracy and efficiency of our method for outlier removal and pose estimation tasks. Furthermore, our method is highly versatile and can be easily integrated into both learning-based and geometry-based frameworks, enabling them to achieve state-of-the-art results. The code is provided in the supplementary materials. | 3DPCP-Net: A Lightweight Progressive 3D Correspondence Pruning Network for Accurate and Efficient Point Cloud Registration | [
"Jingtao Wang",
"Zechao Li"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=jLJ3htNxVX | @inproceedings{
ge2024consistencies,
title={Consistencies are All You Need for Semi-supervised Vision-Language Tracking},
author={Jiawei Ge and Jiuxin Cao and Xuelin Zhu and Xinyu Zhang and Chang Liu and Kun Wang and Bo Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=jLJ3htNxVX}
} | Vision-Language Tracking (VLT) requires locating a specific target in video sequences, given a natural language prompt and an initial object box. Despite recent advancements, existing approaches heavily rely on expensive and time-consuming human annotations. To mitigate this limitation, directly generating pseudo labels from raw videos seems to be a straightforward solution; however, it inevitably introduces undesirable noise during the training process. Moreover, we insist that an efficient tracker should excel in tracking the target, regardless of the temporal direction. Building upon these insights, we propose the pioneering semi-supervised learning scheme for VLT task, representing a crucial step towards reducing the dependency on high-quality yet costly labeled data. Specifically, drawing inspiration from the natural attributes of a video (i.e., space, time, and semantics), our approach progressively leverages inherent consistencies from these aspects: (1) Spatially, each frame and any object cropped from it naturally form an image-bbox (bounding box) pair for self-training; (2) Temporally, bidirectional tracking trajectories should exhibit minimal differences; (3) Semantically, the correlation between visual and textual features is expected to remain consistent. Furthermore, the framework is validated with a simple yet effective tracker we devised, named ATTracker (Asymmetrical Transformer Tracker). It modifies the self-attention operation in an asymmetrical way, striving to enhance target-related features while suppressing noise. Extensive experiments confirm that our ATTracker serves as a robust baseline, outperforming fully supervised base trackers. By unveiling the potential of learning with limited annotations, this study aims to attract attention and pave the way for Semi-supervised Vision-Language Tracking (SS-VLT). | Consistencies are All You Need for Semi-supervised Vision-Language Tracking | [
"Jiawei Ge",
"Jiuxin Cao",
"Xuelin Zhu",
"Xinyu Zhang",
"Chang Liu",
"Kun Wang",
"Bo Liu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=jGNDRM2vul | @inproceedings{
zhu2024calibration,
title={Calibration for Long-tailed Scene Graph Generation},
author={XuHan Zhu and Yifei Xing and Ruiping Wang and Yaowei Wang and Xiangyuan Lan},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=jGNDRM2vul}
} | Miscalibrated models tend to be unreliable and insecure for downstream applications. In this work, we attempt to highlight and remedy miscalibration in current scene graph generation (SGG) models, which has been overlooked by previous works. We discover that obtaining well-calibrated models for SGG is more challenging than conventional calibration settings, as long-tailed SGG training data exacerbates miscalibration with overconfidence in head classes and underconfidence in tail classes. We further analyze which components are explicitly impacted by the long-tailed data during optimization, thereby exacerbating miscalibration and unbalanced learning, including: \textbf{biased parameters}, \textbf{deviated boundaries}, and \textbf{distorted target distribution}. To address the above issues, we propose the \textbf{C}ompositional \textbf{O}ptimization \textbf{C}alibration (\textbf{COC}) method, comprising three modules: i. A parameter calibration module that utilizes a hyperspherical classifier to eliminate the bias introduced by biased parameters. ii. A boundary calibration module that disperses features of majority classes to consolidate the decision boundaries of minority classes and mitigate deviated boundaries. iii. A target distribution calibration module that addresses distorted target distribution, leverages within-triplet prior to guide confidence-aware and label-aware target calibration, and applies curriculum regulation to constrain learning focus from easy to hard classes. Extensive evaluation on popular benchmarks demonstrates the effectiveness of our proposed method in improving model calibration and resolving unbalanced learning for long-tailed SGG. Finally, our proposed method performs best on model calibration compared to different types of calibration methods and achieves state-of-the-art trade-off performance on balanced learning for SGG. The source codes and models will be available upon acceptance. | Calibration for Long-tailed Scene Graph Generation | [
"XuHan Zhu",
"Yifei Xing",
"Ruiping Wang",
"Yaowei Wang",
"Xiangyuan Lan"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=jFlt7lbZoB | @inproceedings{
yu2024cfdiffusion,
title={{CFD}iffusion: Controllable Foreground Relighting in Image Compositing via Diffusion Model},
author={Ziqi Yu and Jing Zhou and Zhongyun Bao and Gang Fu and Weilei He and Chao Liang and Chunxia Xiao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=jFlt7lbZoB}
} | Inserting foreground objects into specific background scenes and eliminating the gap between them is an important and challenging task. It typically involves multiple processing tasks, such as image harmonization and shadow generation, which find numerous applications across various fields including computer vision and augmented reality. In these two domains, there are already many mature solutions, but they often only focus on one of the tasks. Some image composition methods can address both of these issues simultaneously but cannot guarantee complete reconstruction of foreground content. In this work, we propose CFDiffusion, which can handle both image harmonization and shadow generation simultaneously. Additionally, we introduce a foreground content enhancement module based on the diffusion model to ensure the complete preservation of foreground content at the insertion location. The experimental results on the iHarmony4 dataset and our self-created IH-SG dataset demonstrate the superiority of our CFDiffusion approach. | CFDiffusion: Controllable Foreground Relighting in Image Compositing via Diffusion Model | [
"Ziqi Yu",
"Jing Zhou",
"Zhongyun Bao",
"Gang Fu",
"Weilei He",
"Chao Liang",
"Chunxia Xiao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=jCswanMk5e | @inproceedings{
zou2024freqmamba,
title={FreqMamba: Viewing Mamba from a Frequency Perspective for Image Deraining},
author={Zhen Zou and Hu Yu and Jie Huang and Feng Zhao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=jCswanMk5e}
} | Images corrupted by rain streaks often lose frequency information that is vital for perception, and image deraining aims to solve this issue, which relies on global and local degradation modeling.
Recent studies have witnessed the effectiveness and efficiency of Mamba in perceiving global and local information by exploiting local correlations among patches; however, few attempts have been made to extend it with frequency analysis for image deraining, limiting its ability to perceive global degradation that is relevant to frequency modeling (e.g., the Fourier transform).
In this paper, we propose FreqMamba, an effective and efficient paradigm that leverages the complementarity between Mamba and frequency analysis for image deraining. The core of our method lies in extending Mamba with frequency analysis from two perspectives: extending it with frequency bands for exploiting frequency correlation, and connecting it with the Fourier transform for global degradation modeling.
Specifically, FreqMamba introduces complementary triple interaction structures including spatial Mamba, frequency band Mamba, and Fourier global modeling. Frequency band Mamba decomposes the image into sub-bands of different frequencies to allow 2D scanning from the frequency dimension. Furthermore, leveraging Mamba's unique data-dependent properties, we use rainy images at different scales to provide degradation priors to the network, thereby facilitating efficient training. Extensive experiments show that our method outperforms state-of-the-art methods both visually and quantitatively. | FreqMamba: Viewing Mamba from a Frequency Perspective for Image Deraining | [
"Zhen Zou",
"Hu Yu",
"Jie Huang",
"Feng Zhao"
] | Conference | poster | 2404.09476 | [
"https://github.com/asleepytree/freqmamba"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=j5Zzb3Rgf0 | @inproceedings{
yu2024vishanfu,
title={VisHanfu: An Interactive System for the Promotion of Hanfu Knowledge via Cross-Shaped Flat Structure},
author={Minjing Yu and Lingzhi Zeng and Xinxin Du and Jenny Sheng and Qiantian Liao and Yong-jin Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=j5Zzb3Rgf0}
} | Hanfu is the representative traditional costume of the Han nationality in China, which carries the outstanding craftsmanship of dyeing, weaving, and embroidery, and is of great significance to the inheritance of traditional culture. However, the existing methods for promoting Hanfu still have shortcomings that are not conducive to the inheritance of Hanfu culture. In this work, we developed the VisHanfu virtual reality system by focusing on the "Cross-Shaped Flat Structure", which is an integral feature of Hanfu. We have digitally restored five representative Hanfu historical artifacts and provided an interactive making experience. Combined with highly realistic cloth simulation techniques, it allows users to interactively observe the movement effects of the Hanfu. The results of user experiments show that our system can provide a favorable experience for users, and bring a better learning effect, which helps users to enhance their interest in learning and thus contributes to the inheritance of Hanfu culture. | VisHanfu: An Interactive System for the Promotion of Hanfu Knowledge via Cross-Shaped Flat Structure | [
"Minjing Yu",
"Lingzhi Zeng",
"Xinxin Du",
"Jenny Sheng",
"Qiantian Liao",
"Yong-jin Liu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=j520wKxnf6 | @inproceedings{
zhao2024maskmentor,
title={MaskMentor: Unlocking the Potential of Masked Self-Teaching for Missing Modality {RGB}-D Semantic Segmentation},
author={Zhida Zhao and Jia Li and Lijun Wang and Yifan Wang and Huchuan Lu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=j520wKxnf6}
} | Existing RGB-D semantic segmentation methods struggle to handle modality missing input, where only RGB images or depth maps are available, leading to degenerated segmentation performance. We tackle this issue using MaskMentor, a new pre-training framework for modality missing segmentation, which advances its counterparts via two novel designs: Masked Modality and Image Modeling (M2IM), and Self-Teaching via Token-Pixel Joint reconstruction (STTP). M2IM simulates modality missing scenarios by combining both modality- and patch-level random masking. Meanwhile, STTP offers an effective self-teaching strategy, where the trained network assumes a dual role, simultaneously acting as both the teacher and the student. The student with modality missing input is supervised by the teacher with complete modality input through both token- and pixel-wise masked modeling, closing the gap between missing and complete input modalities. By integrating M2IM and STTP, MaskMentor significantly improves the generalization ability of the trained model across diverse input conditions, and outperforms state-of-the-art methods on two popular benchmarks by a considerable margin. Extensive ablation studies further verify the effectiveness of the above contributions. | MaskMentor: Unlocking the Potential of Masked Self-Teaching for Missing Modality RGB-D Semantic Segmentation | [
"Zhida Zhao",
"Jia Li",
"Lijun Wang",
"Yifan Wang",
"Huchuan Lu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=j3mY13CNrb | @inproceedings{
han2024gait,
title={Gait Recognition in Large-scale Free Environment via Single Li{DAR}},
author={Xiao Han and Yiming Ren and Peishan Cong and Yujing Sun and Jingya Wang and Lan Xu and Yuexin Ma},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=j3mY13CNrb}
} | Human gait recognition is crucial in multimedia, enabling identification through walking patterns without direct interaction, enhancing the integration across various media forms in real-world applications like smart homes, healthcare and non-intrusive security. LiDAR's ability to capture depth makes it pivotal for robotic perception and holds promise for real-world gait recognition. In this paper, based on a single LiDAR, we present the Hierarchical Multi-representation Feature Interaction Network (HMRNet) for robust gait recognition. Prevailing LiDAR-based gait datasets primarily derive from controlled settings with predefined trajectories, leaving a gap with real-world scenarios. To facilitate LiDAR-based gait recognition research, we introduce FreeGait, a comprehensive gait dataset from large-scale, unconstrained settings, enriched with multi-modal and varied 2D/3D data. Notably, our approach achieves state-of-the-art performance on a prior dataset (SUSTech1K) and on FreeGait. Code and dataset will be released upon publication of this paper. | Gait Recognition in Large-scale Free Environment via Single LiDAR | [
"Xiao Han",
"Yiming Ren",
"Peishan Cong",
"Yujing Sun",
"Jingya Wang",
"Lan Xu",
"Yuexin Ma"
] | Conference | oral | 2211.12371 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=j3EoXmMum8 | @inproceedings{
yao2024edit,
title={Edit As You Wish: Video Caption Editing with Multi-grained User Control},
author={Linli Yao and Yuanmeng Zhang and Ziheng Wang and Xinglin Hou and Tiezheng Ge and Yuning Jiang and Xu Sun and Qin Jin},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=j3EoXmMum8}
} | Automatically narrating videos in natural language complying with user requests, i.e. Controllable Video Captioning task, can help people manage massive videos with desired intentions. However, existing works suffer from two shortcomings: 1) the control signal is single-grained which can not satisfy diverse user intentions; 2) the video description is generated in a single round which can not be further edited to meet dynamic needs. In this paper, we propose a novel Video Caption Editing (VCE) task to automatically revise an existing video description guided by multi-grained user requests. Inspired by human writing-revision habits, we design the user command as a pivotal triplet {operation, position, attribute} to cover diverse user needs from coarse-grained to fine-grained. To facilitate the VCE task, we automatically construct an open-domain benchmark dataset named VATEX-EDIT and manually collect an e-commerce dataset called EMMAD-EDIT. Further, we propose a specialized small-scale model (i.e., OPA) compared with two generalist Large Multi-modal Models to perform an exhaustive analysis of the novel task. For evaluation, we adopt comprehensive metrics encompassing caption fluency, command-caption consistency, and video-caption alignment. Experiments reveal the task challenges of fine-grained multi-modal semantics understanding and processing. Our datasets, codes, and evaluation tools are ready to be open-sourced. | Edit As You Wish: Video Caption Editing with Multi-grained User Control | [
"Linli Yao",
"Yuanmeng Zhang",
"Ziheng Wang",
"Xinglin Hou",
"Tiezheng Ge",
"Yuning Jiang",
"Xu Sun",
"Qin Jin"
] | Conference | poster | 2305.08389 | [
"https://github.com/yaolinli/vce"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=j2M0047AQA | @inproceedings{
li2024dual,
title={Dual Advancement of Representation Learning and Clustering for Sparse and Noisy Images},
author={Wenlin Li and Yucheng Xu and Xiaoqing Zheng and Suoya Han and Jun Wang and Xiaobo Sun},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=j2M0047AQA}
} | Sparse and noisy images (SNIs), like those in spatial gene expression data, pose significant challenges for effective representation learning and clustering, which are essential for thorough data analysis and interpretation. In response to these challenges, we propose $\textbf{D}$ual $\textbf{A}$dvancement of $\textbf{R}$epresentation $\textbf{L}$earning and $\textbf{C}$lustering ($\textit{\textbf{DARLC}}$), an innovative framework that leverages contrastive learning to enhance the representations derived from masked image modeling. Simultaneously, $\textit{DARLC}$ integrates cluster assignments in a cohesive, end-to-end approach. This integrated clustering strategy addresses the ``class collision problem'' inherent in contrastive learning, thus improving the quality of the resulting representations. To generate more plausible positive views for contrastive learning, we employ a graph attention network-based technique that produces denoised images as augmented data. As such, our framework offers a comprehensive approach that improves the learning of representations by enhancing their local perceptibility, distinctiveness, and the understanding of relational semantics. Furthermore, we utilize a Student's t mixture model to achieve more robust and adaptable clustering of SNIs. Extensive evaluation on 12 real-world datasets of SNIs, representing spatial gene expressions, demonstrates $\textit{DARLC}$'s superiority over current state-of-the-art methods in both image clustering and generating representations that accurately reflect biosemantic content and gene interactions. | Dual Advancement of Representation Learning and Clustering for Sparse and Noisy Images | [
"Wenlin Li",
"Yucheng Xu",
"Xiaoqing Zheng",
"Suoya Han",
"Jun Wang",
"Xiaobo Sun"
] | Conference | poster | 2409.01781 | [
"https://github.com/zipging/darlc"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=ilAV4kFBrM | @inproceedings{
hao2024primkd,
title={Prim{KD}: Primary Modality Guided Multimodal Fusion for {RGB}-D Semantic Segmentation},
author={Zhiwei Hao and Zhongyu Xiao and Yong Luo and Jianyuan Guo and Jing Wang and Li Shen and Han Hu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=ilAV4kFBrM}
} | The recent advancements in cross-modal transformers have demonstrated their superior performance in RGB-D segmentation tasks by effectively integrating information from both RGB and depth modalities. However, existing methods often overlook the varying levels of informative content present in each modality, treating them equally and using models of the same architecture. This oversight can potentially hinder segmentation performance, especially considering that RGB images typically contain significantly more information than depth images. To address this issue, we propose PrimKD, a knowledge distillation based approach that focuses on guided multimodal fusion, with an emphasis on leveraging the primary RGB modality. In our approach, we utilize a model trained exclusively on the RGB modality as the teacher, guiding the learning process of a student model that fuses both RGB and depth modalities.
To prioritize information from the primary RGB modality while leveraging the depth modality, we incorporate primary focused feature reconstruction and a selective alignment scheme. This integration enhances the overall feature fusion, resulting in improved segmentation results.
We evaluate our proposed method on the NYU Depth V2 and SUN-RGBD datasets, and the experimental results demonstrate the effectiveness of PrimKD. Specifically, our approach achieves mIoU scores of 57.8 and 52.5 on these two datasets, respectively, surpassing existing counterparts by 1.5 and 0.4 mIoU. | PrimKD: Primary Modality Guided Multimodal Fusion for RGB-D Semantic Segmentation | [
"Zhiwei Hao",
"Zhongyu Xiao",
"Yong Luo",
"Jianyuan Guo",
"Jing Wang",
"Li Shen",
"Han Hu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=igTm7pX1Fh | @inproceedings{
shen2024neural,
title={Neural Interaction Energy for Multi-Agent Trajectory Prediction},
author={Kaixin Shen and Ruijie Quan and Linchao Zhu and Jun Xiao and Yi Yang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=igTm7pX1Fh}
} | Maintaining temporal stability is crucial in multi-agent trajectory prediction. Insufficient regularization to uphold this stability often results in fluctuations in kinematic states, leading to inconsistent predictions and the amplification of errors. In this study, we introduce a framework called Multi-Agent Trajectory prediction via neural interaction Energy (MATE). This framework assesses the interactive motion of agents by employing neural interaction energy, which captures the dynamics of interactions and illustrates their influence on the future trajectories of agents. To bolster temporal stability, we introduce two constraints: inter-agent interaction constraint and intra-agent motion constraint. These constraints work together to ensure temporal stability at both the system and agent levels, effectively mitigating prediction fluctuations inherent in multi-agent systems. Comparative evaluations against previous methods on four diverse datasets highlight the superior prediction accuracy and generalization capabilities of our model. We will release our code. | Neural Interaction Energy for Multi-Agent Trajectory Prediction | [
"Kaixin Shen",
"Ruijie Quan",
"Linchao Zhu",
"Jun Xiao",
"Yi Yang"
] | Conference | poster | 2404.16579 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=ifKQnupnP9 | @inproceedings{
gu2024utilizing,
title={Utilizing Speaker Profiles for Impersonation Audio Detection},
author={Hao Gu and Jiangyan Yi and Chenglong Wang and Yong Ren and Jianhua Tao and Xinrui Yan and Yujie Chen and Xiaohui Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=ifKQnupnP9}
} | Fake audio detection is an emerging active topic. A growing body of literature has aimed to detect fake utterances, which are mostly generated by Text-to-speech (TTS) or voice conversion (VC). However, countermeasures against impersonation remain an underexplored area. Impersonation is a type of fakery that involves an imitator replicating the specific traits and speech style of a target speaker. Unlike TTS and VC, which often leave digital traces or signal artifacts, impersonation involves live human beings producing entirely natural speech, rendering the detection of impersonation audio a challenging task. Thus, we propose a novel method that integrates speaker profiles into the process of impersonation audio detection. Speaker profiles are inherent characteristics that are challenging for impersonators to mimic accurately, such as the speaker's age and job. We aim to leverage these features to extract discriminative information for detecting impersonation audio. Moreover, there are no large impersonated speech corpora available for quantitative study of impersonation impacts. To address this gap, we further design the first large-scale, diverse-speaker Chinese impersonation dataset, named ImPersonation Audio Detection (IPAD), to advance the community's research on impersonation audio detection. We evaluate several existing fake audio detection methods on our proposed dataset IPAD, demonstrating its necessity and the challenges it poses. Additionally, our findings reveal that incorporating speaker profiles can significantly enhance the model's performance in detecting impersonation audio. | Utilizing Speaker Profiles for Impersonation Audio Detection | [
"Hao Gu",
"Jiangyan Yi",
"Chenglong Wang",
"Yong Ren",
"Jianhua Tao",
"Xinrui Yan",
"Yujie Chen",
"Xiaohui Zhang"
] | Conference | poster | 2408.17009 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=iet72kK15Q | @inproceedings{
li2024reformeval,
title={ReForm-Eval: Evaluating Large Vision Language Models via Unified Re-Formulation of Task-Oriented Benchmarks},
author={Zejun Li and Ye Wang and Mengfei Du and Qingwen Liu and Binhao Wu and Jiwen Zhang and Chengxing Zhou and Zhihao Fan and Jie Fu and Jingjing Chen and zhongyu wei and Xuanjing Huang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=iet72kK15Q}
} | Recent years have witnessed remarkable progress in the development of large vision-language models (LVLMs). Benefiting from the strong language backbones and efficient cross-modal alignment strategies, LVLMs exhibit surprising capabilities to perceive visual signals and perform visually grounded reasoning. However, the capabilities of LVLMs have not been comprehensively and quantitatively evaluated. Most existing multi-modal benchmarks require task-oriented input-output formats, posing great challenges to automatically assess the free-form text output of LVLMs. To effectively leverage the annotations available and reduce the manual efforts required for constructing new benchmarks, we propose to re-formulate existing benchmarks into unified LVLM-compatible formats. Through systematic data collection and reformulation, we present ReForm-Eval benchmark, offering substantial data for evaluating various capabilities of LVLMs. Through extensive experiments and analysis in ReForm-Eval, we demonstrate the comprehensiveness and reliability of ReForm-Eval in assessing various LVLMs. Our benchmark and evaluation framework will be open-sourced as a cornerstone for advancing the development of LVLMs. | ReForm-Eval: Evaluating Large Vision Language Models via Unified Re-Formulation of Task-Oriented Benchmarks | [
"Zejun Li",
"Ye Wang",
"Mengfei Du",
"Qingwen Liu",
"Binhao Wu",
"Jiwen Zhang",
"Chengxing Zhou",
"Zhihao Fan",
"Jie Fu",
"Jingjing Chen",
"zhongyu wei",
"Xuanjing Huang"
] | Conference | poster | 2310.02569 | [
"https://github.com/fudandisc/reform-eval"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=ibEaSS6bQn | @inproceedings{
wang2024eviledit,
title={EvilEdit: Backdooring Text-to-Image Diffusion Models in One Second},
author={Hao Wang and Shangwei Guo and Jialing He and Kangjie Chen and Shudong Zhang and Tianwei Zhang and Tao Xiang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=ibEaSS6bQn}
} | Text-to-image (T2I) diffusion models enjoy great popularity and many individuals and companies build their applications based on publicly released T2I diffusion models. Previous studies have demonstrated that backdoor attacks can elicit T2I diffusion models to generate unsafe target images through textual triggers. However, existing backdoor attacks typically demand substantial tuning data for poisoning, limiting their practicality and potentially degrading the overall performance of T2I diffusion models. To address these issues, we propose EvilEdit, a **training-free** and **data-free** backdoor attack against T2I diffusion models. EvilEdit directly edits the projection matrices in the cross-attention layers to achieve projection alignment between a trigger and the corresponding backdoor target. We preserve the functionality of the backdoored model using a protected whitelist to ensure that the semantics of non-trigger words are not accidentally altered by the backdoor. We also propose a visual target attack EvilEdit$_{VTA}$, enabling adversaries to use specific images as backdoor targets. We conduct empirical experiments on Stable Diffusion and the results demonstrate that EvilEdit can backdoor T2I diffusion models within **one second** with up to a 100% success rate. Furthermore, our EvilEdit modifies only 2.2% of the parameters and maintains the model’s performance on benign prompts. Our code is available at [https://github.com/haowang-cqu/EvilEdit](https://github.com/haowang-cqu/EvilEdit). | EvilEdit: Backdooring Text-to-Image Diffusion Models in One Second | [
"Hao Wang",
"Shangwei Guo",
"Jialing He",
"Kangjie Chen",
"Shudong Zhang",
"Tianwei Zhang",
"Tao Xiang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=iYyxenA9gA | @inproceedings{
tian2024locplan,
title={Loc4Plan: Locating Before Planning for Outdoor Vision and Language Navigation},
author={Huilin Tian and Jingke Meng and Wei-Shi Zheng and Yuan-Ming Li and Junkai Yan and Yunong Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=iYyxenA9gA}
} | Vision and Language Navigation (VLN) is a challenging task that requires agents to understand instructions and navigate to the destination in a visual environment. One of the key challenges in outdoor VLN is keeping track of which part of the instruction was completed. To alleviate this problem, previous works mainly focus on grounding the natural language to the visual input, but neglect the crucial role of the agent’s spatial position information in the grounding process. In this work, we first explore the substantial effect of spatial position locating on the grounding of outdoor VLN, drawing inspiration from human navigation. In real-world navigation scenarios, before planning a path to the destination, humans typically need to figure out their current location. This observation underscores the pivotal role of spatial localization in the navigation process. In this work, we introduce a novel framework, Locating before Planning (Loc4Plan), designed to incorporate spatial perception for action planning in outdoor VLN tasks. The main idea behind Loc4Plan is to perform the spatial localization before planning a decision action based on corresponding guidance, which comprises a block-aware spatial locating (BAL) module and a spatial-aware action planning (SAP) module. Specifically, to help the agent perceive its spatial location in the environment, we propose to learn a position predictor that measures how far the agent is from the next intersection for reflecting its position, which is achieved by the BAL module. After the locating process, we propose the SAP module to incorporate spatial information to ground the corresponding guidance and enhance the precision of action planning. Extensive experiments on the Touchdown and map2seq datasets show that the proposed Loc4Plan outperforms the SOTA methods. | Loc4Plan: Locating Before Planning for Outdoor Vision and Language Navigation | [
"Huilin Tian",
"Jingke Meng",
"Wei-Shi Zheng",
"Yuan-Ming Li",
"Junkai Yan",
"Yunong Zhang"
] | Conference | oral | 2408.05090 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=iWSVl6mLbW | @inproceedings{
chen2024fodfom,
title={FodFoM: Fake Outlier Data by Foundation Models Creates Stronger Visual Out-of-Distribution Detector},
author={Jiankang Chen and Ling Deng and Zhiyong Gan and Wei-Shi Zheng and Ruixuan Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=iWSVl6mLbW}
} | Out-of-Distribution (OOD) detection is crucial when deploying machine learning models in open-world applications. The core challenge in OOD detection is mitigating the model’s overconfidence on OOD data. While recent methods using auxiliary outlier datasets or synthesizing outlier features have shown promising OOD detection performance, they are limited due to costly data collection or simplified assumptions. In this paper, we propose a novel OOD detection framework FodFoM that innovatively combines multiple foundation models to generate two types of challenging fake outlier images for classifier training. The first type is based on BLIP-2’s image captioning capability, CLIP’s vision-language knowledge, and Stable Diffusion’s image generation ability. Jointly utilizing these foundation models constructs fake outlier images which are semantically similar to but different from in-distribution (ID) images. For the second type, GroundingDINO’s object detection ability is utilized to help construct pure background images by blurring foreground ID objects in ID images. The proposed framework can be flexibly combined with multiple existing OOD detection methods. Extensive empirical evaluations show that image classifiers with the help of constructed fake images can more accurately differentiate real OOD images from ID ones. New state-of-the-art OOD detection performance is achieved on multiple benchmarks. The source code will be publicly released. | FodFoM: Fake Outlier Data by Foundation Models Creates Stronger Visual Out-of-Distribution Detector | [
"Jiankang Chen",
"Ling Deng",
"Zhiyong Gan",
"Wei-Shi Zheng",
"Ruixuan Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=iUrEqR14Au | @inproceedings{
wang2024uniyolo,
title={Uni-{YOLO}: Vision-Language Model-Guided {YOLO} for Robust and Fast Universal Detection in the Open World},
author={Xudong Wang and Weihong Ren and Xi'ai Chen and Huijie Fan and Yandong Tang and Zhi Han},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=iUrEqR14Au}
} | Universal object detectors aim to detect any object in any scene without human annotation, exhibiting superior generalization. However, the current universal object detectors show degraded performance in harsh weather, and their insufficient real-time capabilities limit their application. In this paper, we present Uni-YOLO, a universal detector designed for complex scenes with real-time performance. Uni-YOLO is a one-stage object detector that uses general object confidence to distinguish between objects and backgrounds, and employs a grid cell regression method for real-time detection. To improve its robustness in harsh weather conditions, the input of Uni-YOLO is adaptively enhanced with a physical model-based enhancement module. During training and inference, Uni-YOLO is guided by the extensive knowledge of the vision-language model CLIP. An object augmentation method is proposed to improve generalization in training by utilizing multiple source datasets with heterogeneous annotations. Furthermore, an online self-enhancement method is proposed to allow Uni-YOLO to further focus on specific objects through self-supervised fine-tuning in a given scene. Extensive experiments on public benchmarks and a UAV deployment are conducted to validate its superiority and practical value. | Uni-YOLO: Vision-Language Model-Guided YOLO for Robust and Fast Universal Detection in the Open World | [
"Xudong Wang",
"Weihong Ren",
"Xi'ai Chen",
"Huijie Fan",
"Yandong Tang",
"Zhi Han"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=iR2dplzipX | @inproceedings{
zhong2024a,
title={A Lightweight Multi-domain Multi-attention Progressive Network for Single Image Deraining},
author={Junliu zhong and Li Zhiyi and Dan Xiang and Maotang Han and Changsheng Li and gan yanfen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=iR2dplzipX}
} | Currently, information processing in the spatial domain alone has intrinsic limitations that hinder further improvements in deep networks' performance on single image deraining. Moreover, the deraining networks' structures and learning processes are becoming increasingly intricate, leading to challenges in keeping the structure lightweight and in training and testing efficiency. We propose a lightweight multi-domain multi-attention progressive network (M2PN) to handle these challenges. For performance improvement, the M2PN backbone applies a simple progressive CNN-based structure consisting of S identical recursive M2PN modules. This recursive backbone with a skip connection mechanism allows for better gradient flow and helps to effectively capture low-to-high-level/scale spatial features in a progressive structure to improve contextual information acquisition. To further complement the acquired spatial information for better deraining, we conduct spectral analysis on the frequency energy distribution of rain streaks, and theoretically present the relationship between the spectral bandwidths and the unique falling characteristics and special morphology of rain streaks. We present the frequency-channel attention (FcA) mechanism and the spatial-channel attention (ScA) mechanism to better fuse frequency-channel features and spatial features to distinguish and remove rain streaks. The simple recursive network structure and effective multi-domain multi-attention mechanism enable the M2PN to achieve superior performance and fast convergence during training. Furthermore, the M2PN structure, with a small number of network components, shallow network channels, and few convolutional kernels, requires only 168K parameters, 1 to 2 orders of magnitude fewer than existing SOTA networks. The experimental results demonstrate that even with so few network parameters, M2PN still achieves the best overall performance. | A Lightweight Multi-domain Multi-attention Progressive Network for Single Image Deraining | [
"Junliu zhong",
"Li Zhiyi",
"Dan Xiang",
"Maotang Han",
"Changsheng Li",
"gan yanfen"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=i7vrwAblCz | @inproceedings{
song2024eegmacs,
title={{EEG}-{MACS}: Manifold Attention and Confidence Stratification for {EEG}-based Cross-Center Brain Disease Diagnosis under Unreliable Annotations},
author={Zhenxi Song and Ruihan Qin and Huixia Ren and Zhen Liang and Yi Guo and Min zhang and Zhiguo Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=i7vrwAblCz}
} | Cross-center data heterogeneity and annotation unreliability significantly challenge the intelligent diagnosis of diseases using brain signals. A notable example is the EEG-based diagnosis of neurodegenerative diseases, which features subtler abnormal neural dynamics typically observed in small-group settings. To advance this area, in this work, we introduce a transferable framework employing **M**anifold **A**ttention and **C**onfidence **S**tratification (**MACS**) to diagnose neurodegenerative disorders based on EEG signals sourced from four centers with unreliable annotations. The MACS framework’s effectiveness stems from these features: 1) The _**Augmentor**_ generates various EEG-represented brain variants to enrich the data space; 2) The _**Switcher**_ enhances the feature space for trusted samples and reduces overfitting on incorrectly labeled samples; 3) The _**Encoder**_ uses the Riemannian manifold and Euclidean metrics to capture spatiotemporal variations and dynamic synchronization in EEG; 4) The _**Projector**_, equipped with dual heads, monitors consistency across multiple brain variants and ensures diagnostic accuracy; 5) The _**Stratifier**_ adaptively stratifies learned samples by confidence levels throughout the training process; 6) Forward and backpropagation in **MACS** are constrained by confidence stratification to stabilize the learning system amid unreliable annotations. Our subject-independent experiments, conducted on both neurocognitive and movement disorders using cross-center corpora, have demonstrated superior performance compared to existing related algorithms. This work not only improves EEG-based diagnostics for cross-center and small-setting brain diseases but also offers insights into extending MACS techniques to other data analyses, tackling data heterogeneity and annotation unreliability in multimedia and multimodal content understanding. We have released our code here: https://anonymous.4open.science/r/EEG-Disease-MACS-0B4A. | EEG-MACS: Manifold Attention and Confidence Stratification for EEG-based Cross-Center Brain Disease Diagnosis under Unreliable Annotations | [
"Zhenxi Song",
"Ruihan Qin",
"Huixia Ren",
"Zhen Liang",
"Yi Guo",
"Min zhang",
"Zhiguo Zhang"
] | Conference | oral | 2405.00734 | [
"https://github.com/ici-bci/eeg-disease-macs"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=i4LEfrFPB4 | @inproceedings{
zhang2024crossview,
title={Cross-View Consistency Regularisation for Knowledge Distillation},
author={Weijia Zhang and Dongnan Liu and Weidong Cai and Chao Ma},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=i4LEfrFPB4}
} | Knowledge distillation (KD) is an established paradigm for transferring privileged knowledge from a cumbersome model to a more lightweight and efficient one. In recent years, logit-based KD methods are quickly catching up in performance with their feature-based counterparts. However, existing research has pointed out that logit-based methods are still fundamentally limited by two major issues in their training process, namely overconfident teacher and confirmation bias. Inspired by the success of cross-view learning in fields such as semi-supervised learning, in this work we introduce within-view and cross-view regularisations to standard logit-based distillation frameworks to combat the above cruxes. We also perform confidence-based soft label selection to improve the quality of distilling signals from the teacher, which further mitigates the confirmation bias problem. Despite its apparent simplicity, the proposed Consistency-Regularisation-based Logit Distillation (CRLD) significantly boosts student learning, setting new state-of-the-art results on the standard CIFAR-100, Tiny-ImageNet, and ImageNet datasets across a diversity of teacher and student architectures, whilst introducing no extra network parameters. Orthogonal to on-going logit-based distillation research, our method enjoys excellent generalisation properties and, without bells and whistles, boosts the performance of various existing approaches by considerable margins. Our code and models will be released. | Cross-View Consistency Regularisation for Knowledge Distillation | [
"Weijia Zhang",
"Dongnan Liu",
"Weidong Cai",
"Chao Ma"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=i4JNVhM9Tk | @inproceedings{
du2024cbnet,
title={{CBN}et: Cooperation-Based Weakly Supervised Polyp Detection},
author={Xiuquan Du and Jiajia Chen and XuejunZhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=i4JNVhM9Tk}
} | Missed polyps are the major risk factor for colorectal cancer. To minimize misdiagnosis, many methods have been developed. However, they either rely on laborious instance-level annotations, require labeling of prompt points, or lack the ability to filter noise proposals and detect polyps integrally, resulting in severe challenges in this area. In this paper, we propose a novel Cooperation-Based network (CBNet), a two-stage polyp detection framework supervised by image labels that removes wrong proposals through classification in collaboration with segmentation and obtains a more accurate detector by aggregating adaptive multi-level regional features. Specifically, we conduct a Cooperation-Based Region Proposal Network (CBRPN) to reduce the negative impact of noise by deleting proposals without polyps, enabling our network to capture polyp features. Moreover, to enhance the location integrity and classification precision of polyps, we aggregate multi-level region of interest (ROI) features under the guidance of the backbone classification layer, namely the Adaptive ROI Fusion Module (ARFM). Extensive experiments on public and private datasets show that our method achieves state-of-the-art performance among weakly supervised methods and even outperforms full supervision in some respects. All code is available at https://github.com/dxqllp/CBNet. | CBNet: Cooperation-Based Weakly Supervised Polyp Detection | [
"Xiuquan Du",
"Jiajia Chen",
"XuejunZhang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=i3VP9dnBOx | @inproceedings{
song2024autogenic,
title={Autogenic Language Embedding for Coherent Point Tracking},
author={Zikai Song and Ying Tang and Run Luo and Lintao Ma and Junqing Yu and Yi-Ping Phoebe Chen and Wei Yang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=i3VP9dnBOx}
} | Point tracking is a challenging task in computer vision, aiming to establish point-wise correspondence across long video sequences. Recent advancements have primarily focused on temporal modeling techniques to improve local feature similarity, often overlooking the valuable semantic consistency inherent in tracked points. In this paper, we introduce a novel approach leveraging language embeddings to enhance the coherence of frame-wise visual features related to the same object. We recognize that videos typically involve a limited number of objects with specific semantics, allowing us to automatically learn language embeddings. Our proposed method, termed autogenic language embedding for visual feature enhancement, strengthens point correspondence in long-term sequences. Unlike existing visual-language schemes, our approach learns text embeddings from visual features through a dedicated mapping network, enabling seamless adaptation to various tracking tasks without explicit text annotations. Additionally, we introduce a consistency decoder that efficiently integrates text tokens into visual features with minimal computational overhead. Through enhanced visual consistency, our approach significantly improves point tracking trajectories in lengthy videos with substantial appearance variations. Extensive experiments on widely-used point tracking benchmarks demonstrate the superior performance of our method, showcasing notable enhancements compared to trackers relying solely on visual cues. | Autogenic Language Embedding for Coherent Point Tracking | [
"Zikai Song",
"Ying Tang",
"Run Luo",
"Lintao Ma",
"Junqing Yu",
"Yi-Ping Phoebe Chen",
"Wei Yang"
] | Conference | poster | 2407.20730 | [
"https://github.com/skyesong38/altrack"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=hwieLHhkRr | @inproceedings{
jiang2024in,
title={In Situ 3D Scene Synthesis for Ubiquitous Embodied Interfaces},
author={Haiyan Jiang and Song Leiyu and dongdong weng and Zhe Sun and Li Huiying and Xiaonuo Dongye and Zhenliang Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=hwieLHhkRr}
} | Virtual reality (VR) provides an interface to access virtual environments anytime and anywhere, allowing us to experience and interact with an immersive virtual world. It has been widely used in various fields, such as entertainment, training, and education. However, the user's body cannot be separated from the physical world. When users are immersed in virtual scenes, they encounter safety and immersion issues caused by physical objects in the surrounding environment. Although virtual scene synthesis has attracted widespread attention, many popular methods are limited to generating purely virtual scenes independent of the physical environment or simply mapping all physical objects as obstacles. To this end, we propose a scene agent that synthesizes situated 3D virtual scenes as a kind of ubiquitous embodied interface in VR for users. The scene agent synthesizes scenes by perceiving the user's physical environment as well as inferring the user's demands. The synthesized scenes maintain the affordances of the physical environment, enabling immersive users to interact with the physical environment and improving the user's sense of security. Meanwhile, the synthesized scenes maintain the style described by the user, improving the user's immersion. The comparison results show that the proposed scene agent can synthesize virtual scenes with better affordance maintenance, scene diversity, style maintenance, and 3D intersection over union (3D IoU) compared to state-of-the-art baseline methods. To the best of our knowledge, this is the first work that achieves in situ scene synthesis with virtual-real affordance consistency and user demand. | In Situ 3D Scene Synthesis for Ubiquitous Embodied Interfaces | [
"Haiyan Jiang",
"Song Leiyu",
"dongdong weng",
"Zhe Sun",
"Li Huiying",
"Xiaonuo Dongye",
"Zhenliang Zhang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=hv8ooxGHOK | @inproceedings{
yu2024hkdsme,
title={{HKDSME}: Heterogeneous Knowledge Distillation for Semi-supervised Singing Melody Extraction Using Harmonic Supervision},
author={Shuai Yu and Xiaoliang He and Ke Chen and Yi Yu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=hv8ooxGHOK}
} | Singing melody extraction is a key task in the field of music information retrieval (MIR). However, decades of research works have uncovered two difficult issues. \emph{First}, binary classification on frequency-domain audio features (e.g., spectrogram) is regarded as the primary method, which ignores the potential associations of musical information at different frequency bins, as well as their varying significance for output decisions. \emph{Second}, the existing semi-supervised singing melody extraction models ignore the accuracy of the generated pseudo labels by semi-supervised models, which largely limits the further improvements of the model. To solve the two issues, in this paper, we propose a \underline{h}eterogeneous \underline{k}nowledge \underline{d}istillation framework for \underline{s}emi-supervised singing \underline{m}elody \underline{e}xtraction using harmonic supervision, termed as \emph{HKDSME}. We begin by proposing a four-class classification paradigm for determining the results of singing melody extraction using harmonic supervision. This enables the model to capture more information regarding melodic relations in spectrograms. To improve the accuracy issue of pseudo labels, we then build a semi-supervised method by leveraging the extracted harmonics as a consistent regularization. Different from previous methods, it judges the availability of unlabeled data in terms of the inner positional relations of extracted harmonics.
To further build a light-weight semi-supervised model, we propose a heterogeneous knowledge distillation (HKD) module, which enables prior knowledge transfer between heterogeneous models. We also propose a novel confidence-guided loss, which is incorporated with the proposed HKD module to reduce wrong pseudo labels.
We evaluate our proposed method using several well-known publicly available datasets, and the findings demonstrate the efficacy of our proposed method. | HKDSME: Heterogeneous Knowledge Distillation for Semi-supervised Singing Melody Extraction Using Harmonic Supervision | [
"Shuai Yu",
"Xiaoliang He",
"Ke Chen",
"Yi Yu"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=htUnCIOjrn | @inproceedings{
zheng2024a,
title={A Unimodal Valence-Arousal Driven Contrastive Learning Framework for Multimodal Multi-Label Emotion Recognition},
author={Wenjie Zheng and Jianfei Yu and Rui Xia},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=htUnCIOjrn}
} | Multimodal Multi-Label Emotion Recognition (MMER) aims to identify one or more emotion categories expressed by an utterance of a speaker. Despite obtaining promising results, previous studies on MMER represent each emotion category using a one-hot vector and ignore the intrinsic relations between emotions. Moreover, existing works mainly learn the unimodal representation based on the multimodal supervision signal of a single sample, failing to explicitly capture the unique emotional state of each modality as well as its emotional correlation between samples. To overcome these issues, we propose a $\textbf{Uni}$modal $\textbf{V}$alence-$\textbf{A}$rousal driven contrastive learning framework (UniVA) for the MMER task. Specifically, we adopt the valence-arousal (VA) space to represent each emotion category and regard the emotion correlation in the VA space as priors to learn the emotion category representation. Moreover, we employ pre-trained unimodal VA models to obtain the VA scores for each modality of the training samples, and then leverage the VA scores to construct positive and negative samples, followed by applying supervised contrastive learning to learn the VA-aware unimodal representations for multi-label emotion prediction. Experimental results on two benchmark datasets MOSEI and M$^3$ED show that the proposed UniVA framework consistently outperforms a number of existing methods for the MMER task. | A Unimodal Valence-Arousal Driven Contrastive Learning Framework for Multimodal Multi-Label Emotion Recognition | [
"Wenjie Zheng",
"Jianfei Yu",
"Rui Xia"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=hqC6aBE1Ow | @inproceedings{
xiao2024unraveling,
title={Unraveling Motion Uncertainty for Local Motion Deblurring},
author={Zeyu Xiao and Zhihe Lu and Michael Bi Mi and Zhiwei Xiong and Xinchao Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=hqC6aBE1Ow}
} | In real-world photography, local motion blur often arises from the interplay between moving objects and stationary backgrounds during exposure. Existing deblurring methods face challenges in addressing local motion deblurring due to (i) the presence of arbitrary localized blurs and uncertain blur extents; (ii) the limited ability to accurately identify specific blurs resulting from ambiguous motion boundaries. These limitations often lead to suboptimal solutions when estimating blur maps and generating final deblurred images. To that end, we propose a novel method named Motion-Uncertainty-Guided Network (MUGNet), which harnesses a probabilistic representational model to explicitly address the intricacies stemming from motion uncertainties. Specifically, MUGNet consists of two key components, i.e., motion-uncertainty quantification (MUQ) module and motion-masked separable attention (M2SA) module, serving for complementary purposes. Concretely, MUQ aims to learn a conditional distribution for accurate and reliable blur map estimation, while the M2SA module is to enhance the representation of regions influenced by local motion blur and static background, which is achieved by promoting the establishment of extensive global interactions. We demonstrate the superiority of our MUGNet with extensive experiments. | Unraveling Motion Uncertainty for Local Motion Deblurring | [
"Zeyu Xiao",
"Zhihe Lu",
"Michael Bi Mi",
"Zhiwei Xiong",
"Xinchao Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=hpiQ0HXtPO | @inproceedings{
bi2024litequic,
title={Lite{QUIC}: Improving QoE of Video Streams by Reducing {CPU} Overhead of {QUIC}},
author={Pengqiang Bi and Yifei Zou and Mengbai Xiao and Dongxiao Yu and yijunli and zhixiong.liu and qunxie},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=hpiQ0HXtPO}
} | QUIC is the underlying protocol of the next generation HTTP/3, serving as the major vehicle delivering video data nowadays. As a userspace protocol based on UDP, QUIC features low transmission latency and has been widely deployed by content providers. However, the high computational overhead of QUIC shifts system knobs to CPUs in high-bandwidth scenarios. When CPU resources become the constraint, HTTP/3 exhibits even lower throughput than HTTP/1.1. In this paper, we carefully analyze the performance bottleneck of QUIC and find it results from ACK processing, packet sending, and data encryption. By reducing the ACK frequency, activating UDP generic segmentation offload (GSO), and incorporating PicoTLS, a high-performance encryption library, the CPU overhead of QUIC could be effectively reduced in stable network environments. However, simply reducing the ACK frequency also impairs the transmission throughput of QUIC under poor network conditions. To solve this, we develop LiteQUIC, which involves two mechanisms towards alleviating the overhead of ACK processing in addition to GSO and PicoTLS. We evaluate LiteQUIC in the DASH-based video streaming, and the results show that LiteQUIC achieves 1.2$\times$ higher average bitrate and 93.3\% lower rebuffering time than an optimized version of QUIC with GSO and PicoTLS. | LiteQUIC: Improving QoE of Video Streams by Reducing CPU Overhead of QUIC | [
"Pengqiang Bi",
"Yifei Zou",
"Mengbai Xiao",
"Dongxiao Yu",
"yijunli",
"zhixiong.liu",
"qunxie"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=hpWtPMxOjm | @inproceedings{
pan2024rethinking,
title={Rethinking the Implicit Optimization Paradigm with Dual Alignments for Referring Remote Sensing Image Segmentation},
author={Yuwen Pan and Rui Sun and Yuan Wang and Tianzhu Zhang and Yongdong Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=hpWtPMxOjm}
} | Referring Remote Sensing Image Segmentation (RRSIS) is a challenging task that aims to identify specific regions in aerial images that are relevant to given textual conditions. Existing methods tend to adopt the paradigm of implicit optimization, utilizing a framework consisting of early cross-modal feature fusion and a fixed convolutional kernel-based predictor, neglecting the inherent inter-domain gap and conducting class-agnostic predictions. In this paper, we rethink the issues with the implicit optimization paradigm and address the RRSIS task from a dual-alignment perspective. Specifically, we present the dedicated Dual Alignment Network (DANet), including an explicit alignment strategy and a reliable agent alignment module. The explicit alignment strategy effectively reduces domain discrepancies by narrowing the inter-domain affinity distribution. Meanwhile, the reliable agent alignment module aims to enhance the predictor's multi-modality awareness and alleviate the impact of deceptive noise interference. Extensive experiments on two remote sensing datasets demonstrate the effectiveness of our proposed DANet in achieving superior segmentation performance without introducing additional learnable parameters compared to state-of-the-art methods. | Rethinking the Implicit Optimization Paradigm with Dual Alignments for Referring Remote Sensing Image Segmentation | [
"Yuwen Pan",
"Rui Sun",
"Yuan Wang",
"Tianzhu Zhang",
"Yongdong Zhang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=houuoLg1PT | @inproceedings{
gu2024filo,
title={FiLo: Zero-Shot Anomaly Detection by Fine-Grained Description and High-Quality Localization},
author={Zhaopeng Gu and Bingke Zhu and Guibo Zhu and Yingying Chen and Hao Li and Ming Tang and Jinqiao Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=houuoLg1PT}
} | Zero-shot anomaly detection (ZSAD) methods entail detecting anomalies directly without access to any known normal or abnormal samples within the target item categories. Existing approaches typically rely on the robust generalization capabilities of multimodal pretrained models, computing similarities between manually crafted textual features representing "normal" or "abnormal" semantics and image features to detect anomalies and localize anomalous patches. However, the generic descriptions of "abnormal" often fail to precisely match diverse types of anomalies across different object categories. Additionally, computing feature similarities for single patches struggles to pinpoint specific locations of anomalies with various sizes and scales. To address these issues, we propose a novel ZSAD method called FiLo, comprising two components: adaptively learned Fine-Grained Description (FG-Des) and position-enhanced High-Quality Localization (HQ-Loc). FG-Des introduces fine-grained anomaly descriptions for each category using Large Language Models (LLMs) and employs adaptively learned textual templates to enhance the accuracy and interpretability of anomaly recognition. HQ-Loc, utilizing Grounding DINO for preliminary localization, position-enhanced text prompts, and Multi-scale Multi-shape Cross-modal Interaction (MMCI) module, facilitates more accurate localization of anomalies of different sizes and shapes. Experimental results on datasets like MVTec and VisA demonstrate that FiLo significantly improves the performance of ZSAD in both recognition and localization, achieving state-of-the-art performance with an image-level AUC of 83.9% and a pixel-level AUC of 95.9% on the VisA dataset. | FiLo: Zero-Shot Anomaly Detection by Fine-Grained Description and High-Quality Localization | [
"Zhaopeng Gu",
"Bingke Zhu",
"Guibo Zhu",
"Yingying Chen",
"Hao Li",
"Ming Tang",
"Jinqiao Wang"
] | Conference | poster | 2404.13671 | [
"https://github.com/casia-iva-lab/filo"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=hnbZY5cj6l | @inproceedings{
wu2024tiscorer,
title={T2I-Scorer: Quantitative Evaluation on Text-to-Image Generation via Fine-Tuned Large Multi-Modal Models},
author={Haoning Wu and Xiele Wu and Chunyi Li and Zicheng Zhang and Chaofeng Chen and Xiaohong Liu and Guangtao Zhai and Weisi Lin},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=hnbZY5cj6l}
} | Text-to-image (T2I) generation is a pivotal and core interest within the realm of AI content generation. Amid the swift advancements of both open-source (such as Stable Diffusion) and proprietary (for example, DALLE, MidJourney) T2I models, there is a notable absence of a comprehensive and robust quantitative framework for evaluating their output quality. Traditional methods of quality assessment overlook the textual prompts when judging images; meanwhile, the advent of large multi-modal models (LMMs) introduces the capability to incorporate text prompts in evaluations, yet the challenge of fine-tuning these models for precise T2I quality assessment remains unresolved. In our study, we introduce the T2I-Scorer, a novel two-stage training methodology aimed at fine-tuning LMMs for T2I evaluation. For the first stage, we collect 397K GPT-4V-labeled question-answer pairs related to T2I evaluation. Termed T2I-ITD, the pseudo-labeled dataset is analyzed and examined by humans, and used for instruction tuning to improve the LMM's low-level quality perception. The first-stage model, T2I-Scorer-IT, achieves higher accuracy on T2I evaluation than all kinds of existing T2I metrics under zero-shot settings. For the second stage, we define an explicit multi-task training scheme to further align the LMM with human opinion scores, and the fine-tuned T2I-Scorer can reach state-of-the-art accuracy on both image quality and image-text alignment perspectives with significant improvements. We anticipate that the proposed metrics can serve as reliable measures to gauge the ability of T2I generation models in the future. We will make code, data, and weights publicly available. | T2I-Scorer: Quantitative Evaluation on Text-to-Image Generation via Fine-Tuned Large Multi-Modal Models | [
"Haoning Wu",
"Xiele Wu",
"Chunyi Li",
"Zicheng Zhang",
"Chaofeng Chen",
"Xiaohong Liu",
"Guangtao Zhai",
"Weisi Lin"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=hjhIKttXuB | @inproceedings{
lei2024densetrack,
title={DenseTrack: Drone-based Crowd Tracking via Density-aware Motion-appearance Synergy},
author={Yi Lei and Huilin Zhu and Jingling Yuan and Guangli Xiang and Xian Zhong and Shengfeng He},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=hjhIKttXuB}
} | Drone-based crowd tracking faces difficulties in accurately identifying and monitoring objects from an aerial perspective, largely due to their small size and close proximity to each other, which complicates both localization and tracking. To address these challenges, we present the Density-aware Tracking (DenseTrack) framework. DenseTrack capitalizes on crowd counting to precisely determine object locations, blending visual and motion cues to improve the tracking of small-scale objects. It specifically addresses the problem of cross-frame motion to enhance tracking accuracy and dependability. DenseTrack employs crowd density estimates as anchors for exact object localization within video frames. These estimates are merged with motion and position information from the tracking network, with motion offsets serving as key tracking cues. Moreover, DenseTrack enhances the ability to distinguish small-scale objects using insights from the visual language model, integrating appearance with motion cues. The framework utilizes the Hungarian algorithm to ensure the accurate matching of individuals across frames. Demonstrated on DroneCrowd dataset, our approach exhibits superior performance, confirming its effectiveness in scenarios captured by drones. | DenseTrack: Drone-based Crowd Tracking via Density-aware Motion-appearance Synergy | [
"Yi Lei",
"Huilin Zhu",
"Jingling Yuan",
"Guangli Xiang",
"Xian Zhong",
"Shengfeng He"
] | Conference | poster | 2407.17272 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=hj5BoxFpLs | @inproceedings{
jiang2024taskconditional,
title={Task-Conditional Adapter for Multi-Task Dense Prediction},
author={Fengze Jiang and Shuling Wang and Xiaojin Gong},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=hj5BoxFpLs}
} | Multi-task dense prediction plays an important role in the field of computer vision and has an abundant array of applications. Its main purpose is to reduce the amount of network training parameters by sharing network parameters while using the correlation between tasks to improve overall performance. We propose a task-conditional network that handles one task at a time and shares most network parameters to achieve these goals. Inspired by adapter tuning, we propose an adapter module that focuses on both spatial- and channel-wise information to extract features from the frozen encoder backbone. This approach not only reduces the number of training parameters, but also saves training time and memory resources by attaching a parallel adapter pathway to the encoder. We additionally use learnable task prompts to model different tasks and use these prompts to adjust some parameters of adapters to fit the network to diverse tasks. These task-conditional adapters are also applied to the decoder, which enables the entire network to switch between various tasks, producing better task-specific features and achieving excellent performance. Extensive experiments on two challenging multi-task benchmarks, NYUD-v2 and PASCAL-Context, show that our approach achieves state-of-the-art performance with excellent parameter, time, and memory efficiency. The code is available at https://github.com/jfzleo/Task-Conditional-Adapter | Task-Conditional Adapter for Multi-Task Dense Prediction | [
"Fengze Jiang",
"Shuling Wang",
"Xiaojin Gong"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=hgRElsBV6v | @inproceedings{
lin2024hmpear,
title={Hm{PEAR}: A Dataset for Human Pose Estimation and Action Recognition},
author={YiTai Lin and Zhijie Wei and Wanfa Zhang and XiPing Lin and Yudi Dai and Chenglu Wen and Siqi Shen and Lan Xu and Cheng Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=hgRElsBV6v}
} | We introduce HmPEAR, a novel dataset crafted for advancing research in 3D Human Pose Estimation (3D HPE) and Human Action Recognition (HAR), with a primary focus on outdoor environments. This dataset offers a synchronized collection of imagery, LiDAR point clouds, 3D human poses, and action categories. In total, the dataset encompasses over 300,000 frames collected from 10 distinct scenes and 25 diverse subjects. Among these, 250,000 frames of data contain 3D human pose annotations captured using an advanced motion capture system and further optimized for accuracy. Furthermore, the dataset annotates 40 types of daily human actions, resulting in over 6,000 action clips. Through extensive experimentation, we have demonstrated the quality of HmPEAR and highlighted the challenges it presents to current methodologies. Additionally, we propose straightforward baselines leveraging sequential images and point clouds for 3D HPE and HAR, which underscore the mutual reinforcement between them, highlighting the potential for cross-task synergies. | HmPEAR: A Dataset for Human Pose Estimation and Action Recognition | [
"YiTai Lin",
"Zhijie Wei",
"Wanfa Zhang",
"XiPing Lin",
"Yudi Dai",
"Chenglu Wen",
"Siqi Shen",
"Lan Xu",
"Cheng Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=heU5DYjtiz | @inproceedings{
zhang2024lmmpcqa,
title={{LMM}-{PCQA}: Assisting Point Cloud Quality Assessment with {LMM}},
author={Zicheng Zhang and Haoning Wu and Yingjie Zhou and Chunyi Li and Wei Sun and Chaofeng Chen and Xiongkuo Min and Xiaohong Liu and Weisi Lin and Guangtao Zhai},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=heU5DYjtiz}
} | Although large multi-modality models (LMMs) have seen extensive exploration and application in various quality assessment studies, their integration into Point Cloud Quality Assessment (PCQA) remains unexplored. Given LMMs' exceptional performance and robustness in low-level vision and quality assessment tasks, this study aims to investigate the feasibility of imparting PCQA knowledge to LMMs through text supervision. To achieve this, we transform quality labels into textual descriptions during the fine-tuning phase, enabling LMMs to derive quality rating logits from 2D projections of point clouds. To compensate for the loss of perception in the 3D domain, structural features are extracted as well. These quality logits and structural features are then combined and regressed into quality scores. Our experimental results affirm the effectiveness of our approach, showcasing a novel integration of LMMs into PCQA that enhances model understanding and assessment accuracy. We hope our contributions can inspire subsequent investigations into the fusion of LMMs with PCQA, fostering advancements in 3D visual quality analysis and beyond. | LMM-PCQA: Assisting Point Cloud Quality Assessment with LMM | [
"Zicheng Zhang",
"Haoning Wu",
"Yingjie Zhou",
"Chunyi Li",
"Wei Sun",
"Chaofeng Chen",
"Xiongkuo Min",
"Xiaohong Liu",
"Weisi Lin",
"Guangtao Zhai"
] | Conference | oral | 2404.18203 | [
"https://github.com/zzc-1998/lmm-pcqa"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=hZYk17jJaf | @inproceedings{
zhao2024autograph,
title={AutoGraph: Enabling Visual Context via Graph Alignment in Open Domain Multi-Modal Dialogue Generation},
author={Deji Zhao and Donghong Han and Ye Yuan and Bo Ning and Li Mengxiang and Zhongjiang He and Shuangyong Song},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=hZYk17jJaf}
} | Open-domain multi-modal dialogue systems heavily rely on visual information to generate contextually relevant responses. Existing open-domain multi-modal dialogue generation methods ignore the complementary relationship between multiple modalities and are difficult to integrate with LLMs. To address these issues, we propose an automatically constructed visual context graph method, called AutoGraph. We aim to structure complex information and seamlessly integrate it with large language models (LLMs), aligning information from multiple modalities at both semantic and structural levels. Specifically, we fully connect the text graphs and scene graphs, and then trim unnecessary edges via LLMs to automatically construct a visual context graph. Next, we design several graph sampling grammars for the first time to convert graph structures into sequences suitable for LLMs. Finally, we propose a two-stage fine-tuning method to allow LLMs to understand graph sampling grammars and generate responses. The AutoGraph method is a general approach that can enhance the visual capabilities of LLMs. We validate our proposed method on text-based LLMs and visual-based LLMs, respectively. Experimental results show that our proposed method achieves state-of-the-art performance on multiple public datasets. | AutoGraph: Enabling Visual Context via Graph Alignment in Open Domain Multi-Modal Dialogue Generation | [
"Deji Zhao",
"Donghong Han",
"Ye Yuan",
"Bo Ning",
"Li Mengxiang",
"Zhongjiang He",
"Shuangyong Song"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=hV0uLXU0PY | @inproceedings{
zhang2024testtime,
title={Test-Time Training on Graphs with Large Language Models ({LLM}s)},
author={Jiaxin Zhang and Yiqi Wang and Xihong Yang and Siwei Wang and Yu Feng and Yu Shi and Ren ruichao and En Zhu and Xinwang Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=hV0uLXU0PY}
} | Graph Neural Networks have demonstrated great success in various fields of multimedia. However, the distribution shift between the training and test data challenges the effectiveness of GNNs. To mitigate this challenge, Test-Time Training (TTT) has been proposed as a promising approach. Traditional TTT methods require a demanding unsupervised training strategy to capture the information from the test data to benefit the main task. Inspired by the great annotation ability of Large Language Models (LLMs) on Text-Attributed Graphs (TAGs), we propose to enhance test-time training on graphs with LLMs as annotators. In this paper, we design a novel Test-Time Training pipeline, LLMTTT, which conducts the test-time adaptation under the annotations by LLMs on a carefully selected node set. Specifically, LLMTTT introduces a hybrid active node selection strategy that considers not only node diversity and representativeness, but also prediction signals from the pre-trained model. Given annotations from LLMs, a two-stage training strategy is designed to tailor the test-time model with the limited and noisy labels.
A theoretical analysis ensures the validity of our method and extensive experiments demonstrate that the proposed LLMTTT can achieve a significant performance improvement compared to existing Out-of-Distribution (OOD) generalization methods. | Test-Time Training on Graphs with Large Language Models (LLMs) | [
"Jiaxin Zhang",
"Yiqi Wang",
"Xihong Yang",
"Siwei Wang",
"Yu Feng",
"Yu Shi",
"Ren ruichao",
"En Zhu",
"Xinwang Liu"
] | Conference | poster | 2404.13571 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=hSCBbAhSog | @inproceedings{
xiao2024contrastive,
title={Contrastive Context-Speech Pretraining for Expressive Text-to-Speech Synthesis},
author={Yujia Xiao and Xi Wang and Xu Tan and Lei He and Xinfa Zhu and sheng zhao and Tan Lee},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=hSCBbAhSog}
} | The latest Text-to-Speech (TTS) systems can produce speech with voice quality and naturalness comparable to human speech. Yet the demand for a large amount of high-quality data from target speakers remains a significant challenge. Particularly for long-form expressive reading, training speech from the target speaker that covers rich contextual information is needed. In this paper, a novel context-aware speech pre-trained model is developed for expressive TTS based on contrastive learning. The model can be trained with abundant speech data without explicitly labelled speaker identities. It captures the intricate relationship between the speech expression of a spoken sentence and the contextual text information. By incorporating cross-modal text and speech features into the TTS model, it enables the generation of coherent and expressive speech, which is especially beneficial when there is a scarcity of target speaker data. The pre-trained model is evaluated first in the task of Context-Speech retrieval and then as an integral part of a zero-shot TTS system. Experimental results demonstrate that the pretraining framework effectively learns Context-Speech representations and significantly enhances the expressiveness of synthesized speech. Audio demos are available at: https://ccsp2024.github.io/demo/. | Contrastive Context-Speech Pretraining for Expressive Text-to-Speech Synthesis | [
"Yujia Xiao",
"Xi Wang",
"Xu Tan",
"Lei He",
"Xinfa Zhu",
"sheng zhao",
"Tan Lee"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=hQp6qimhbb | @inproceedings{
zhou2024voxinstruct,
title={VoxInstruct: Expressive Human Instruction-to-Speech Generation with Unified Multilingual Codec Language Modelling},
author={Yixuan Zhou and Xiaoyu Qin and Zeyu Jin and Shuoyi Zhou and Shun Lei and Songtao Zhou and Zhiyong Wu and Jia Jia},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=hQp6qimhbb}
} | Recent AIGC systems possess the capability to generate digital multimedia content based on human language instructions, such as text, image, and video.
However, when it comes to speech, existing methods related to human instruction-to-speech generation exhibit two limitations.
Firstly, they require the division of inputs into the content prompt (transcript) and description prompt (style and speaker), instead of directly supporting human instruction. This division is less natural in form and does not align with other AIGC models.
Secondly, the practice of utilizing an independent description prompt to model speech style, without considering the transcript content, restricts the ability to control speech at a fine-grained level.
To address these limitations, we propose VoxInstruct, a novel unified multilingual codec language modeling framework that extends traditional text-to-speech tasks into a general human instruction-to-speech task.
Our approach enhances the expressiveness of human instruction-guided speech generation and aligns the speech generation paradigm with other modalities.
To enable the model to automatically extract the content of synthesized speech from raw text instructions, we introduce speech semantic tokens as an intermediate representation for instruction-to-content guidance.
We also incorporate multiple Classifier-Free Guidance (CFG) strategies into our codec language model, which strengthens the generated speech following human instructions.
Furthermore, our model architecture and training strategies allow for the simultaneous support of combining speech prompt and descriptive human instruction for expressive speech synthesis, which is a first-of-its-kind attempt. | VoxInstruct: Expressive Human Instruction-to-Speech Generation with Unified Multilingual Codec Language Modelling | [
"Yixuan Zhou",
"Xiaoyu Qin",
"Zeyu Jin",
"Shuoyi Zhou",
"Shun Lei",
"Songtao Zhou",
"Zhiyong Wu",
"Jia Jia"
] | Conference | oral | 2408.15676 | [
"https://github.com/thuhcsi/voxinstruct"
] | https://huggingface.co/papers/2408.15676 | 0 | 0 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=hG6Z28Jml2 | @inproceedings{
lin2024crossview,
title={Cross-view Contrastive Unification Guides Generative Pretraining for Molecular Property Prediction},
author={Junyu Lin and Yan Zheng and Xinyue Chen and Yazhou Ren and Xiaorong Pu and Jing He},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=hG6Z28Jml2}
} | Multi-view based molecular property prediction has received wide attention in recent years due to its potential for downstream tasks in drug discovery. However, the consistency of different molecular view representations and the full utilization of complementary information among them in existing multi-view molecular property prediction methods remain to be further explored. Furthermore, most current methods focus on generating global representations at the graph level with information from different molecular views (e.g., 2D and 3D views), assuming that the information can be matched across views. In fact, it is not unusual that, for example, conformation changes or computational errors lead to discrepancies between views. To address these issues, we propose a new Cross-View contrastive unification guided Generative Molecular pre-trained model, called MolCVG. We first focus on extracting common and private information from 2D graph views and 3D geometric views of molecules, minimizing the impact of noise in private information on subsequent strategies. To exploit both types of information in a more refined way, we propose a cross-view contrastive unification strategy to learn cross-view global information and guide the reconstruction of masked nodes, thus effectively optimizing global features and local descriptions. Extensive experiments on real-world molecular datasets demonstrate the effectiveness of our approach for molecular property prediction tasks. | Cross-view Contrastive Unification Guides Generative Pretraining for Molecular Property Prediction | [
"Junyu Lin",
"Yan Zheng",
"Xinyue Chen",
"Yazhou Ren",
"Xiaorong Pu",
"Jing He"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=hCbSq4rpHq | @inproceedings{
mao2024tavgbench,
title={{TAVGB}ench: Benchmarking Text to Audible-Video Generation},
author={Yuxin Mao and Xuyang Shen and Jing Zhang and Zhen Qin and Jinxing Zhou and Mochu Xiang and Yiran Zhong and Yuchao Dai},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=hCbSq4rpHq}
} | The Text to Audible-Video Generation (TAVG) task involves generating videos with accompanying audio based on text descriptions. Achieving this requires skillful alignment of both audio and video elements. To support research in this field, we have developed a comprehensive Text to Audible-Video Generation Benchmark (TAVGBench), which contains over 1.7 million clips with a total duration of 11.8 thousand hours.
We propose an automatic annotation pipeline to ensure each audible video has detailed descriptions for both its audio and video contents.
We also introduce the Audio-Visual Harmoni score (AVHScore) to provide a quantitative measure of the alignment between the generated audio and video modalities.
Additionally, we present a baseline model for TAVG called TAVDiffusion, which uses a two-stream latent diffusion model to provide a fundamental starting point for further research in this area.
We achieve the alignment of audio and video by employing cross-attention and contrastive learning.
Through extensive experiments and evaluations on TAVGBench, we demonstrate the effectiveness of our proposed model under both conventional metrics and our proposed metrics. | TAVGBench: Benchmarking Text to Audible-Video Generation | [
"Yuxin Mao",
"Xuyang Shen",
"Jing Zhang",
"Zhen Qin",
"Jinxing Zhou",
"Mochu Xiang",
"Yiran Zhong",
"Yuchao Dai"
] | Conference | poster | 2404.14381 | [
"https://github.com/opennlplab/tavgbench"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=h9t3wUJsHl | @inproceedings{
wang2024tangramsplatting,
title={Tangram-Splatting: Optimizing 3D Gaussian Splatting Through Tangram-inspired Shape Priors},
author={Yi Wang and Ningze Zhong and Minglin Chen and Longguang Wang and Yulan Guo},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=h9t3wUJsHl}
} | With the growth of the VR and AR industry, 3D reconstruction has become an increasingly important topic in multimedia. Although 3D Gaussian Splatting is the state-of-the-art method for 3D reconstruction, it needs a large number of Gaussians to fit a 3D scene due to the Gibbs Phenomenon. The pursuit of compressing 3D Gaussian Splatting and reducing memory overhead has long been a focal point. Embarking on this trajectory, our study delves into this domain, aiming to mitigate these challenges. Inspired by the tangram, an ancient Chinese puzzle, we introduce a novel methodology (Tangram-Splatting) that leverages shape priors to optimize 3D scene fitting. Central to our approach is a pioneering technique that diversifies Gaussian function types while preserving algorithmic efficiency. Through exhaustive experimentation, we demonstrate that our method achieves a remarkable average reduction of 62.4% in the memory consumption used to store optimized parameters and decreases the training time by at least 10 minutes, with only marginal sacrifices in PSNR performance, typically under 0.3 dB, and our algorithm is even better on some datasets. This reduction in memory burden is of paramount significance for real-world applications, mitigating the substantial memory footprint and transmission burden traditionally associated with such algorithms. Our algorithm underscores the profound potential of Tangram-Splatting in advancing multimedia applications. | Tangram-Splatting: Optimizing 3D Gaussian Splatting Through Tangram-inspired Shape Priors | [
"Yi Wang",
"Ningze Zhong",
"Minglin Chen",
"Longguang Wang",
"Yulan Guo"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=h6mlfgzB44 | @inproceedings{
fang2024not,
title={Not All Inputs Are Valid: Towards Open-Set Video Moment Retrieval using Language},
author={Xiang Fang and Wanlong Fang and Daizong Liu and Xiaoye Qu and Jianfeng Dong and Pan Zhou and Renfu Li and Zichuan Xu and Lixing Chen and Panpan Zheng and Yu Cheng},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=h6mlfgzB44}
} | As a significant yet challenging multimedia task, Video Moment Retrieval (VMR) aims to retrieve the specific moment corresponding to a sentence query from an untrimmed video. Although recent respectable works have made remarkable progress in this task, they are implicitly rooted in the closed-set assumption that all the given queries are video-relevant. Given a video-irrelevant OOD query in open-set scenarios, they still utilize it for retrieval and return a wrong moment, which might lead to irrecoverable losses in high-risk scenarios, e.g., criminal activity detection. To this end, we creatively explore a brand-new VMR setting termed Open-Set Video Moment Retrieval (OS-VMR), where we should not only retrieve the precise moments based on ID queries, but also reject OOD queries. In this paper, we make the first attempt to step toward OS-VMR and propose a novel model OpenVMR, which first distinguishes ID and OOD queries based on normalizing flow technology, and then conducts moment retrieval based on ID queries. Specifically, we first learn the ID distribution by constructing a normalizing flow, and assume the ID query distribution obeys a multi-variate Gaussian distribution. Then, we introduce an uncertainty score to search for the ID-OOD separating boundary. After that, we refine the ID-OOD boundary by pulling together ID query features. Besides, video-query matching and frame-query matching are designed for coarse-grained and fine-grained cross-modal interaction, respectively. Finally, a positive-unlabeled learning module is introduced for moment retrieval. Experimental results on three challenging datasets demonstrate the effectiveness of our OpenVMR. | Not All Inputs Are Valid: Towards Open-Set Video Moment Retrieval using Language | [
"Xiang Fang",
"Wanlong Fang",
"Daizong Liu",
"Xiaoye Qu",
"Jianfeng Dong",
"Pan Zhou",
"Renfu Li",
"Zichuan Xu",
"Lixing Chen",
"Panpan Zheng",
"Yu Cheng"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=h3UFAF6sdS | @inproceedings{
yuan2024continual,
title={Continual Panoptic Perception: Towards Multi-modal Incremental Interpretation of Remote Sensing Images},
author={Bo Yuan and Danpei Zhao and Zhuoran Liu and Wentao Li and Tian Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=h3UFAF6sdS}
} | Continual learning (CL) breaks off the one-way training manner and enables a model to adapt to new data, semantics and tasks continuously. However, current CL methods mainly focus on single tasks. Besides, CL models are plagued by catastrophic forgetting and semantic drift since the lack of old data, which often occurs in remote-sensing interpretation due to the intricate fine-grained semantics. In this paper, we propose Continual Panoptic Perception (CPP), a unified continual learning model that leverages multi-task joint learning covering pixel-level classification, instance-level segmentation and image-level perception for universal interpretation in remote sensing images. Concretely, we propose a collaborative cross-modal encoder (CCE) to extract the input image features, which supports pixel classification and caption generation synchronously. To inherit the knowledge from the old model without exemplar memory, we propose a task-interactive knowledge distillation (TKD) method, which leverages cross-modal optimization and task-asymmetric pseudo-labeling (TPL) to alleviate catastrophic forgetting. Furthermore, we also propose a joint optimization mechanism to achieve end-to-end multi-modal panoptic perception. Experimental results on the fine-grained panoptic perception dataset validate the effectiveness of the proposed model, and also prove that joint optimization can boost sub-task CL efficiency with over 13% relative improvement on PQ. | Continual Panoptic Perception: Towards Multi-modal Incremental Interpretation of Remote Sensing Images | [
"Bo Yuan",
"Danpei Zhao",
"Zhuoran Liu",
"Wentao Li",
"Tian Li"
] | Conference | poster | 2407.14242 | [
"https://github.com/YBIO/CPP"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=gz2MPUxW39 | @inproceedings{
chen2024deconfounded,
title={Deconfounded Emotion Guidance Sticker Selection with Causal Inference},
author={Jiali Chen and Yi Cai and Ruohang Xu and Jiexin Wang and Jiayuan Xie and Qing Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=gz2MPUxW39}
} | With the increasing popularity of online social applications, stickers have become common in online chats. Teaching a model to select the appropriate sticker from a set of candidate stickers based on dialogue context is important for optimizing the user experience.
Existing methods have proposed leveraging emotional information to facilitate the selection of appropriate stickers. However, considering the frequent co-occurrence among sticker images, words with emotional preference in the dialogue and emotion labels, these methods tend to over-rely on such dataset bias, inducing spurious correlations during training. As a result, these methods may select inappropriate stickers that do not match users' intended expression. In this paper, we introduce a causal graph to explicitly identify the spurious correlations in the sticker selection task. Building upon the analysis, we propose a Causal Knowledge-Enhanced Sticker Selection model to mitigate spurious correlations. Specifically, we design a knowledge-enhanced emotional utterance extractor to identify emotional information within dialogues. Then an interventional visual feature extractor is employed to obtain unbiased visual features, aligning them with the emotional utterances representation. Finally, a standard transformer encoder fuses the multimodal information for emotion recognition and sticker selection. Extensive experiments on the MOD dataset show that our CKS model significantly outperforms the baseline models. | Deconfounded Emotion Guidance Sticker Selection with Causal Inference | [
"Jiali Chen",
"Yi Cai",
"Ruohang Xu",
"Jiexin Wang",
"Jiayuan Xie",
"Qing Li"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=gya4aLgYTa | @inproceedings{
tang2024minigptd,
title={Mini{GPT}-3D: Efficiently Aligning 3D Point Clouds with Large Language Models using 2D Priors},
author={Yuan Tang and Xu Han and Xianzhi Li and Qiao Yu and yixue Hao and Long Hu and Min Chen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=gya4aLgYTa}
} | Large 2D vision-language models (2D-LLMs) have gained significant attention by bridging Large Language Models (LLMs) with images using a simple projector. Inspired by their success, large 3D point cloud-language models (3D-LLMs) also integrate point clouds into LLMs. However, directly aligning point clouds with LLM requires expensive training costs, typically in hundreds of GPU-hours on A100, which hinders the development of 3D-LLMs. In this paper, we introduce MiniGPT-3D, an efficient and powerful 3D-LLM that achieves multiple SOTA results while training for only 27 hours on one RTX 3090. Specifically, we propose to align 3D point clouds with LLMs using 2D priors from 2D-LLMs, which can leverage the similarity between 2D and 3D visual information. We introduce a novel four-stage training strategy for modality alignment in a cascaded way, and a mixture of query experts module to adaptively aggregate features with high efficiency. Moreover, we utilize parameter-efficient fine-tuning methods LoRA and Norm fine-tuning, resulting in only 47.8M learnable parameters, which is up to 260x fewer than existing methods. Extensive experiments show that MiniGPT-3D achieves SOTA on 3D object classification and captioning tasks, with significantly cheaper training costs. Notably, MiniGPT-3D gains an 8.12 increase on GPT-4 evaluation score for the challenging object captioning task compared to ShapeLLM-13B, while the latter costs 160 total GPU-hours on 8 A800. We are the first to explore the efficient 3D-LLM, offering new insights to the community. We will release the code and weights after review. | MiniGPT-3D: Efficiently Aligning 3D Point Clouds with Large Language Models using 2D Priors | [
"Yuan Tang",
"Xu Han",
"Xianzhi Li",
"Qiao Yu",
"yixue Hao",
"Long Hu",
"Min Chen"
] | Conference | poster | 2405.01413 | [
"https://github.com/tangyuan96/minigpt-3d"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=gyBUTDsUDR | @inproceedings{
liu2024rose,
title={RoSe: Rotation-Invariant Sequence-Aware Consensus for Robust Correspondence Pruning},
author={Yizhang Liu and Weiwei Zhou and Yanping Li and Shengjie Zhao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=gyBUTDsUDR}
} | Correspondence pruning has recently drawn considerable attention as a crucial step in image matching. Existing methods typically achieve this by constructing neighborhoods for each feature point and imposing neighborhood consistency. However, the nearest-neighbor matching strategy often results in numerous many-to-one correspondences, thereby reducing the reliability of neighborhood information. Furthermore, the smoothness constraint fails in cases of large-scale rotations, leading to misjudgments. To address the above issues, this paper proposes a novel robust correspondence pruning method termed RoSe, which is based on rotation-invariant sequence-aware consensus. We formulate the correspondence pruning problem as a mathematical optimization problem and derive a closed-form solution. Specifically, we devise a rectified local neighborhood construction strategy that effectively enlarges the distribution between inliers and outliers. Meanwhile, to accommodate large-scale rotation, we propose a relative sequence-aware consistency as an alternative to existing smoothness constraints, which can better characterize the topological structure of inliers. Experimental results on image matching and registration tasks demonstrate the effectiveness of our method. Robustness analysis involving diverse feature descriptors and varying rotation degrees further showcases the efficacy of our method. | RoSe: Rotation-Invariant Sequence-Aware Consensus for Robust Correspondence Pruning | [
"Yizhang Liu",
"Weiwei Zhou",
"Yanping Li",
"Shengjie Zhao"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=gvrDYlQXxw | @inproceedings{
wu2024compacter,
title={Compacter: A Lightweight Transformer for Image Restoration},
author={Zhijian Wu and Jun Li and Yang Hu and Dingjiang Huang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=gvrDYlQXxw}
} | Although deep learning-based methods have made significant advances in the field of image restoration (IR), they often suffer from excessive model parameters. To tackle this problem, this work proposes a compact Transformer (Compacter) for lightweight image restoration by making several key designs. We employ the concepts of projection sharing, adaptive interaction, and heterogeneous aggregation to develop a novel Compact Adaptive Self-Attention (CASA). Specifically, CASA utilizes shared projection to generate Query, Key, and Value to simultaneously model spatial and channel-wise self-attention. The adaptive interaction process is then used to propagate and integrate global information from two different dimensions, thus enabling omnidirectional relational interaction. Finally, a depth-wise convolution is incorporated on Value to complement heterogeneous local information, enabling global-local coupling. Moreover, we propose a Dual Selective Gated Module (DSGM) to dynamically encapsulate the globality into each pixel for context-adaptive aggregation. Extensive experiments demonstrate that our Compacter achieves state-of-the-art performance for a variety of lightweight IR tasks with approximately 400K parameters. | Compacter: A Lightweight Transformer for Image Restoration | [
"Zhijian Wu",
"Jun Li",
"Yang Hu",
"Dingjiang Huang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=gsdQJCWWXl | @inproceedings{
hong2024navigating,
title={Navigating Beyond Instructions: Vision-and-Language Navigation in Obstructed Environments},
author={Haodong Hong and Sen Wang and Zi Huang and Qi Wu and Jiajun Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=gsdQJCWWXl}
} | Real-world navigation often involves dealing with unexpected obstructions such as closed doors, moved objects, and unpredictable entities. However, mainstream Vision-and-Language Navigation (VLN) tasks typically assume instructions perfectly align with the fixed and predefined navigation graphs without any obstructions. This assumption overlooks potential discrepancies in actual navigation graphs and given instructions, which can cause major failures for both indoor and outdoor agents. To address this issue, we integrate diverse obstructions into the R2R dataset by modifying both the navigation graphs and visual observations, introducing an innovative dataset and task, R2R with UNexpected Obstructions (R2R-UNO). R2R-UNO contains various types and numbers of path obstructions to generate instruction-reality mismatches for VLN research. Experiments on R2R-UNO reveal that state-of-the-art VLN methods inevitably encounter significant challenges when facing such mismatches, indicating that they rigidly follow instructions rather than navigate adaptively. Therefore, we propose a novel method called ObVLN (Obstructed VLN), which includes a curriculum training strategy and virtual graph construction to help agents effectively adapt to obstructed environments. Empirical results show that ObVLN not only maintains robust performance in unobstructed scenarios but also achieves a substantial performance advantage with unexpected obstructions. The source code is available at \url{https://anonymous.4open.science/r/ObstructedVLN-D579}. | Navigating Beyond Instructions: Vision-and-Language Navigation in Obstructed Environments | [
"Haodong Hong",
"Sen Wang",
"Zi Huang",
"Qi Wu",
"Jiajun Liu"
] | Conference | oral | 2407.21452 | [
"https://github.com/honghd16/ObstructedVLN"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=grQ5lJthmM | @inproceedings{
bi2024prifu,
title={Pri{FU}: Capturing Task-Relevant Information Without Adversarial Learning},
author={Xiuli Bi and Yang Hu and Bo Liu and Weisheng Li and Pamela Cosman and Bin Xiao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=grQ5lJthmM}
} | As machine learning advances, machine learning as a service (MLaaS) in the cloud brings convenience to human lives but also privacy risks, as powerful neural networks used for generation, classification or other tasks can also become privacy snoopers. This motivates privacy preservation in the inference phase. Many approaches for preserving privacy in the inference phase introduce multi-objective functions, training models to remove specific private information from users' uploaded data. Although effective, these adversarial learning-based approaches suffer not only from convergence difficulties, but also from limited generalization beyond the specific privacy for which they are trained. To address these issues, we propose a method for privacy preservation in the inference phase by removing task-irrelevant information, which requires no knowledge of the privacy attacks nor introduction of adversarial learning. Specifically, we introduce a metric to distinguish task-irrelevant information from task-relevant information, and achieve more efficient metric estimation to remove task-irrelevant features. The experiments demonstrate the potential of our method in several tasks. | PriFU: Capturing Task-Relevant Information Without Adversarial Learning | [
"Xiuli Bi",
"Yang Hu",
"Bo Liu",
"Weisheng Li",
"Pamela Cosman",
"Bin Xiao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=gm7MVAhN3v | @inproceedings{
chen2024joint,
title={Joint Homophily and Heterophily Relational Knowledge Distillation for Efficient and Compact 3D Object Detection},
author={Shidi Chen and Lili Wei and Liqian Liang and Congyan Lang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=gm7MVAhN3v}
} | 3D Object Detection (3DOD) aims to accurately locate and identify 3D objects in point clouds, facing the challenge of balancing model performance with computational efficiency. Knowledge distillation emerges as a vital method for model compression in 3DOD, transferring knowledge from complex, larger models to smaller, efficient ones. However, the effectiveness of these methods is constrained by the intrinsic sparsity and structural complexity of point clouds. In this paper, we propose a novel methodology termed Joint Homophily and Heterophily Relational Knowledge Distillation (H2RKD) to distill robust relational knowledge in point clouds, thereby enhancing intra-object similarity and refining inter-object distinction. This unified strategy encompasses the integration of Collaborative Global Distillation (CGD) for distilling global relational knowledge across both distance and angular dimensions, and Separate Local Distillation (SLD) for a focused distillation of local relational dynamics. By seamlessly leveraging the relational dynamics within point clouds, H2RKD facilitates a comprehensive knowledge transfer, significantly advancing 3D object detection capabilities. Extensive experiments on the KITTI and nuScenes datasets demonstrate the effectiveness of the proposed H2RKD. | Joint Homophily and Heterophily Relational Knowledge Distillation for Efficient and Compact 3D Object Detection | [
"Shidi Chen",
"Lili Wei",
"Liqian Liang",
"Congyan Lang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=gkdqXsTvWO | @inproceedings{
chen2024connectivitybased,
title={Connectivity-based Cerebrovascular Segmentation in Time-of-Flight Magnetic Resonance Angiography},
author={Zan Chen and Xiao Yu and Yuanjing Feng},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=gkdqXsTvWO}
} | Accurate segmentation of cerebrovascular structures from TOF-MRA is vital for treating cerebrovascular diseases. However, existing methods rely on voxel categorization, leading to discontinuities in fine vessel locations. We propose a connectivity-based cerebrovascular segmentation method that considers inter-voxel relationships to overcome this limitation. By modeling connectivity, we transform voxel classification into predicting inter-voxel connectivity. Given cerebrovascular structures' sparse and widely distributed nature, we employ sparse 3D Bi-level routing attention to reduce computational overhead while effectively capturing cerebrovascular features. To enhance directional information extraction, we utilize the 3D-direction excitation block. Additionally, the 3D-direction interactive block continuously augments direction information in the feature map and sends it to the skip connection. We compare our method with current state-of-the-art cerebrovascular segmentation techniques and classical medical image segmentation methods using clinical and open cerebrovascular datasets. Our method demonstrates superior performance, outperforming existing approaches. Ablation experiments further validate the effectiveness of our proposed method. | Connectivity-based Cerebrovascular Segmentation in Time-of-Flight Magnetic Resonance Angiography | [
"Zan Chen",
"Xiao Yu",
"Yuanjing Feng"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=giE4a5ihMR | @inproceedings{
wang2024cascaded,
title={Cascaded Adversarial Attack: Simultaneously Fooling Rain Removal and Semantic Segmentation Networks},
author={Zhiwen Wang and Yuhui Wu and Zheng WANG and Jiwei Wei and Tianyu Li and Guoqing Wang and Yang Yang and Heng Tao Shen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=giE4a5ihMR}
} | When applying high-level visual algorithms to rainy scenes, it is customary to preprocess the rainy images using low-level rain removal networks, followed by visual networks to achieve the desired objectives. Such a setting has never been explored by adversarial attack methods, which are only limited to attacking one kind of them. Considering the deficiency of multi-functional attacking strategies and the significance for open-world perception scenarios, we are the first to propose a Cascaded Adversarial Attack (CAA) setting, where the adversarial example can simultaneously attack different-level tasks, such as rain removal and semantic segmentation in an integrated system. Specifically, our attack on the rain removal network aims to preserve rain streaks in the output image, while for the semantic segmentation network, we employ powerful existing adversarial attack methods to induce misclassification of the image content. Importantly, CAA innovatively utilizes binary masks to effectively concentrate the aforementioned two significantly disparate perturbation distributions on the input image, enabling attacks on both networks. Additionally, we propose two variants of CAA, which minimize the differences between the two generated perturbations by introducing a carefully designed perturbation interaction mechanism, resulting in enhanced attack performance. Extensive experiments validate the effectiveness of our methods, demonstrating their superior ability to significantly degrade the performance of the downstream task compared to methods that solely attack a single network. | Cascaded Adversarial Attack: Simultaneously Fooling Rain Removal and Semantic Segmentation Networks | [
"Zhiwen Wang",
"Yuhui Wu",
"Zheng WANG",
"Jiwei Wei",
"Tianyu Li",
"Guoqing Wang",
"Yang Yang",
"Heng Tao Shen"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=ghVe4Tn2HH | @inproceedings{
luo2024d,
title={3D Gaussian Editing with A Single Image},
author={Guan Luo and Tian-Xing Xu and Ying-Tian Liu and Xiaoxiong Fan and Fang-Lue Zhang and Song-Hai Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=ghVe4Tn2HH}
} | The modeling and manipulation of 3D scenes captured from the real world are pivotal in various applications, attracting growing research interest. While previous works on editing have achieved interesting results through manipulating 3D meshes, they often require accurately reconstructed meshes to perform editing, which limits their application in 3D content generation. To address this gap, we introduce a novel single-image-driven 3D scene editing approach based on 3D Gaussian Splatting, enabling intuitive manipulation via directly editing the content on a 2D image plane. Our method learns to optimize the 3D Gaussians to align with an edited version of the image rendered from a user-specified viewpoint of the original scene. To capture long-range object deformation, we introduce positional loss into the optimization process of 3D Gaussian Splatting and enable gradient propagation through reparameterization. To handle occluded 3D Gaussians when rendering from the specified viewpoint, we build an anchor-based structure and employ a coarse-to-fine optimization strategy capable of handling long-range deformation while maintaining structural stability. Furthermore, we design a novel masking strategy that adaptively identifies non-rigid deformation regions for fine-scale modeling. Extensive experiments show the effectiveness of our method in handling geometric details, long-range, and non-rigid deformation, demonstrating superior editing flexibility and quality compared to previous approaches. | 3D Gaussian Editing with A Single Image | [
"Guan Luo",
"Tian-Xing Xu",
"Ying-Tian Liu",
"Xiaoxiong Fan",
"Fang-Lue Zhang",
"Song-Hai Zhang"
] | Conference | poster | 2408.07540 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=ga3z0S7Ahg | @inproceedings{
yan2024categoryprompt,
title={Category-Prompt Refined Feature Learning for Long-Tailed Multi-Label Image Classification},
author={Jiexuan Yan and Sheng Huang and Nankun Mu and Luwen Huangfu and Bo Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=ga3z0S7Ahg}
} | Real-world data consistently exhibits a long-tailed distribution, often spanning multiple categories. This complexity underscores the challenge of content comprehension, particularly in scenarios requiring long-tailed multi-label image classification (LTMLC). In such contexts, imbalanced data distribution and multi-object recognition pose significant hurdles. To address this issue, we propose a novel and effective approach for LTMLC, termed Category-Prompt Refined Feature Learning (CPRFL), utilizing semantic correlations between different categories and decoupling category-specific visual representations for each category. Specifically, CPRFL initializes category-prompts from the pretrained CLIP's embeddings and decouples category-specific visual representations through interaction with visual features, thereby facilitating the establishment of semantic correlations between the head and tail classes. To mitigate the visual-semantic domain bias, we design a progressive Dual-Path Back-Propagation mechanism to refine the prompts by progressively incorporating context-related visual information into prompts. Simultaneously, the refinement process facilitates the progressive purification of the category-specific visual representations under the guidance of the refined prompts. Furthermore, taking into account the negative-positive sample imbalance, we adopt the Asymmetric Loss as our optimization objective to suppress negative samples across all classes and potentially enhance the head-to-tail recognition performance. We validate the effectiveness of our method on two LTMLC benchmarks and extensive experiments demonstrate the superiority of our work over baselines. | Category-Prompt Refined Feature Learning for Long-Tailed Multi-Label Image Classification | [
"Jiexuan Yan",
"Sheng Huang",
"Nankun Mu",
"Luwen Huangfu",
"Bo Liu"
] | Conference | poster | 2408.08125 | [
"https://github.com/jiexuanyan/cprfl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=gYxocD2XGO | @inproceedings{
chen2024efficiency,
title={Efficiency in Focus: LayerNorm as a Catalyst for Fine-tuning Medical Visual Language Models},
author={Jiawei Chen and Dingkang Yang and Yue Jiang and Mingcheng Li and Jinjie Wei and Xiaolu Hou and Lihua Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=gYxocD2XGO}
} | In the realm of Medical Visual Language Models (VLMs), the quest for universal, efficient fine-tuning mechanisms remains paramount yet largely unexplored, especially given that researchers in interdisciplinary fields are often extremely short of training resources.
Most current Parameter-Efficient Fine-Tuning (PEFT) methods not only have not been comprehensively evaluated on Med-VLMs but also mostly focus on adding components to the model's structure or input. However, fine-tuning intrinsic model components often yields better generality and consistency, and its impact on the ultimate performance of Med-VLMs has been widely overlooked and remains understudied. In this paper, we endeavour to explore an alternative to traditional PEFT methods, especially the impact of fine-tuning LayerNorm and Attention layers on Med-VLMs. Our comprehensive study spans both small-scale and large-scale Med-VLMs, evaluating their performance under various fine-tuning paradigms across tasks such as Medical Visual Question Answering and Medical Imaging Report Generation. The findings reveal that fine-tuning solely the LayerNorm layers not only surpasses the efficiency of traditional PEFT methods but also retains the model's accuracy and generalization capabilities across a spectrum of medical downstream tasks. The experiments demonstrate LayerNorm fine-tuning's superior adaptability and scalability, particularly in the context of large-scale medical VLMs. We hope this work will contribute to the ongoing discourse on optimizing efficient fine-tuning strategies for medical VLMs. | Efficiency in Focus: LayerNorm as a Catalyst for Fine-tuning Medical Visual Language Models | [
"Jiawei Chen",
"Dingkang Yang",
"Yue Jiang",
"Mingcheng Li",
"Jinjie Wei",
"Xiaolu Hou",
"Lihua Zhang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=gShfQto5eL | @inproceedings{
gao2024define,
title={De-fine: Decomposing and Refining Visual Programs with Auto-Feedback},
author={Minghe Gao and Juncheng Li and Hao Fei and Liang Pang and Wei Ji and Guoming Wang and Zheqi Lv and Wenqiao Zhang and Siliang Tang and Yueting Zhuang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=gShfQto5eL}
} | Visual programming, a modular and generalizable paradigm, integrates different modules and Python operators to solve various vision-language tasks. Unlike end-to-end models that need task-specific data, it excels at performing visual processing and reasoning in an unsupervised manner. Current visual programming methods generate programs in a single pass for each task; the ability to evaluate and optimize based on feedback is, unfortunately, lacking, which consequently limits their effectiveness for complex, multi-step problems. Drawing inspiration from Benders decomposition, we introduce De-fine, a training-free framework that automatically decomposes complex tasks into simpler subtasks and refines programs through auto-feedback. This model-agnostic approach can improve logical reasoning performance by integrating the strengths of multiple models. Our experiments across various visual tasks show that De-fine creates more accurate and robust programs, setting new benchmarks in the field. The anonymous project is available at https://anonymous.4open.science/r/De-fine_Program-FE15 | De-fine: Decomposing and Refining Visual Programs with Auto-Feedback | [
"Minghe Gao",
"Juncheng Li",
"Hao Fei",
"Liang Pang",
"Wei Ji",
"Guoming Wang",
"Zheqi Lv",
"Wenqiao Zhang",
"Siliang Tang",
"Yueting Zhuang"
] | Conference | oral | 2311.12890 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=gQLvhZiUuA | @inproceedings{
sun2024d,
title={3D Question Answering for City Scene Understanding},
author={Penglei Sun and Yaoxian Song and Xiang Liu and Xiaofei Yang and Qiang Wang and tiefeng li and Yang YANG and Xiaowen Chu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=gQLvhZiUuA}
} | 3D multimodal question answering (MQA) plays a crucial role in scene understanding by enabling intelligent agents to comprehend their surroundings in 3D environments.
While existing research has primarily focused on indoor household tasks and outdoor roadside autonomous driving tasks, there has been limited exploration of city-level scene understanding tasks.
Furthermore, existing research faces challenges in understanding city scenes, due to the absence of spatial semantic information and human-environment interaction information at the city level.
To address these challenges, we investigate 3D MQA from both dataset and method perspectives.
From the dataset perspective, we introduce a novel 3D MQA dataset named City-3DQA for city-level scene understanding,
which is the first dataset to incorporate scene semantic and human-environment interactive tasks within the city.
From the method perspective, we propose a Scene graph enhanced City-level Understanding method (Sg-CityU), which utilizes the scene graph to introduce the spatial semantic.
A new benchmark is reported and our proposed Sg-CityU achieves accuracy of 63.94 % and 63.76 % in different settings of City-3DQA.
Compared to indoor 3D MQA methods and zero-shot using advanced large language models (LLMs), Sg-CityU demonstrates state-of-the-art (SOTA) performance in robustness and generalization.
Our dataset and code are available on our project website\footnote{\url{https://sites.google.com/view/city3dqa/}}. | 3D Question Answering for City Scene Understanding | [
"Penglei Sun",
"Yaoxian Song",
"Xiang Liu",
"Xiaofei Yang",
"Qiang Wang",
"tiefeng li",
"Yang YANG",
"Xiaowen Chu"
] | Conference | poster | 2407.17398 | [
""
] | https://huggingface.co/papers/2407.17398 | 4 | 21 | 4 | 8 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=gP6pKHuIaP | @inproceedings{
tang2024symattack,
title={SymAttack: Symmetry-aware Imperceptible Adversarial Attacks on 3D Point Clouds},
author={Keke Tang and Zhensu Wang and Weilong Peng and Lujie Huang and Le Wang and Peican Zhu and Wenping Wang and Zhihong Tian},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=gP6pKHuIaP}
} | Adversarial attacks on point clouds are crucial for assessing and improving the adversarial robustness of 3D deep learning models. Despite leveraging various geometric constraints, current adversarial attack strategies often suffer from inadequate imperceptibility. Given that adversarial perturbations tend to disrupt the inherent symmetry in objects, we recognize this disruption as the primary cause of the lack of imperceptibility in these attacks. In this paper, we introduce a novel framework, symmetry-aware imperceptible adversarial attacks on 3D point clouds (SymAttack), to address this issue. Our approach starts by identifying part- and patch-level symmetry elements, and grouping points based on semantic and Euclidean distances, respectively. During the adversarial attack iterations, we intentionally adjust the perturbation vectors on symmetric points relative to their symmetry plane. By preserving symmetry within the attack process, SymAttack significantly enhances imperceptibility. Extensive experiments validate the effectiveness of SymAttack in generating imperceptible adversarial point clouds, demonstrating its superiority over the state-of-the-art methods. Codes will be made public upon paper acceptance. | SymAttack: Symmetry-aware Imperceptible Adversarial Attacks on 3D Point Clouds | [
"Keke Tang",
"Zhensu Wang",
"Weilong Peng",
"Lujie Huang",
"Le Wang",
"Peican Zhu",
"Wenping Wang",
"Zhihong Tian"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=gJxDojrjaQ | @inproceedings{
sun2024eggen,
title={{EGG}en: Image Generation with Multi-entity Prior Learning through Entity Guidance},
author={Zhenhong Sun and Junyan Wang and Zhiyu Tan and Daoyi Dong and Hailan Ma and Hao Li and Dong Gong},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=gJxDojrjaQ}
} | Diffusion models have shown remarkable prowess in text-to-image synthesis and editing, yet they often stumble when tasked with interpreting complex prompts that describe multiple entities with specific attributes and interrelations. The generated images often contain inconsistent multi-entity representation (IMR), reflected as inaccurate presentations of the multiple entities and their attributes. Although providing spatial layout guidance improves the multi-entity generation quality in existing works, it is still challenging to handle the leakage attributes and avoid unnatural characteristics. To address the IMR challenge, we first conduct in-depth analyses of the diffusion process and attention operation, revealing that the IMR challenges largely stem from the process of cross-attention mechanisms. According to the analyses, we introduce the entity guidance generation mechanism, which maintains the integrity of the original diffusion model parameters by integrating plug-in networks. Our work advances the stable diffusion model by segmenting comprehensive prompts into distinct entity-specific prompts with bounding boxes, enabling a transition from multi-entity to single-entity generation in cross-attention layers. More importantly, we introduce entity-centric cross-attention layers that focus on individual entities to preserve their uniqueness and accuracy, alongside global entity alignment layers that refine cross-attention maps using multi-entity priors for precise positioning and attribute accuracy. Additionally, a linear attenuation module is integrated to progressively reduce the influence of these layers during inference, preventing oversaturation and preserving generation fidelity. Our comprehensive experiments demonstrate that this entity guidance generation enhances existing text-to-image models in generating detailed, multi-entity images. | EGGen: Image Generation with Multi-entity Prior Learning through Entity Guidance | [
"Zhenhong Sun",
"Junyan Wang",
"Zhiyu Tan",
"Daoyi Dong",
"Hailan Ma",
"Hao Li",
"Dong Gong"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=gFXwwyMCZ9 | @inproceedings{
xu2024point,
title={Point Cloud Upsampling With Geometric Algebra Driven Inverse Heat Dissipation},
author={Wenqiang Xu and Wenrui Dai and Ziyang Zheng and Chenglin Li and Junni Zou and Hongkai Xiong},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=gFXwwyMCZ9}
} | Point cloud upsampling is crucial for 3D reconstruction, with recent research significantly benefitting from the advances in deep learning technologies. The majority of existing methods, which focus on a sequence of processes including feature extraction, augmentation, and the reconstruction of coordinates, encounter significant challenges in interpreting the geometric attributes they uncover, particularly with respect to the intricacies of transitioning feature dimensionality. In this paper, we delve deeper into modeling Partial Differential Equations (PDEs) specifically tailored for the inverse heat dissipation process in dense point clouds. Our goal is to detect gradients within the dense point cloud data distribution and refine the accuracy of interpolated points’ positions along with their complex geometric nuances through a systematic iterative approximation method. Simultaneously, we adopt multivectors from geometric algebra as the primary tool for representing the geometric characteristics of point clouds, moving beyond the conventional vector space representations. The use of geometric products of multivectors enables us to capture the complex relationships between scalars, vectors, and their components more effectively. This methodology not only offers a robust framework for depicting the geometric features of point clouds but also enhances our modeling capabilities for inverse heat dissipation PDEs. Through both qualitative and quantitative assessments, we demonstrate that our results significantly outperform existing state-of-the-art techniques in terms of widely recognized point cloud evaluation metrics and 3D visual reconstruction fidelity. | Point Cloud Upsampling With Geometric Algebra Driven Inverse Heat Dissipation | [
"Wenqiang Xu",
"Wenrui Dai",
"Ziyang Zheng",
"Chenglin Li",
"Junni Zou",
"Hongkai Xiong"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=gFXXDAZZjL | @inproceedings{
kuang2024latent,
title={Latent Representation Reorganization for Face Privacy Protection},
author={Zhenzhong Kuang and Jianan Lu and Chenhui Hong and Haobin Huang and Suguo Zhu and Xiaowei Zhao and Jun Yu and Jianping Fan},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=gFXXDAZZjL}
} | The issue of face privacy protection has aroused wide social concern along with the increasing applications of face images. The latest methods focus on achieving a good privacy-utility tradeoff so that the protected results can still be used to support downstream computer vision tasks. However, they may suffer from limited flexibility in manipulating this tradeoff because the practical requirements may vary under different scenarios. In this paper, we present a two-stage latent representation reorganization (LReOrg) framework for face image privacy protection relying on our conditional bidirectional network, which is optimized by using a distinct keyword-based swap training strategy with a multi-task loss. The privacy-sensitive information is anonymized in the first stage and the destroyed useful information is recovered in the second stage according to user requirements. LReOrg is advantageous in: (a) enabling users to recurrently process fine-grained attributes; (b) providing flexible control over the privacy-utility tradeoff by manipulating which attributes to anonymize or preserve using cross-modal keywords; and (c) eliminating the need for data annotations for network training. The experimental results on benchmark datasets demonstrate the superior ability of our approach to provide flexible protection of facial information. | Latent Representation Reorganization for Face Privacy Protection | [
"Zhenzhong Kuang",
"Jianan Lu",
"Chenhui Hong",
"Haobin Huang",
"Suguo Zhu",
"Xiaowei Zhao",
"Jun Yu",
"Jianping Fan"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=gEI5nN0Fpx | @inproceedings{
liang2024high,
title={High Fidelity Aggregated Planar Prior Assisted PatchMatch Multi-View Stereo},
author={Jie Liang and Rongjie Wang and Rui Peng and Zhe Zhang and Kaiqiang Xiong and Ronggang Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=gEI5nN0Fpx}
} | The quality of 3D models reconstructed by PatchMatch Multi-View Stereo remains a challenging problem due to unreliable photometric consistency in object boundaries and textureless areas. Since textureless areas usually exhibit strong planarity, previous methods used planar priors and significantly improved the reconstruction performance. However, their planar prior ignores the depth discontinuity at the object boundary, making the boundary inaccurate (not sharp). In addition, due to the unreliable planar models in large-scale low-textured objects, the reconstruction results are incomplete. To address the above issues, we introduce the segmentation generated by the Segment Anything Model into PM pipelines for the first time. We use segmentation to determine whether the depth is continuous based on the characteristics of segmentation and depth sharing boundaries. Then we segment planes at object boundaries and enhance the consistency of planes in objects. Specifically, we construct a $\textbf{Boundary Plane}$ that fits the object boundary and an $\textbf{Object Plane}$ to increase the consistency of planes in large-scale textureless objects. Finally, we use a probability graph model to calculate the $\textbf{Aggregated Prior guided by Multiple Planes}$ and embed it into the matching cost. The experimental results indicate that our method achieves state-of-the-art performance in terms of boundary sharpness on ETH3D. It also significantly improves the completeness of weakly textured objects. We also validate the generalization of our method on Tanks\&Temples. | High Fidelity Aggregated Planar Prior Assisted PatchMatch Multi-View Stereo | [
"Jie Liang",
"Rongjie Wang",
"Rui Peng",
"Zhe Zhang",
"Kaiqiang Xiong",
"Ronggang Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=gDXAv76iP2 | @inproceedings{
huang2024remembering,
title={Remembering is Not Applying: Interpretable Knowledge Tracing for Problem-solving Processes},
author={Tao Huang and Xinjia Ou and Yanghuali and Shengze Hu and Jing Geng and Junjie Hu and Zhuoran Xu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=gDXAv76iP2}
} | Knowledge Tracing (KT) is a critical service in distance education, predicting students' future performance based on their responses to learning resources. The reasonable assessment of the knowledge state, along with accurate response prediction, is crucial for KT. However, existing KT methods prioritize fitting results and overlook attention to the problem-solving process. They equate the knowledge students memorize before problem-solving with the knowledge that can be acquired or applied during problem-solving, leading to dramatic fluctuations in knowledge states between mastery and non-mastery, with low interpretability. This paper explores knowledge transformation in problem-solving and proposes an interpretable model, Problem-Solving Knowledge Tracing (PSKT). Specifically, we first present a knowledge-centered problem representation that enhances its expression by adjusting problem variability. Then, we meticulously designed a Sequential Neural Network (SNN) with three stages: (1) Before problem-solving, we model students' personalized problem space and simulate their acquisition of problem-related knowledge through a gating mechanism. (2) During problem-solving, we evaluate knowledge application and calculate response with a four-parameter IRT. (3) After problem-solving, we quantify student knowledge internalization and forgetting using an incremental indicator. The SNN, inspired by problem-solving and constructivist learning theories, is an interpretable model that attributes learner performance to subjective problems (difficulty, discrimination), objective knowledge (knowledge acquisition and application), and behavior (guessing and slipping). Finally, extensive experimental results demonstrate that PSKT has certain advantages in predicting accuracy, assessing knowledge states reasonably, and explaining the learning process. | Remembering is Not Applying: Interpretable Knowledge Tracing for Problem-solving Processes | [
"Tao Huang",
"Xinjia Ou",
"Yanghuali",
"Shengze Hu",
"Jing Geng",
"Junjie Hu",
"Zhuoran Xu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=gBbYMmDOC3 | @inproceedings{
kou2024subjectivealigned,
title={Subjective-Aligned Dataset and Metric for Text-to-Video Quality Assessment},
author={Tengchuan Kou and Xiaohong Liu and Zicheng Zhang and Chunyi Li and Haoning Wu and Xiongkuo Min and Guangtao Zhai and Ning Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=gBbYMmDOC3}
} | With the rapid development of generative models, AI-Generated Content (AIGC) has increased exponentially in daily life. Among them, Text-to-Video (T2V) generation has received widespread attention. Though many T2V models have been released for generating high perceptual quality videos, there is still a lack of methods to evaluate the quality of these videos quantitatively. To solve this issue, we establish the largest-scale Text-to-Video Quality Assessment DataBase (T2VQA-DB) to date. The dataset is composed of 10,000 videos generated by 9 different T2V models, along with each video's corresponding mean opinion score. Based on T2VQA-DB, we propose a novel transformer-based model for subjective-aligned Text-to-Video Quality Assessment (T2VQA). The model extracts features from text-video alignment and video fidelity perspectives, then it leverages the ability of a large language model to give the prediction score. Experimental results show that T2VQA outperforms existing T2V metrics and SOTA video quality assessment models. Quantitative analysis indicates that T2VQA is capable of giving subjective-aligned predictions, validating its effectiveness. The dataset and code will be released upon publication. | Subjective-Aligned Dataset and Metric for Text-to-Video Quality Assessment | [
"Tengchuan Kou",
"Xiaohong Liu",
"Zicheng Zhang",
"Chunyi Li",
"Haoning Wu",
"Xiongkuo Min",
"Guangtao Zhai",
"Ning Liu"
] | Conference | oral | 2403.11956 | [
"https://github.com/qmme/t2vqa"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=gABQQUMmV6 | @inproceedings{
kong2024dualbranch,
title={Dual-Branch Fusion with Style Modulation for Cross-Domain Few-Shot Semantic Segmentation},
author={Qiuyu Kong and Jiangming Chen and Jiang Jie and Zanxi Ruan and KANG Lai},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=gABQQUMmV6}
} | Cross-Domain Few-Shot Semantic Segmentation (CD-FSS) aims to achieve pixel-level segmentation of novel categories across various domains by transferring knowledge from the source domain leveraging limited samples.
The main challenge in CD-FSS is bridging the inter-domain gap and addressing the scarcity of labeled samples in the target domain to enhance both generalization and discriminative abilities.
Current methods usually resort to additional networks and complex strategies to embrace domain variability, which inevitably increases the training costs.
This paper proposes a Dual-Branch Fusion with Style Modulation (DFSM) method to tackle these issues.
We specifically deploy a parameter-free Grouped Style Modulation (GSM) layer that captures and adjusts a wide spectrum of potential feature distribution changes, thus improving the model's solution efficiency.
Additionally, to overcome data limitations and enhance adaptability in the target domain, we develop a Dual-Branch Fusion (DBF) strategy that achieves accurate pixel-level prediction results by combining predicted probability maps through weighted fusion, thereby enhancing the discriminative ability of the model.
We evaluate the proposed method on multiple widely-used benchmark datasets, including FSS-1000, ISIC, Chest X-Ray, and Deepglobe, and demonstrate superior performance compared to state-of-the-art methods in CD-FSS tasks. | Dual-Branch Fusion with Style Modulation for Cross-Domain Few-Shot Semantic Segmentation | [
"Qiuyu Kong",
"Jiangming Chen",
"Jiang Jie",
"Zanxi Ruan",
"KANG Lai"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=g7zkmttvJp | @inproceedings{
yang2024a,
title={A Multilevel Guidance-Exploration Network and Behavior-Scene Matching Method for Human Behavior Anomaly Detection},
author={Guoqing Yang and Zhiming Luo and Jianzhe Gao and Yingxin Lai and Kun Yang and Yifan He and Shaozi Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=g7zkmttvJp}
} | Human behavior anomaly detection aims to identify unusual human actions, playing a crucial role in intelligent surveillance and other areas. The current mainstream methods still adopt reconstruction or future frame prediction techniques. However, reconstructing or predicting low-level pixel features easily enables the network to achieve overly strong generalization ability, allowing anomalies to be reconstructed or predicted as effectively as normal data. Different from their methods, inspired by the Student-Teacher Network, we propose a novel framework called the Multilevel Guidance-Exploration Network (MGENet), which detects anomalies through the difference in high-level representation between the Guidance and Exploration network. Specifically, we first utilize the Normalizing Flow that takes skeletal keypoints as input to guide an RGB encoder, which takes unmasked RGB frames as input, to explore latent motion features. Then, the RGB encoder guides the mask encoder, which takes masked RGB frames as input, to explore the latent appearance feature. Additionally, we design a Behavior-Scene Matching Module (BSMM) to detect scene-related behavioral anomalies. Extensive experiments demonstrate that our proposed method achieves state-of-the-art performance on ShanghaiTech and UBnormal datasets, with AUC of 86.9 % and 74.3 %, respectively. The code is available on GitHub. | A Multilevel Guidance-Exploration Network and Behavior-Scene Matching Method for Human Behavior Anomaly Detection | [
"Guoqing Yang",
"Zhiming Luo",
"Jianzhe Gao",
"Yingxin Lai",
"Kun Yang",
"Yifan He",
"Shaozi Li"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=g6pQLCjg9Z | @inproceedings{
tonini2024algtd,
title={{AL}-{GTD}: Deep Active Learning for Gaze Target Detection},
author={Francesco Tonini and Nicola Dall'Asen and Lorenzo Vaquero and Cigdem Beyan and Elisa Ricci},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=g6pQLCjg9Z}
} | Gaze target detection aims at determining the image location where a person is looking.
While existing studies have made significant progress in this area by regressing accurate gaze heatmaps, these achievements have largely relied on access to extensive labeled datasets, which demands substantial human labor.
In this paper, our goal is to reduce the reliance on the size of labeled training data for gaze target detection. To achieve this, we propose AL-GTD, an innovative approach that integrates supervised and self-supervised losses within a novel sample acquisition function to perform active learning (AL).
Additionally, it utilizes pseudo-labeling to mitigate distribution shifts during the training phase. AL-GTD achieves the best AUC results of all while utilizing only 40-50% of the training data, in contrast to state-of-the-art (SOTA) gaze target detectors, which require the entire training dataset to achieve the same performance.
Importantly, AL-GTD quickly reaches satisfactory performance with 10-20% of the training data, showing the effectiveness of our acquisition function, which is able to acquire the most informative samples.
We provide a comprehensive experimental analysis by adapting several AL methods for the task. AL-GTD outperforms AL competitors, simultaneously exhibiting superior performance compared to SOTA gaze target detectors when all are trained within a low-data regime.
Code is available at https://github.com/francescotonini/al-gtd. | AL-GTD: Deep Active Learning for Gaze Target Detection | [
"Francesco Tonini",
"Nicola Dall'Asen",
"Lorenzo Vaquero",
"Cigdem Beyan",
"Elisa Ricci"
] | Conference | poster | 2409.18561 | [
"https://github.com/francescotonini/al-gtd"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=g4hMFNvhUs | @inproceedings{
ai2024skipvsr,
title={Skip{VSR}: Adaptive Patch Routing for Video Super-Resolution with Inter-Frame Mask},
author={zekun Ai and Xiaotong Luo and Yanyun Qu and Yuan Xie},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=g4hMFNvhUs}
} | Deep neural networks have revealed enormous potential in video super-resolution (VSR), yet the high computational expense limits their deployment on resource-limited devices and in actual scenarios, especially for restoring multiple frames simultaneously. Existing VSR models contain considerable redundant filters, which drag down the inference efficiency. To accelerate the inference of VSR models, we propose a scalable method based on adaptive patch routing to achieve more practical speedup. Specifically, we design a confidence estimator to predict the aggregation performance of each block for adjacent patch information, which learns to dynamically perform block skipping, i.e., choose which basic blocks of a VSR network to execute during inference so as to reduce total computation to the maximum extent without degrading reconstruction accuracy dramatically. However, we observe that the skipping error would be amplified as the hidden states propagate along recurrent networks. To alleviate the issue, we design Temporal feature distillation to guarantee the performance. In essence, this constitutes an adaptive routing scheme for each patch. Extensive experiments demonstrate that our method can not only accelerate inference but also provide strong quantitative and qualitative results with the learned strategies. Built upon a BasicVSR model, our method achieves a speedup of 20% on average, going as high as 50% for some images, while maintaining competitive performance on REDS4. | SkipVSR: Adaptive Patch Routing for Video Super-Resolution with Inter-Frame Mask | [
"zekun Ai",
"Xiaotong Luo",
"Yanyun Qu",
"Yuan Xie"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=g1k07DCgCi | @inproceedings{
wang2024mdr,
title={{MDR}: Multi-stage Decoupled Relational Knowledge Distillation with Adaptive Stage Selection},
author={JiaQi Wang and Lu Lu and Mingmin Chi and Jian Chen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=g1k07DCgCi}
} | The effectiveness of contrastive-learning-based Knowledge Distillation (KD) has sparked renewed interest in relational distillation, but these methods typically focus on angle-wise information from the penultimate layer. We show that exploiting relational information derived from intermediate layers further improves the effectiveness of distillation. We also find that adding distance-wise relational information to contrastive-learning-based methods negatively impacts distillation quality, revealing an implicit contention between angle-wise and distance-wise attributes.
Therefore, we propose a ${\bf{M}}$ulti-stage ${\bf{D}}$ecoupled ${\bf{R}}$elational (MDR) KD framework equipped with an adaptive stage selection to identify the stages that maximize the efficacy of transferring the relational knowledge.
MDR framework decouples angle-wise and distance-wise information to resolve their conflicts while still preserving complete relational knowledge, thereby resulting in an elevated transferring efficiency and distillation quality.
To evaluate the proposed method, we conduct extensive experiments on multiple image benchmarks ($\textit{i.e.}$ CIFAR100, ImageNet and Pascal VOC), covering various tasks
($\textit{i.e.}$ classification, few-shot learning, transfer learning and object detection).
Our method exhibits superior performance under diverse scenarios, surpassing the state of the art by an average improvement of 1.22\% on CIFAR-100 across extensively utilized teacher-student network pairs. | MDR: Multi-stage Decoupled Relational Knowledge Distillation with Adaptive Stage Selection | [
"JiaQi Wang",
"Lu Lu",
"Mingmin Chi",
"Jian Chen"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=fylmVagDag | @inproceedings{
liu2024learning,
title={Learning Exposure Correction in Dynamic Scenes},
author={Jin Liu and Bo Wang and Chuanming Wang and Huiyuan Fu and Huadong Ma},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=fylmVagDag}
} | Exposure correction aims to enhance visual data suffering from improper exposures, which can greatly improve satisfactory visual effects. However, previous methods mainly focus on the image modality, and the video counterpart is less explored in the literature. Directly applying prior image-based methods to videos results in temporal incoherence with low visual quality. Through thorough investigation, we find that the development of relevant communities is limited by the absence of a benchmark dataset. Therefore, in this paper, we construct the first real-world paired video dataset, including both underexposure and overexposure dynamic scenes. To achieve spatial alignment, we utilize two DSLR cameras and a beam splitter to simultaneously capture improper and normal exposure videos. Additionally, we propose an end-to-end Video Exposure Correction Network (VECNet), in which a dual-stream module is designed to deal with both underexposure and overexposure factors, enhancing the illumination based on Retinex theory. Experimental results based on various metrics and user studies demonstrate the significance of our dataset and the effectiveness of our method. The code and dataset will be available soon. | Learning Exposure Correction in Dynamic Scenes | [
"Jin Liu",
"Bo Wang",
"Chuanming Wang",
"Huiyuan Fu",
"Huadong Ma"
] | Conference | oral | 2402.17296 | [
"https://github.com/kravrolens/vecnet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=fuV2VIFNbe | @inproceedings{
qianxinhuang2024similarity,
title={Similarity Preserving Transformer Cross-Modal Hashing for Video-Text Retrieval},
author={qianxinhuang and Siyao Peng and Xiaobo Shen and Yun-Hao Yuan and Shirui Pan},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=fuV2VIFNbe}
} | As social networks grow exponentially, there is an increasing demand for video retrieval using natural language. Cross-modal hashing that encodes multi-modal data using compact hash codes has been widely used in large-scale image-text retrieval, primarily due to its computation and storage efficiency. When applied to video-text retrieval, existing unsupervised cross-modal hashing extracts the frame- or word-level features individually, and thus ignores long-term dependencies. In addition, effective exploitation of multi-modal structure poses a significant challenge due to the intricate nature of video and text. To address the above issues, we propose Similarity Preserving Transformer Cross-Modal Hashing (SPTCH), a new unsupervised deep cross-modal hashing method for video-text retrieval. SPTCH encodes video and text by a bidirectional Transformer encoder that exploits their long-term dependencies. SPTCH constructs a multi-modal collaborative graph to model correlations among multi-modal data, and applies semantic aggregation by employing a Graph Convolutional Network (GCN) on such a graph. SPTCH designs an unsupervised multi-modal contrastive loss and a neighborhood reconstruction loss to effectively exploit inter- and intra-modal similarity structure among videos and texts. The empirical results on three video benchmark datasets demonstrate that the proposed SPTCH generally outperforms the state of the art in video-text retrieval. | Similarity Preserving Transformer Cross-Modal Hashing for Video-Text Retrieval | [
"qianxinhuang",
"Siyao Peng",
"Xiaobo Shen",
"Yun-Hao Yuan",
"Shirui Pan"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=fslSfODqFU | @inproceedings{
zheng2024a,
title={A Picture Is Worth a Graph: A Blueprint Debate Paradigm for Multimodal Reasoning},
author={Changmeng Zheng and DaYong Liang and Wengyu Zhang and Xiaoyong Wei and Tat-Seng Chua and Qing Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=fslSfODqFU}
} | This paper presents a pilot study aimed at introducing multi-agent debate into multimodal reasoning. The study addresses two key challenges: the trivialization of opinions resulting from excessive summarization and the diversion of focus caused by distractor concepts introduced from images. These challenges stem from the inductive (bottom-up) nature of existing debating schemes. To address the issue, we propose a deductive (top-down) debating approach called Blueprint Debate on Graphs (BDoG). In BDoG, debates are confined to a blueprint graph to prevent opinion trivialization through world-level summarization. Moreover, by storing evidence in branches within the graph, BDoG mitigates distractions caused by frequent but irrelevant concepts. Extensive experiments validate that BDoG is able to achieve state-of-the-art results in ScienceQA and MMBench with significant improvements over previous methods. The source code can be accessed at https://github.com/open_upon_acceptance. | A Picture Is Worth a Graph: A Blueprint Debate Paradigm for Multimodal Reasoning | [
"Changmeng Zheng",
"DaYong Liang",
"Wengyu Zhang",
"Xiaoyong Wei",
"Tat-Seng Chua",
"Qing Li"
] | Conference | oral | 2403.14972 | [
"https://github.com/thecharm/bdog"
] | https://huggingface.co/papers/2403.14972 | 0 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=fsahbDPiSe | @inproceedings{
xie2024uncertaintyaware,
title={Uncertainty-Aware Pseudo-Labeling and Dual Graph Driven Network for Incomplete Multi-View Multi-Label Classification},
author={Wulin Xie and Xiaohuan Lu and Yadong Liu and Jiang Long and Bob Zhang and Shuping Zhao and Jie Wen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=fsahbDPiSe}
} | Multi-view multi-label classification has recently received extensive attention due to its wide-ranging applications across various fields, such as medical imaging and bioinformatics. However, views and labels are usually incomplete in practical scenarios, attributed to the uncertainties in data collection and manual labeling. To cope with this issue, we propose an uncertainty-aware pseudo-labeling and dual graph driven network (UPDGD-Net), which can fully leverage the supervised information of the available labels and feature information of available views. Different from the existing works, we leverage the label matrix to impose dual graph constraints on the embedded features of both view-level and label-level, which enables the method to maintain the inherent structure of the real data during the feature extraction stage. Furthermore, our network incorporates an uncertainty-aware pseudo-labeling strategy to fill the missing labels, which not only addresses the learning issue of incomplete multi-labels but also enables the method to explore more supervised information to guide the network training. Extensive experiments on five datasets demonstrate that our method outperforms other state-of-the-art methods. | Uncertainty-Aware Pseudo-Labeling and Dual Graph Driven Network for Incomplete Multi-View Multi-Label Classification | [
"Wulin Xie",
"Xiaohuan Lu",
"Yadong Liu",
"Jiang Long",
"Bob Zhang",
"Shuping Zhao",
"Jie Wen"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=fodBlwAFMZ | @inproceedings{
yang2024feddeo,
title={Fed{DEO}: Description-Enhanced One-Shot Federated Learning with Diffusion Models},
author={Mingzhao Yang and Shangchao Su and Bin Li and Xiangyang Xue},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=fodBlwAFMZ}
} | In recent years, the attention towards One-Shot Federated Learning (OSFL) has been driven by its capacity to minimize communication. With the development of the diffusion model (DM), several methods employ the DM for OSFL, utilizing model parameters, image features, or textual prompts as mediums to transfer the local client knowledge to the server. However, these mediums often require public datasets or a uniform feature extractor, significantly limiting their practicality. In this paper, we propose FedDEO, a Description-Enhanced One-Shot Federated Learning Method with DMs, offering a novel exploration of utilizing the DM in OSFL. The core idea of our method involves training local descriptions on the clients, serving as the medium to transfer the knowledge of the distributed clients to the server. Firstly, we train local descriptions on the client data to capture the characteristics of client distributions, which are then uploaded to the server. On the server, the descriptions are used as conditions to guide the DM in generating synthetic datasets that comply with the distributions of various clients, enabling the training of the aggregated model. Theoretical analyses and sufficient quantitative and visualization experiments on three large-scale real-world datasets demonstrate that through the training of local descriptions, the server is capable of generating synthetic datasets with high quality and diversity. Consequently, with advantages in communication and privacy protection, the aggregated model outperforms the compared FL or diffusion-based OSFL methods and, on some clients, outperforms the performance ceiling of centralized training. | FedDEO: Description-Enhanced One-Shot Federated Learning with Diffusion Models | [
"Mingzhao Yang",
"Shangchao Su",
"Bin Li",
"Xiangyang Xue"
] | Conference | poster | 2407.19953 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=fmf8605lKj | @inproceedings{
ye2024query,
title={Query Augmentation with Brain Signals},
author={Ziyi Ye and Jingtao Zhan and Qingyao Ai and Yiqun LIU and Maarten de Rijke and Christina Lioma and Tuukka Ruotsalo},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=fmf8605lKj}
} | In the information retrieval scenario, query augmentation is an essential technique to refine semantically imprecise queries to align closely with users' actual information needs.
Traditional methods typically rely on extracting signals from user interactions such as browsing or clicking behaviors to augment the queries, which may not accurately reflect the actual user intent due to inherent noise and the dependency on initial user interactions.
To overcome these limitations, we introduce Brain-Aug, a novel approach that decodes semantic information directly from brain signals of users to augment query representation.
Brain-Aug explores three-fold techniques:
(1) Structurally, an adapter network is utilized to project brain signals into the embedding space of a language model, allowing query augmentation conditioned on both the users' initial query and their brain signals.
(2) During training, we use a next token prediction task for query augmentation and adopt prompt tuning to efficiently train the brain adapter.
(3) At the inference stage, a ranking-oriented decoding strategy is implemented, enabling Brain-Aug to generate augmentations that improve ranking performance.
We evaluate our approach on multiple functional magnetic resonance imaging (fMRI) datasets, demonstrating that Brain-Aug not only produces semantically richer queries but also significantly improves document ranking accuracy, particularly for ambiguous queries.
These results validate the effectiveness of our proposed Brain-Aug approach, and reveal the great potential of leveraging internal cognitive states to understand and augment text-based queries. | Query Augmentation with Brain Signals | [
"Ziyi Ye",
"Jingtao Zhan",
"Qingyao Ai",
"Yiqun LIU",
"Maarten de Rijke",
"Christina Lioma",
"Tuukka Ruotsalo"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=fheUSqLh7J | @inproceedings{
zhang2024reversecomplete,
title={Reverse2Complete: Unpaired Multimodal Point Cloud Completion via Guided Diffusion},
author={Wenxiao Zhang and Hossein Rahmani and Xun Yang and Jun Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=fheUSqLh7J}
} | Unpaired point cloud completion involves filling in missing parts of a point cloud without requiring partial-complete correspondence. Meanwhile, since point cloud completion is an ill-posed problem, there are multiple ways to generate the missing parts. Existing GAN-based methods transform partial shape encoding into a complete one in the low-dimensional latent feature space. However, “mode collapse” often occurs, where only a subset of the shapes is represented in the low-dimensional space, reducing the diversity of the generated shapes. In this paper, we propose a novel unpaired multimodal shape completion approach that directly operates on point coordinate space. We achieve unpaired completion via an unconditional diffusion model trained on complete data by “hijacking” the generative process. We further augment the diffusion model by introducing two guidance mechanisms to help map the partial point cloud to the complete one while preserving its original structure. We conduct extensive evaluations of our approach, which show that our method generates shapes that are more diverse and better preserve the original structures compared to alternative methods. | Reverse2Complete: Unpaired Multimodal Point Cloud Completion via Guided Diffusion | [
"Wenxiao Zhang",
"Hossein Rahmani",
"Xun Yang",
"Jun Liu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=fgy59cM8X6 | @inproceedings{
zhao2024reportconcept,
title={Report-Concept Textual-Prompt Learning for Enhancing X-ray Diagnosis},
author={Xiongjun Zhao and Zheng-Yu Liu and Fen Liu and Guanting Li and Yutao Dou and Shaoliang Peng},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=fgy59cM8X6}
} | Despite significant advances in image-text medical visual language modeling, the high cost of fine-grained annotation of images to align radiology reports has led current approaches to focus primarily on semantic alignment between the image and the full report, neglecting the critical diagnostic information contained in the text. This is insufficient in medical scenarios demanding high explainability. To address this problem, in this paper, we introduce radiology reports as images in prompt learning. Specifically, we extract key clinical concepts, lesion locations, and positive labels from easily accessible radiology reports and combine them with an external medical knowledge base to form fine-grained self-supervised signals. Moreover, we propose a novel Report-Concept Textual-Prompt Learning (RC-TPL), which aligns radiology reports at multiple levels. In the inference phase, report-level and concept-level prompts provide rich global and local semantic understanding for X-ray images. Extensive experiments on X-ray image datasets demonstrate the superior performance of our approach with respect to various baselines, especially in the presence of scarce imaging data. Our study not only significantly improves the accuracy of data-constrained medical X-ray diagnosis, but also demonstrates how the integration of domain-specific conceptual knowledge can enhance the explainability of medical image analysis. The implementation code will be publicly available. | Report-Concept Textual-Prompt Learning for Enhancing X-ray Diagnosis | [
"Xiongjun Zhao",
"Zheng-Yu Liu",
"Fen Liu",
"Guanting Li",
"Yutao Dou",
"Shaoliang Peng"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=fdsNpmPJ27 | @inproceedings{
sun2024embodied,
title={Embodied Laser Attack:Leveraging Scene Priors to Achieve Agent-based Robust Non-contact Attacks},
author={Yitong Sun and Yao Huang and Xingxing Wei},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=fdsNpmPJ27}
} | As physical adversarial attacks become extensively applied in unearthing the potential risks of security-critical scenarios, especially dynamic ones, their vulnerability to environmental variations has also been brought to light. The non-robust nature of physical adversarial attack methods consequently leads to unstable performance.
Although methods such as Expectation over Transformation (EOT) have enhanced the robustness of traditional contact attacks like adversarial patches, they fall short in practicality and concealment within dynamic environments such as traffic scenarios. Meanwhile, non-contact laser attacks, while offering enhanced adaptability, face constraints due to a limited optimization space for their attributes, rendering EOT less effective. This limitation underscores the necessity for developing a new strategy to augment the robustness of such practices. To address these issues, this paper introduces the Embodied Laser Attack (ELA), a novel framework that leverages the embodied intelligence paradigm of Perception-Decision-Control to dynamically tailor non-contact laser attacks. For the perception module, given the challenge of simulating the victim's view by full-image transformation, ELA has innovatively developed a local perspective transformation network, based on the intrinsic prior knowledge of traffic scenes and enables effective and efficient estimation. For the decision and control module, ELA trains an attack agent with data-driven reinforcement learning instead of adopting time-consuming heuristic algorithms, making it capable of instantaneously determining a valid attack strategy with the perceived information by well-designed rewards, which is then conducted by a controllable laser emitter. Experimentally, we apply our framework to diverse traffic scenarios both in the digital and physical world, verifying the effectiveness of our method under dynamic successive scenes. | Embodied Laser Attack:Leveraging Scene Priors to Achieve Agent-based Robust Non-contact Attacks | [
"Yitong Sun",
"Yao Huang",
"Xingxing Wei"
] | Conference | poster | 2312.09554 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=fdqTjYDN6T | @inproceedings{
xia2024advancing,
title={Advancing Generalized Deepfake Detector with Forgery Perception Guidance},
author={Ruiyang Xia and Dawei Zhou and Decheng Liu and Lin Yuan and Shuodi Wang and Jie Li and Nannan Wang and Xinbo Gao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=fdqTjYDN6T}
} | One of the serious impacts brought by artificial intelligence is the abuse of deepfake techniques. Despite the proliferation of deepfake detection methods aimed at safeguarding the authenticity of media across the Internet, they mainly consider the improvement of detector architecture or the synthesis of forgery samples. The forgery perceptions, including the feature responses and prediction scores for forgery samples, have not been well considered. As a result, the generalization across multiple deepfake techniques always comes with complicated detector structures and expensive training costs. In this paper, we shift the focus to real-time perception analysis in the training process and generalize deepfake detectors through an efficient method dubbed Forgery Perception Guidance (FPG). In particular, after investigating the deficiencies of forgery perceptions, FPG adopts a sample refinement strategy to pertinently train the detector, thereby elevating the generalization efficiently. Moreover, FPG introduces more sample information as explicit optimizations, which makes the detector further adapt to the sample diversities. Experiments demonstrate that FPG improves the generality of deepfake detectors with small training costs, minor detector modifications, and the acquisition of real data only. In particular, our approach not only outperforms the state-of-the-art on both the cross-dataset and cross-manipulation evaluations but also surpasses the baseline that needs more than 3$\times$ training time. Code is available in the supplementary material. | Advancing Generalized Deepfake Detector with Forgery Perception Guidance | [
"Ruiyang Xia",
"Dawei Zhou",
"Decheng Liu",
"Lin Yuan",
"Shuodi Wang",
"Jie Li",
"Nannan Wang",
"Xinbo Gao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=fdAIgCCHXz | @inproceedings{
huang2024aesexpert,
title={AesExpert: Towards Multi-modality Foundation Model for Image Aesthetics Perception},
author={Yipo Huang and Xiangfei Sheng and Zhichao Yang and Quan Yuan and Zhichao Duan and Pengfei Chen and Leida Li and Weisi Lin and Guangming Shi},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=fdAIgCCHXz}
} | The highly abstract nature of image aesthetics perception (IAP) poses a significant challenge for current multimodal large language models (MLLMs). The lack of human-annotated multi-modality aesthetic data further exacerbates this dilemma, resulting in MLLMs falling short of aesthetics perception capabilities. To address the above challenge, we first introduce a comprehensively annotated Aesthetic Multi-Modality Instruction Tuning (AesMMIT) dataset, which serves as the footstone for building multi-modality aesthetics foundation models. Specifically, to align MLLMs with human aesthetics perception, we construct a corpus-rich aesthetic critique database with 21,904 diverse-sourced images and 88K human natural language feedbacks, which are collected via progressive questions, ranging from coarse-grained aesthetic grades to fine-grained aesthetic descriptions. To ensure that MLLMs can handle diverse queries, we further prompt GPT to refine the aesthetic critiques and assemble the large-scale aesthetic instruction tuning dataset, i.e. AesMMIT, which consists of 409K multi-typed instructions to activate stronger aesthetic capabilities. Based on the AesMMIT database, we fine-tune the open-sourced general foundation models, achieving multi-modality Aesthetic Expert models, dubbed AesExpert. Extensive experiments demonstrate that the proposed AesExpert models deliver significantly better aesthetic perception performances than the state-of-the-art MLLMs, including the most advanced GPT-4V and Gemini-Pro-Vision. The dataset, code and models will be made publicly available. | AesExpert: Towards Multi-modality Foundation Model for Image Aesthetics Perception | [
"Yipo Huang",
"Xiangfei Sheng",
"Zhichao Yang",
"Quan Yuan",
"Zhichao Duan",
"Pengfei Chen",
"Leida Li",
"Weisi Lin",
"Guangming Shi"
] | Conference | poster | 2404.09624 | [
"https://github.com/yipoh/aesexpert"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=fSmoBjDb2W | @inproceedings{
hou2024dig,
title={Dig into Detailed Structures: Key Context Encoding and Semantic-based Decoding for Point Cloud Completion},
author={Hongye Hou and Xuehao Gao and Zhan Liu and Yang Yang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=fSmoBjDb2W}
} | Recovering the complete shape of a 3D object from limited viewpoints plays an important role in 3D vision. Encouraged by the effectiveness of feature extraction using deep neural networks, recent point cloud completion methods prefer an encoding-decoding architecture for generating the global structure and local geometry from a set of input point proxies. In this paper, we introduce an innovative completion method aimed at uncovering structural details from input point clouds and maximizing their utility. Specifically, we improve both Encoding and Decoding for this task: (1) Key Context Fusion Encoding extracts and aggregates homologous key context by adaptively increasing the sampling bias towards salient structure and special contour points that are more representative of object structure information. (2) Semantic-based Decoding introduces a semantic EdgeConv module to prompt the next Transformer decoder, which effectively learns and generates local geometry with semantic correlations from non-nearest neighbors. The experiments are conducted on several 3D point cloud and 2.5D depth image datasets. Both qualitative and quantitative evaluations demonstrate that our method outperforms previous state-of-the-art methods. | Dig into Detailed Structures: Key Context Encoding and Semantic-based Decoding for Point Cloud Completion | [
"Hongye Hou",
"Xuehao Gao",
"Zhan Liu",
"Yang Yang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |