bibtex_url | proceedings | bibtext | abstract | title | authors | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | old_Models | old_Datasets | old_Spaces | paper_page_exists_pre_conf |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=zpo9TpUXuU | @inproceedings{
liu2024physreaction,
title={PhysReaction: Physically Plausible Real-Time Humanoid Reaction Synthesis via Forward Dynamics Guided 4D Imitation},
author={Yunze Liu and Changxi Chen and Chenjing Ding and Li Yi},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=zpo9TpUXuU}
} | Humanoid Reaction Synthesis is pivotal for creating highly interactive and empathetic robots that can seamlessly integrate into human environments, enhancing the way we live, work, and communicate. However, it is difficult to learn the diverse interaction patterns of multiple humans and generate physically plausible reactions. Currently, the predominant approaches involve kinematics-based and physics-based methods. Kinematics-based methods lack physical priors, limiting their capacity to generate convincingly realistic motions. Physics-based methods often rely on kinematics-based methods to generate reference states and struggle with the challenges posed by kinematic noise during action execution. Moreover, these methods are unable to achieve real-time inference, constrained by their reliance on diffusion models. In this work, we propose a Forward Dynamics Guided 4D Imitation method to generate physically plausible human-like reactions. The learned policy is capable of generating physically plausible and human-like reactions in real time, significantly improving inference speed (×33) and reaction quality compared with existing methods. Our experiments on the InterHuman and Chi3D datasets, along with ablation studies, demonstrate the effectiveness of our approach. More visualizations are available in the supplementary materials. | PhysReaction: Physically Plausible Real-Time Humanoid Reaction Synthesis via Forward Dynamics Guided 4D Imitation | [
"Yunze Liu",
"Changxi Chen",
"Chenjing Ding",
"Li Yi"
] | Conference | oral | 2404.01081 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=zoMT1Czqv3 | @inproceedings{
he2024domain,
title={Domain Generalization-Aware Uncertainty Introspective Learning for 3D Point Clouds Segmentation},
author={Pei He and Licheng Jiao and Lingling Li and Xu Liu and Fang Liu and Wenping Ma and Shuyuan Yang and Ronghua Shang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=zoMT1Czqv3}
} | Domain generalization 3D segmentation aims to learn the point clouds with unknown distributions. Feature augmentation has been proven to be effective for domain generalization. However, each point of the 3D segmentation scene contains uncertainty in the target domain, which affects model generalization. This paper proposes the Domain Generalization-Aware Uncertainty Introspective Learning (DGUIL) method, including Potential Uncertainty Modeling (PUM) and Momentum Introspective Learning (MIL), to deal with the point uncertainty in domain shift. Specifically, PUM explores the underlying uncertain point cloud features and generates the different distributions for each point. The PUM enhances the point features over an adaptive range, which provides various information for simulating the distribution of the target domain. Then, MIL is designed to learn generalized feature representation in uncertain distributions. The MIL utilizes uncertainty correlation representation to measure the predicted divergence of knowledge accumulation, which learns to carefully judge and understand divergence through uncertainty introspection loss. Finally, extensive experiments verify the advantages of the proposed method over current state-of-the-art methods. The code will be available. | Domain Generalization-Aware Uncertainty Introspective Learning for 3D Point Clouds Segmentation | [
"Pei He",
"Licheng Jiao",
"Lingling Li",
"Xu Liu",
"Fang Liu",
"Wenping Ma",
"Shuyuan Yang",
"Ronghua Shang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=zctvc3QQr2 | @inproceedings{
ma2024colore,
title={Color4E: Event Demosaicing for Full-color Event Guided Image Deblurring},
author={Yi Ma and Peiqi Duan and Yuchen Hong and Chu Zhou and Yu Zhang and Jimmy Ren and Boxin Shi},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=zctvc3QQr2}
} | Neuromorphic event sensors are novel visual cameras that feature high-speed illumination-variation sensing and have found widespread application in guiding frame-based imaging enhancement. This paper focuses on color restoration in the event-guided image deblurring task: we fuse blurry images with mosaic color events instead of mono events to avoid artifacts such as color bleeding. The challenges associated with this approach include demosaicing color events for reconstructing full-resolution sampled signals and fusing bimodal signals to achieve image deblurring. To meet these challenges, we propose a novel network called Color4E to enhance the color restoration quality for the image deblurring task. Color4E leverages an event demosaicing module to upsample the spatial resolution of mosaic color events and a cross-encoding image deblurring module for fusing bimodal signals; a refinement module is designed to fuse full-color events and refine the initial deblurred images. Furthermore, to avoid the real-simulated gap of events, we implement a display-filter-camera system that enables mosaic and full-color event data to be captured synchronously, allowing us to collect a real-captured dataset used for network training and validation. The results on the public dataset and our collected dataset show that Color4E enables high-quality event-based image deblurring compared to state-of-the-art methods. | Color4E: Event Demosaicing for Full-color Event Guided Image Deblurring | [
"Yi Ma",
"Peiqi Duan",
"Yuchen Hong",
"Chu Zhou",
"Yu Zhang",
"Jimmy Ren",
"Boxin Shi"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=zXpQ50fcOb | @inproceedings{
zhu2024attributedriven,
title={Attribute-Driven Multimodal Hierarchical Prompts for Image Aesthetic Quality Assessment},
author={Hancheng Zhu and Ju Shi and Zhiwen Shao and Rui Yao and Yong Zhou and Jiaqi Zhao and Leida Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=zXpQ50fcOb}
} | Image Aesthetic Quality Assessment (IAQA) aims to simulate users' visual perception to judge the aesthetic quality of images. In social media, users' aesthetic experiences are often reflected in their textual comments regarding the aesthetic attributes of images. To fully explore the attribute information perceived by users for evaluating image aesthetic quality, this paper proposes an image aesthetic quality assessment method based on attribute-driven multimodal hierarchical prompts. Unlike existing IAQA methods that utilize multimodal pre-training or straightforward prompts for model learning, the proposed method leverages attribute comments and quality-level text templates to hierarchically learn the aesthetic attributes and quality of images. Specifically, we first leverage users' aesthetic attribute comments to perform prompt learning on images. The learned attribute-driven multimodal features can comprehensively capture the semantic information of image aesthetic attributes perceived by users. Then, we construct text templates for different aesthetic quality levels to further facilitate prompt learning through semantic information related to the aesthetic quality of images. The proposed method can explicitly simulate users' aesthetic judgment of images to obtain more precise aesthetic quality. Experimental results demonstrate that the proposed IAQA method based on hierarchical prompts outperforms existing methods significantly on multiple IAQA databases. Our source code is provided in the supplementary material, and we will release all source code along with this paper. | Attribute-Driven Multimodal Hierarchical Prompts for Image Aesthetic Quality Assessment | [
"Hancheng Zhu",
"Ju Shi",
"Zhiwen Shao",
"Rui Yao",
"Yong Zhou",
"Jiaqi Zhao",
"Leida Li"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=zVgZfHRM3g | @inproceedings{
xiao2024asymmetric,
title={Asymmetric Event-Guided Video Super-Resolution},
author={Zeyu Xiao and Dachun Kai and Yueyi Zhang and Xiaoyan Sun and Zhiwei Xiong},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=zVgZfHRM3g}
} | Event cameras are novel bio-inspired cameras that record asynchronous events with high temporal resolution and dynamic range. Leveraging the auxiliary temporal information recorded by event cameras holds great promise for the task of video super-resolution (VSR). However, existing event-guided VSR methods assume that the event and RGB cameras are strictly calibrated (e.g., pixel-level sensor designs in DAVIS 240/346). This assumption proves limiting in emerging high-resolution devices, such as dual-lens smartphones and unmanned aerial vehicles, where such precise calibration is typically unavailable. To unlock more event-guided application scenarios, we propose to perform the task of asymmetric event-guided VSR for the first time, and we propose an Asymmetric Event-guided VSR Network (AsEVSRN) for this new task. AsEVSRN incorporates two specialized designs for leveraging the asymmetric event stream in VSR. Firstly, the content hallucination module dynamically enhances event and RGB information by exploiting their complementary nature, thereby adaptively boosting representational capacity. Secondly, the event-enhanced bidirectional recurrent cells align and propagate temporal features fused with features from content-hallucinated frames. Within the bidirectional recurrent cells, event-enhanced flow is employed for simultaneous utilization and fusion of temporal information at both the feature and pixel levels. Comprehensive experimental results affirm that our method consistently produces superior results both quantitatively and qualitatively. Code will be released. | Asymmetric Event-Guided Video Super-Resolution | [
"Zeyu Xiao",
"Dachun Kai",
"Yueyi Zhang",
"Xiaoyan Sun",
"Zhiwei Xiong"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=zTLZvVdgUt | @inproceedings{
du2024dprae,
title={{DP}-{RAE}: A Dual-Phase Merging Reversible Adversarial Example for Image Privacy Protection},
author={Xia Du and Jiajie Zhu and Jizhe Zhou and Chi-Man Pun and Qizhen Xu and Xiaoyuan Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=zTLZvVdgUt}
} | In digital security, Reversible Adversarial Examples (RAE) blend adversarial attacks with Reversible Data Hiding (RDH) within images to thwart unauthorized access. Traditional RAE methods, however, compromise attack efficiency for the sake of perturbation concealment, diminishing the protective capacity of valuable perturbations and limiting applications to white-box scenarios. This paper proposes a novel Dual-Phase merging Reversible Adversarial Example (DP-RAE) generation framework, combining a heuristic black-box attack and RDH with Grayscale Invariance (RDH-GI) technology. This dual strategy not only evaluates and harnesses the adversarial potential of past perturbations more effectively but also guarantees flawless embedding of perturbation information and complete recovery of the original image. Experimental validation reveals our method's superiority, securing an impressive 96.9% success rate and 100% recovery rate in compromising black-box models. In particular, it achieved a 90% misdirection rate against commercial models under a constrained number of queries. This marks the first successful attempt at targeted black-box reversible adversarial attacks for commercial recognition models. This achievement highlights our framework's capability to enhance security measures without sacrificing attack performance. Moreover, our attack framework is flexible, allowing the interchangeable use of different attack and RDH modules to meet advanced technological requirements. | DP-RAE: A Dual-Phase Merging Reversible Adversarial Example for Image Privacy Protection | [
"Xia Du",
"Jiajie Zhu",
"Jizhe Zhou",
"Chi-Man Pun",
"Qizhen Xu",
"Xiaoyuan Liu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=zQvFY3Mlrk | @inproceedings{
pan2024modelbased,
title={Model-Based Non-Independent Distortion Cost Design for Effective {JPEG} Steganography},
author={Yuanfeng Pan and Wenkang Su and Jiangqun Ni and Qingliang Liu and Yulin Zhang and Donghua Jiang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=zQvFY3Mlrk}
} | Recent achievements have shown that model-based steganographic schemes hold promise for better security than heuristic-based ones, as they can provide theoretical guarantees on secure steganography under a given statistical model. However, it remains a challenge to exploit the correlations between DCT coefficients for secure steganography in practical scenarios where only a single compressed JPEG image is available. To cope with this, we propose a novel model-based steganographic scheme using the Conditional Random Field (CRF) model with four-element cross-neighborhood to capture the dependencies among DCT coefficients for JPEG steganography with symmetric embedding. Specifically, the proposed CRF model is characterized by the delicately designed energy function, which is defined as the weighted sum of a series of unary and pairwise potentials, where the potentials associated with the statistical detectability of steganography are formulated as the KL divergence between the statistical distributions of cover and stego. By optimizing the constructed energy function with the given payload constraint, the non-independent distortion cost corresponding to the least detectability can be accordingly obtained. Extensive experimental results validate the effectiveness of our proposed scheme, especially outperforming the previous independent art J-MiPOD. | Model-Based Non-Independent Distortion Cost Design for Effective JPEG Steganography | [
"Yuanfeng Pan",
"Wenkang Su",
"Jiangqun Ni",
"Qingliang Liu",
"Yulin Zhang",
"Donghua Jiang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=zGoCaP7NyR | @inproceedings{
yue2024mmal,
title={{MMAL}: Multi-Modal Analytic Learning for Exemplar-Free Audio-Visual Class Incremental Tasks},
author={Xianghu Yue and Xueyi Zhang and Yiming Chen and Chengwei Zhang and Mingrui Lao and Huiping Zhuang and Xinyuan Qian and Haizhou Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=zGoCaP7NyR}
} | Class-incremental learning poses a significant challenge under an exemplar-free constraint, leading to catastrophic forgetting and sub-par incremental accuracy. Previous attempts have focused primarily on single-modality tasks, such as image classification or audio event classification. However, in the context of Audio-Visual Class-Incremental Learning (AVCIL), the effective integration and utilization of heterogeneous modalities, with their complementary and enhancing characteristics, remains largely unexplored. To bridge this gap, we propose the Multi-Modal Analytic Learning (MMAL) framework, an exemplar-free solution for AVCIL that employs a closed-form, linear approach. To be specific, MMAL introduces a modality fusion module that re-formulates the AVCIL problem through a Recursive Least-Square (RLS) perspective. Complementing this, a Modality-Specific Knowledge Compensation (MSKC) module is designed to further alleviate the under-fitting limitation intrinsic to analytic learning by harnessing individual knowledge from audio and visual modality in tandem. Comprehensive experimental comparisons with existing methods show that our proposed MMAL demonstrates superior performance with the accuracy of 76.71%, 78.98% and 76.19% on AVE, Kinetics-Sounds and VGGSounds100 datasets, respectively, setting new state-of-the-art AVCIL performance. Notably, compared to those memory-based methods, our MMAL, being an exemplar-free approach, provides good data privacy and can better leverage multi-modal information for improved incremental accuracy. | MMAL: Multi-Modal Analytic Learning for Exemplar-Free Audio-Visual Class Incremental Tasks | [
"Xianghu Yue",
"Xueyi Zhang",
"Yiming Chen",
"Chengwei Zhang",
"Mingrui Lao",
"Huiping Zhuang",
"Xinyuan Qian",
"Haizhou Li"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=zFtyfdfNky | @inproceedings{
xu2024generating,
title={Generating Multimodal Metaphorical Features for Meme Understanding},
author={Bo Xu and Junzhe Zheng and Jiayuan He and Yuxuan Sun and Hongfei Lin and Liang Zhao and Feng Xia},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=zFtyfdfNky}
} | Understanding a meme is a challenging task, due to the metaphorical information contained in the meme that requires intricate interpretation to grasp its intended meaning fully. In previous works, attempts have been made to facilitate computational understanding of memes through introducing human-annotated metaphors as extra input features into machine learning models. However, these approaches mainly focus on formulating linguistic representation of a metaphor (extracted from the texts appearing in memes), while ignoring the connection between the metaphor and corresponding visual features (e.g., objects in meme images). In this paper, we argue that a more comprehensive understanding of memes can only be achieved through a joint modelling of both visual and linguistic features of memes. To this end, we propose an approach to generate Multimodal Metaphorical feature for Meme Classification, named MMMC. MMMC derives visual characteristics from linguistic attributes of metaphorical concepts, which more effectively convey the underlying metaphorical concept, leveraging a text-conditioned generative adversarial network. The linguistic and visual features are then integrated into a set of multimodal metaphorical features for classification purpose. We perform extensive experiments on a benchmark metaphorical meme dataset, MET-Meme. Experimental results show that MMMC significantly outperforms existing baselines on the task of emotion classification and intention detection. Our code and dataset are available at https://anonymous.4open.science/r/MMMC-C37B. | Generating Multimodal Metaphorical Features for Meme Understanding | [
"Bo Xu",
"Junzhe Zheng",
"Jiayuan He",
"Yuxuan Sun",
"Hongfei Lin",
"Liang Zhao",
"Feng Xia"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=zD9m7YE8Gj | @inproceedings{
wang2024sampling,
title={Sampling to Distill: Knowledge Transfer from Open-World Data},
author={Yuzheng Wang and Zhaoyu Chen and Jie Zhang and Dingkang Yang and Zuhao Ge and Yang Liu and Siao Liu and Yunquan Sun and Wenqiang Zhang and Lizhe Qi},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=zD9m7YE8Gj}
} | Data-Free Knowledge Distillation (DFKD) is a novel task that aims to train high-performance student models using only the pre-trained teacher network, without the original training data. Most existing DFKD methods rely heavily on additional generation modules to synthesize the substitution data, resulting in high computational costs and ignoring the massive amounts of easily accessible, low-cost, unlabeled open-world data. Meanwhile, existing methods ignore the domain shift issue between the substitution data and the original data, meaning that knowledge from teachers is not always trustworthy and structured knowledge from data becomes a crucial supplement. To tackle the issue, we propose a novel Open-world Data Sampling Distillation (ODSD) method for the DFKD task without the redundant generation process. First, we try to sample open-world data close to the original data's distribution by an adaptive sampling module and introduce a low-noise representation to alleviate the domain shift issue. Then, we build structured relationships of multiple data examples to exploit data knowledge through the student model itself and the teacher's structured representation. Extensive experiments on CIFAR-10, CIFAR-100, NYUv2, and ImageNet show that our ODSD method achieves state-of-the-art performance with lower FLOPs and parameters. In particular, we improve accuracy by 1.50%-9.59% on the ImageNet dataset and avoid training a separate generator for each class. | Sampling to Distill: Knowledge Transfer from Open-World Data | [
"Yuzheng Wang",
"Zhaoyu Chen",
"Jie Zhang",
"Dingkang Yang",
"Zuhao Ge",
"Yang Liu",
"Siao Liu",
"Yunquan Sun",
"Wenqiang Zhang",
"Lizhe Qi"
] | Conference | poster | 2307.16601 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=z9nEV02Ujx | @inproceedings{
zhang2024geometryguided,
title={Geometry-Guided Diffusion Model with Masked Transformer for Robust Multi-View 3D Human Pose Estimation},
author={Xinyi Zhang and Qinpeng Cui and Qiqi Bao and Wenming Yang and Qingmin Liao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=z9nEV02Ujx}
} | Recent research on Diffusion Models and Transformers has brought significant advancements to 3D Human Pose Estimation (HPE). Nonetheless, existing methods often fail to concurrently address the issues of accuracy and generalization. In this paper, we propose a **G**eometry-guided D**if**fusion Model with Masked Trans**former** (Masked Gifformer) for robust multi-view 3D HPE. Within the framework of the diffusion model, a hierarchical multi-view transformer-based denoiser is exploited to fit the 3D pose distribution by systematically integrating joint and view information. To address the long-standing problem of poor generalization, we introduce a fully random mask mechanism without any additional learnable modules or parameters. Furthermore, we incorporate geometric guidance into the diffusion model to enhance the accuracy of the model. This is achieved by optimizing the sampling process to minimize reprojection errors through modeling a conditional guidance distribution. Extensive experiments on two benchmarks demonstrate that Masked Gifformer effectively achieves a trade-off between accuracy and generalization. Specifically, our method outperforms other probabilistic methods by $\textgreater 40\\%$ and achieves comparable results with state-of-the-art deterministic methods. In addition, our method exhibits robustness to varying camera numbers, spatial arrangements, and datasets. | Geometry-Guided Diffusion Model with Masked Transformer for Robust Multi-View 3D Human Pose Estimation | [
"Xinyi Zhang",
"Qinpeng Cui",
"Qiqi Bao",
"Wenming Yang",
"Qingmin Liao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=z8IvMe8gZI | @inproceedings{
cao2024adafpp,
title={Ada{FPP}: Adapt-Focused Bi-Propagating Prototype Learning for Panoramic Activity Recognition},
author={Meiqi Cao and Rui Yan and Xiangbo Shu and Guangzhao Dai and Yazhou Yao and Guosen Xie},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=z8IvMe8gZI}
} | Panoramic Activity Recognition (PAR) aims to identify multi-granularity behaviors performed by multiple persons in panoramic scenes, including individual activities, group activities, and global activities. Previous methods 1) heavily rely on manually annotated detection boxes in training and inference, hindering further practical deployment; or 2) directly employ normal detectors to detect multiple persons with varying size and spatial occlusion in panoramic scenes, blocking the performance gain of PAR. To this end, we consider learning a detector adapting to varying-size occluded persons, which is optimized along with the recognition module in an all-in-one framework. Therefore, we propose a novel Adapt-Focused bi-Propagating Prototype learning (AdaFPP) framework to jointly recognize individual, group, and global activities in panoramic activity scenes by learning an adapt-focused detector and multi-granularity prototypes as the pretext tasks in an end-to-end way. Specifically, to accommodate the varying sizes and spatial occlusion of multiple persons in crowded panoramic scenes, we introduce a panoramic adapt-focuser, achieving the size-adapting detection of individuals by comprehensively selecting and performing fine-grained detections on object-dense sub-regions identified through original detections. In addition, to mitigate information loss due to inaccurate individual localizations, we introduce a bi-propagation prototyper that promotes closed-loop interaction and informative consistency across different granularities by facilitating bidirectional information propagation among the individual, group, and global levels. Extensive experiments demonstrate the significant performance of AdaFPP and emphasize its powerful applicability for PAR. | AdaFPP: Adapt-Focused Bi-Propagating Prototype Learning for Panoramic Activity Recognition | [
"Meiqi Cao",
"Rui Yan",
"Xiangbo Shu",
"Guangzhao Dai",
"Yazhou Yao",
"Guosen Xie"
] | Conference | poster | 2405.02538 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=z2aWdGeoSL | @inproceedings{
wang2024partially,
title={Partially Aligned Cross-modal Retrieval via Optimal Transport-based Prototype Alignment Learning},
author={Junsheng Wang and Tiantian Gong and Yan Yan},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=z2aWdGeoSL}
} | Supervised cross-modal retrieval (CMR) achieves excellent performance thanks to the semantic information provided by its labels, which helps to establish semantic correlations between samples from different modalities. However, in real-world scenarios, there often exists a large amount of unlabeled and unpaired multimodal training data, rendering existing methods unfeasible. To address this issue, we
propose a novel partially aligned cross-modal retrieval method called Optimal Transport-based Prototype Alignment Learning (OTPAL). Due to the high computational complexity involved in directly establishing matching correlations between unannotated unaligned cross-modal samples, instead, we establish matching correlations between shared prototypes and samples. To be specific, we employ the optimal transport algorithm to establish cross-modal alignment information between samples and prototypes, and then minimize the distance between samples and their corresponding prototypes through a specially designed prototype alignment loss. As an extension of this paper, we also extensively investigate the influence of incomplete multimodal data on cross-modal retrieval performance under the partially aligned setting proposed above. To further address the above more challenging scenario, we raise a scalable prototype-based neighbor feature completion method, which better captures the correlations between incomplete samples and neighbor samples through a cross-modal self-attention mechanism. Experimental results on four benchmark datasets show that our method can obtain satisfactory accuracy and scalability in various real-world scenarios. | Partially Aligned Cross-modal Retrieval via Optimal Transport-based Prototype Alignment Learning | [
"Junsheng Wang",
"Tiantian Gong",
"Yan Yan"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=z1vuuz86iQ | @inproceedings{
pham2024tale,
title={{TALE}: Training-free Cross-domain Image Composition via Adaptive Latent Manipulation and Energy-guided Optimization},
author={Kien T. Pham and Jingye Chen and Qifeng Chen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=z1vuuz86iQ}
} | We present TALE, a novel training-free framework harnessing the power of text-driven diffusion models to tackle cross-domain image composition task that aims at seamlessly incorporating user-provided objects into a specific visual context regardless of domain disparity. Previous methods often involve either training auxiliary networks or finetuning diffusion models on customized datasets, which are expensive and may undermine the robust textual and visual priors of pretrained diffusion models. Some recent works attempt to break the barrier by proposing training-free workarounds that rely on manipulating attention maps to tame the denoising process implicitly. However, composing via attention maps does not necessarily yield desired compositional outcomes. These approaches could only retain some semantic information and usually fall short in preserving identity characteristics of input objects or exhibit limited background-object style adaptation in generated images. In contrast, TALE is a novel method that operates directly on latent space to provide explicit and effective guidance for the composition process to resolve these problems. Specifically, we equip TALE with two mechanisms dubbed Adaptive Latent Manipulation and Energy-guided Latent Optimization. The former formulates noisy latents conducive to initiating and steering the composition process by directly leveraging background and foreground latents at corresponding timesteps, and the latter exploits designated energy functions to further optimize intermediate latents conforming to specific conditions that complement the former to generate desired final results. Our experiments demonstrate that TALE surpasses prior baselines and attains state-of-the-art performance in image-guided composition across various photorealistic and artistic domains. | TALE: Training-free Cross-domain Image Composition via Adaptive Latent Manipulation and Energy-guided Optimization | [
"Kien T. Pham",
"Jingye Chen",
"Qifeng Chen"
] | Conference | poster | 2408.03637 | [
""
] | https://huggingface.co/papers/2408.03637 | 1 | 0 | 0 | 3 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=z0OEHZbT71 | @inproceedings{
xiong2024segtalker,
title={SegTalker: Segmentation-based Talking Face Generation with Mask-guided Local Editing},
author={Lingyu Xiong and Xize Cheng and Jintao Tan and Xianjia Wu and Xiandong Li and Lei Zhu and Fei Ma and Minglei Li and Huang Xu and Zhihui Hu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=z0OEHZbT71}
} | Audio-driven talking face generation aims to synthesize video with lip movements synchronized to input audio. However, current generative techniques face challenges in preserving intricate regional textures (skin, teeth). To address the aforementioned challenges, we propose a novel framework called \textbf{SegTalker} to decouple lip movements and image textures by introducing segmentation as an intermediate representation. Specifically, given the mask of the image produced by a parsing network, we first leverage the speech to drive the mask and generate a talking segmentation. Then we disentangle the semantic regions of the image into style codes using a mask-guided encoder. Ultimately, we inject the previously generated talking segmentation and style codes into a mask-guided StyleGAN to synthesize the video frames. In this way, most of the textures are fully preserved. Moreover, our approach can inherently achieve background separation and facilitate mask-guided facial local editing. In particular, by editing the mask and swapping the region textures from a given reference image (e.g. hair, lip, eyebrows), our approach enables seamless facial editing when generating talking face videos. Experiments demonstrate that our proposed approach can effectively preserve texture details and generate temporally consistent video while remaining competitive in lip synchronization. Quantitative results on the HDTF dataset illustrate the superior performance of our method over existing methods on most metrics. | SegTalker: Segmentation-based Talking Face Generation with Mask-guided Local Editing | [
"Lingyu Xiong",
"Xize Cheng",
"Jintao Tan",
"Xianjia Wu",
"Xiandong Li",
"Lei Zhu",
"Fei Ma",
"Minglei Li",
"Huang Xu",
"Zhihui Hu"
] | Conference | poster | 2409.03605 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=yzPsE6aWAk | @inproceedings{
wang2024instantas,
title={Instant{AS}: Minimum Coverage Sampling for Arbitrary-Size Image Generation},
author={Changshuo Wang and Mingzhe Yu and Lei Wu and Lei Meng and Xiang Li and Xiangxu Meng},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=yzPsE6aWAk}
} | In recent years, diffusion models have dominated the field of image generation with their outstanding generation quality. However, pre-trained large-scale diffusion models are generally trained using fixed-size images, and fail to maintain their performance at different aspect ratios. Existing methods for generating arbitrary-size images based on diffusion models face several issues, including the requirement for extensive finetuning or training, sluggish sampling speed, and noticeable edge artifacts. This paper presents the InstantAS method for arbitrary-size image generation. This method performs non-overlapping minimum coverage segmentation on the target image, minimizing the generation of redundant information and significantly improving sampling speed. To maintain the consistency of the generated image, we also proposed the Inter-Domain Distribution Bridging method to integrate the distribution of the entire image and suppress the separation of diffusion paths in different regions of the image. Furthermore, we propose the dynamic semantic guided cross-attention method, allowing for the control of different regions using different semantics. InstantAS can be applied to nearly any existing pre-trained Text-to-Image diffusion model. Experimental results show that InstantAS has better fusion capabilities compared to previous arbitrary-size image generation methods and is far ahead in sampling speed compared to them. | InstantAS: Minimum Coverage Sampling for Arbitrary-Size Image Generation | [
"Changshuo Wang",
"Mingzhe Yu",
"Lei Wu",
"Lei Meng",
"Xiang Li",
"Xiangxu Meng"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=yuCgYQR0A8 | @inproceedings{
chen2024ssl,
title={{SSL}: A Self-similarity Loss for Improving Generative Image Super-resolution},
author={Du Chen and Zhengqiang ZHANG and Jie Liang and Lei Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=yuCgYQR0A8}
} | Generative adversarial networks (GAN) and generative diffusion models (DM) have been widely used in real-world image super-resolution (Real-ISR) to enhance the image perceptual quality. However, these generative models are prone to generating visual artifacts and false image structures, resulting in unnatural Real-ISR results. Based on the fact that natural images exhibit high self-similarities, i.e., a local patch can have many similar patches to it in the whole image, in this work we propose a simple yet effective self-similarity loss (SSL) to improve the performance of generative Real-ISR models, enhancing the hallucination of structural and textural details while reducing the unpleasant visual artifacts. Specifically, we compute a self-similarity graph (SSG) of the ground-truth image, and enforce the SSG of Real-ISR output to be close to it. To reduce the training cost and focus on edge areas, we generate an edge mask from the ground-truth image, and compute the SSG only on the masked pixels. The proposed SSL serves as a general plug-and-play penalty, which could be easily applied to the off-the-shelf Real-ISR models. Our experiments demonstrate that, by coupling with SSL, the performance of many state-of-the-art Real-ISR models, including those GAN and DM based ones, can be largely improved, reproducing more perceptually realistic image details and eliminating many false reconstructions and visual artifacts. Codes and supplementary material are available at https://github.com/ChrisDud0257/SSL | SSL: A Self-similarity Loss for Improving Generative Image Super-resolution | [
"Du Chen",
"Zhengqiang ZHANG",
"Jie Liang",
"Lei Zhang"
] | Conference | poster | 2408.05713 | [
"https://github.com/chrisdud0257/ssl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=ys3V4jiENk | @inproceedings{
zhao2024hawkeye,
title={Hawkeye: Discovering and Grounding Implicit Anomalous Sentiment in Recon-videos via Scene-enhanced Video Large Language Model},
author={Jianing Zhao and Jingjing Wang and Yujie Jin and Jiamin Luo and Guodong Zhou},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=ys3V4jiENk}
} | In real-world recon-videos such as surveillance and drone reconnaissance videos, the commonly used explicit language, acoustic, and facial expression information is often missing. However, these videos are always rich in anomalous sentiments (e.g., criminal tendencies), which urgently requires implicit scene information (e.g., actions and object relations) to quickly and precisely identify these anomalous sentiments. Motivated by this, this paper proposes a new chat-paradigm Implicit anomalous sentiment Discovering and grounding (IasDig) task, aiming to interactively and quickly discover and ground anomalous sentiments in recon-videos by leveraging implicit scene information (i.e., actions and object relations). Furthermore, this paper argues that the IasDig task faces two key challenges, i.e., scene modeling and scene balancing. To this end, this paper proposes a new Scene-enhanced Video Large Language Model named Hawkeye, i.e., acting like a raptor (e.g., a hawk) to discover and locate prey, for the IasDig task. Specifically, this approach designs a graph-structured scene modeling module and a balanced heterogeneous MoE module to address the above two challenges, respectively. Extensive experimental results on our constructed scene-sparsity and scene-density IasDig datasets demonstrate the great advantage of Hawkeye for IasDig over advanced Video-LLM baselines, especially on the metric of false negative rate. This justifies the importance of scene information for identifying implicit anomalous sentiments and the impressive practicality of Hawkeye for real-world applications. | Hawkeye: Discovering and Grounding Implicit Anomalous Sentiment in Recon-videos via Scene-enhanced Video Large Language Model | [
"Jianing Zhao",
"Jingjing Wang",
"Yujie Jin",
"Jiamin Luo",
"Guodong Zhou"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=yoofksFKk9 | @inproceedings{
gao2024learning,
title={Learning Enriched Features via Selective State Spaces Model for Efficient Image Deblurring},
author={Hu Gao and Bowen Ma and Ying Zhang and Jingfan Yang and Jing Yang and Depeng Dang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=yoofksFKk9}
} | Image deblurring aims to restore a high-quality image from its corresponding blurred counterpart. The emergence of CNNs and Transformers has enabled significant progress. However, these methods often face a dilemma between eliminating long-range degradation perturbations and maintaining computational efficiency. While the selective state space model (SSM) shows promise in modeling long-range dependencies with linear complexity, it also encounters challenges such as local pixel forgetting and channel redundancy. To address this issue, we propose an efficient image deblurring network that leverages a selective state spaces model to aggregate enriched and accurate features. Specifically, we introduce an aggregate local and global information block (ALGBlock) designed to effectively capture and integrate both local invariant properties and non-local information. The ALGBlock comprises two primary modules: a module for capturing local and global features (CLGF), and a feature aggregation module (FA). The CLGF module is composed of two branches: the global branch captures long-range dependency features via a selective state spaces model, while the local branch employs simplified channel attention to model local connectivity, thereby reducing local pixel forgetting and channel redundancy. In addition, we design an FA module to accentuate the local part by recalibrating the weights during the aggregation of the two branches for restoration. Experimental results demonstrate that the proposed method outperforms state-of-the-art approaches on widely used benchmarks. | Learning Enriched Features via Selective State Spaces Model for Efficient Image Deblurring | [
"Hu Gao",
"Bowen Ma",
"Ying Zhang",
"Jingfan Yang",
"Jing Yang",
"Depeng Dang"
] | Conference | poster | 2403.20106 | [
"https://github.com/Tombs98/ALGNet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=yoPXKdyctQ | @inproceedings{
xu2024tunnel,
title={Tunnel Try-on: Excavating Spatial-temporal Tunnels for High-quality Virtual Try-on in Videos},
author={Zhengze Xu and Mengting Chen and Zhao Wang and Linyu XING and Zhonghua Zhai and Nong Sang and Jinsong Lan and Shuai Xiao and Changxin Gao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=yoPXKdyctQ}
} | Video try-on is challenging and has not been well tackled in previous works. The main obstacle lies in preserving the clothing details and modeling the coherent motions simultaneously. Faced with those difficulties, we address video try-on by proposing a diffusion-based framework named "Tunnel Try-on." The core idea is excavating a ``focus tunnel'' in the input video that gives close-up shots around the clothing regions. We zoom in on the region in the tunnel to better preserve the fine details of the clothing. To generate coherent motions, we leverage the Kalman filter to smooth the tunnel and inject its position embedding into attention layers to improve the continuity of the generated videos. In addition, we develop an environment encoder to extract the context information outside the tunnels. Equipped with these techniques, Tunnel Try-on keeps fine clothing details and synthesizes stable and smooth videos. Demonstrating significant advancements, Tunnel Try-on could be regarded as the first attempt toward the commercial-level application of virtual try-on in videos. The project page is https://mengtingchen.github.io/tunnel-try-on-page/. | Tunnel Try-on: Excavating Spatial-temporal Tunnels for High-quality Virtual Try-on in Videos | [
"Zhengze Xu",
"Mengting Chen",
"Zhao Wang",
"Linyu XING",
"Zhonghua Zhai",
"Nong Sang",
"Jinsong Lan",
"Shuai Xiao",
"Changxin Gao"
] | Conference | poster | 2404.17571 | [
""
] | https://huggingface.co/papers/2404.17571 | 0 | 0 | 0 | 9 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=ymJS86vgzG | @inproceedings{
tan2024multiview,
title={Multi-view X-ray Image Synthesis with Multiple Domain Disentanglement from {CT} Scans},
author={Lixing Tan and shuang Song and Kangneng Zhou and Chengbo Duan and Lanying Wang and Huayang Ren and Linlin Liu and Wei Zhang and Ruoxiu Xiao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=ymJS86vgzG}
} | X-ray images play a vital role in intraoperative processes due to their high resolution and fast imaging speed, and they greatly promote subsequent segmentation, registration, and reconstruction. However, excessive X-ray doses pose potential risks to human health. Data-driven algorithms from volume scans to X-ray images are restricted by the scarcity of paired X-ray and volume data. Existing methods are mainly realized by modelling the whole X-ray imaging procedure. In this study, we propose a learning-based approach termed CT2X-GAN to synthesize X-ray images in an end-to-end manner using content and style disentanglement from three different image domains. Our method decouples anatomical structure information from CT scans and style information from unpaired real X-ray images/digital reconstructed radiography (DRR) images via a series of decoupling encoders. Additionally, we introduce a novel consistency regularization term to improve the stylistic resemblance between synthesized X-ray images and real X-ray images. Meanwhile, we also impose a supervised process by computing the similarity of computed real DRR and synthesized DRR images. We further develop a pose attention module to fully strengthen the comprehensive information in the decoupled content code from CT scans, facilitating high-quality multi-view image synthesis in the lower 2D space. Extensive experiments were conducted on the publicly available CTSpine1K dataset, achieving 97.8350, 0.0842 and 3.0938 in terms of FID, KID and the defined user-scored X-ray similarity, respectively. In comparison with 3D-aware methods ($\pi$-GAN, EG3D), CT2X-GAN is superior in synthesis quality and realism with respect to real X-ray images. | Multi-view X-ray Image Synthesis with Multiple Domain Disentanglement from CT Scans | [
"Lixing Tan",
"shuang Song",
"Kangneng Zhou",
"Chengbo Duan",
"Lanying Wang",
"Huayang Ren",
"Linlin Liu",
"Wei Zhang",
"Ruoxiu Xiao"
] | Conference | poster | 2404.11889 | [
""
] | https://huggingface.co/papers/2404.11889 | 0 | 0 | 0 | 9 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=yl4lmzP81M | @inproceedings{
wu2024multiple,
title={Multiple Kernel Clustering with Shifted Laplacian on Grassmann Manifold},
author={Xi Wu and Chuang Huang and Xinliu Liu and Fei Zhou and Zhenwen Ren},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=yl4lmzP81M}
} | Multiple kernel clustering (MKC) has garnered considerable attention owing to its efficacy in handling nonlinear data in high-dimensional space. However, current MKC methods have three primary issues: (1) they focus solely on clustering information while neglecting energy information and potential noise interference within the kernel; (2) the inherent manifold structure in the high-dimensional space is complex, yet they insufficiently explore the topological structure; (3) most encounter cubic computational complexity, posing a formidable resource consumption challenge. To tackle the above issues, we propose a novel MKC method with shifted Laplacian on Grassmann manifold (sLGm). Firstly, sLGm constructs an $r$-rank shifted Laplacian and subsequently reconstructs it, retaining the clustering-related and energy-related information while reducing the influence of noise. Additionally, sLGm introduces a Grassmann manifold for partition fusion, which can preserve topological information in the high-dimensional space. Notably, an optimal consensus partition can be concurrently learnt from the above two procedures, thereby yielding the clustering assignments, and the computational complexity of the whole procedure drops to quadratic. Finally, a comprehensive suite of experiments is conducted to demonstrate the effectiveness of sLGm. | Multiple Kernel Clustering with Shifted Laplacian on Grassmann Manifold | [
"Xi Wu",
"Chuang Huang",
"Xinliu Liu",
"Fei Zhou",
"Zhenwen Ren"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=yhKR1rIpWE | @inproceedings{
che2024enhanced,
title={Enhanced Tensorial Self-representation Subspace Learning for Incomplete Multi-view Clustering},
author={hangjun Che and Xinyu Pu and Deqiang Ouyang and Beibei Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=yhKR1rIpWE}
} | Incomplete Multi-View Clustering (IMVC) is a promising topic in multimedia as it breaks the data completeness assumption. Most existing methods solve IMVC from the perspective of graph learning. In contrast, self-representation learning enjoys a superior ability to explore relationships among samples. However, only a few works have explored the potentiality of self-representation learning in IMVC. These self-representation methods infer missing entries from the perspective of whole samples, resulting in redundant information. In addition, designing an effective strategy to retain salient features while eliminating noise is rarely considered in IMVC. To tackle these issues, we propose a novel self-representation learning method with missing sample recovery and enhanced low-rank tensor regularization. Specifically, the missing samples are inferred by leveraging the local structure of each view, which is constructed from available samples at the feature level. Then an enhanced tensor norm, referred to as Logarithm-p norm is devised, which can obtain an accurate cross-view description. Our proposed method achieves exact subspace representation in IMVC by leveraging high-order correlations and inferring missing information at the feature level. Extensive experiments on several widely used multi-view datasets demonstrate the effectiveness of the proposed method. | Enhanced Tensorial Self-representation Subspace Learning for Incomplete Multi-view Clustering | [
"hangjun Che",
"Xinyu Pu",
"Deqiang Ouyang",
"Beibei Li"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=ybqqGTWuhj | @inproceedings{
wang2024can,
title={Can We Debiase Multimodal Large Language Models via Model Editing?},
author={Zecheng Wang and Xinye Li and Zhanyue Qin and Chunshan Li and Zhiying Tu and Dianhui Chu and Dianbo Sui},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=ybqqGTWuhj}
} | Multimodal large language models (MLLM)
have been observed to exhibit biases originating from their training datasets. Unlike unimodal LLMs, biases in MLLMs may stem from interactions between multiple modalities, which increases the complexity of multimodal debiasing. Conventional approaches like fine-tuning to alleviate biases in models are costly and data-hungry. Model editing methods, which focus on post-hoc modifications of model knowledge, have recently demonstrated significant potential across diverse applications. These methods can effectively and precisely adjust the behavior of models in specific knowledge domains, while minimizing the impact on the overall performance of the model. However, there is currently no comprehensive study to drive the application of model editing methods in debiasing MLLM and to analyze its pros and cons. To facilitate research in this field, we define the debiasing problem of MLLM as an editing problem and propose a novel set of evaluation metrics for MLLM debias editing. Through various experiments, we demonstrate that: (1) Existing model editing methods can effectively alleviate biases in MLLM and can generalize well to semantically equivalent image-text pairs. However, most methods tend to adversely affect the stability of the MLLM. (2) Compared to editing the visual modality of the MLLM, editing the textual modality yields better results in addressing MLLM biases. (3) Model editing based debiasing method can achieve generalization across different types of biases. | Can We Debiase Multimodal Large Language Models via Model Editing? | [
"Zecheng Wang",
"Xinye Li",
"Zhanyue Qin",
"Chunshan Li",
"Zhiying Tu",
"Dianhui Chu",
"Dianbo Sui"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=ya9wqTWe7a | @inproceedings{
li2024glatrack,
title={{GLAT}rack: Global and Local Awareness for Open-Vocabulary Multiple Object Tracking},
author={Guangyao Li and Yajun Jian and Yan Yan and Hanzi Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=ya9wqTWe7a}
} | Open-vocabulary multi-object tracking (MOT) aims to track arbitrary objects encountered in the real world beyond the training set. However, recent methods rely solely on instance-level detection and association of novel objects, which may not consider the valuable fine-grained semantic representations of the targets within key and reference frames. In this paper, we propose a Global and Local Awareness open-vocabulary MOT method (GLATrack), which learns to tackle the task of real-world MOT from both global and instance-level perspectives. Specifically, we introduce a region-aware feature enhancement module to refine global knowledge for complementing local target information, which enhances semantic representation and bridges the distribution gap between the image feature map and the pooled regional features. We propose a bidirectional semantic complementarity strategy to mitigate semantic misalignment arising from missing target information in key frames, which dynamically selects valuable information within reference frames to enrich object representation during the knowledge distillation process. Furthermore, we introduce an appearance richness measurement module to provide appropriate representations for targets with different appearances. The proposed method gains an improvement of 6.9% in TETA and 5.6% in mAP on the large-scale TAO benchmark. | GLATrack: Global and Local Awareness for Open-Vocabulary Multiple Object Tracking | [
"Guangyao Li",
"Yajun Jian",
"Yan Yan",
"Hanzi Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=yYxoCHNDjl | @inproceedings{
qiao2024cartoonnet,
title={CartoonNet: Cartoon Parsing with Semantic Consistency and Structure Correlation},
author={Jian-Jun Qiao and Meng-Yu Duan and Xiao Wu and Yu-Pei Song},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=yYxoCHNDjl}
} | Cartoon parsing, which segments the body parts of cartoon images, is an important task for cartoon-centric applications. Due to the complex appearances, abstract drawing styles, and irregular structures of cartoon characters, cartoon parsing remains a challenging task. In this paper, a novel approach, named CartoonNet, is proposed for cartoon parsing, in which semantic consistency and structure correlation are integrated to address the visual diversity and structural complexity of cartoon parsing. A memory-based semantic consistency module is designed to learn the diverse appearances exhibited by cartoon characters. The memory bank stores features of diverse samples and retrieves the samples related to new samples for consistency, which aims to improve the semantic reasoning capability of the network. A self-attention mechanism is employed to conduct consistency learning among diverse body parts belonging to the retrieved samples and new samples. To capture the intricate structural information of cartoon images, a structure correlation module is proposed. Leveraging graph attention networks and a main body-aware mechanism, the proposed approach enables structural correlation, allowing it to parse cartoon images with complex structures. Experiments conducted on cartoon parsing and human parsing datasets demonstrate the effectiveness of the proposed method, which outperforms the state-of-the-art approaches for cartoon parsing and achieves competitive performance on human parsing. | CartoonNet: Cartoon Parsing with Semantic Consistency and Structure Correlation | [
"Jian-Jun Qiao",
"Meng-Yu Duan",
"Xiao Wu",
"Yu-Pei Song"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=yUHg63lWGC | @inproceedings{
hao2024addressing,
title={Addressing Imbalance for Class Incremental Learning in Medical Image Classification},
author={Xuze Hao and Wenqian Ni and Xuhao Jiang and Weimin Tan and Bo Yan},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=yUHg63lWGC}
} | Deep convolutional neural networks have made significant breakthroughs in medical image classification, under the assumption that training samples from all classes are simultaneously available. However, in real-world medical scenarios, there's a common need to continuously learn about new diseases, leading to the emerging field of class incremental learning (CIL) in the medical domain. Typically, CIL suffers from catastrophic forgetting when trained on new classes. This phenomenon is mainly caused by the imbalance between old and new classes, and it becomes even more challenging with imbalanced medical datasets. In this work, we introduce two simple yet effective plug-in methods to mitigate the adverse effects of the imbalance. First, we propose a CIL-balanced classification loss to mitigate the classifier bias toward majority classes via logit adjustment. Second, we propose a distribution margin loss that not only alleviates the inter-class overlap in embedding space but also enforces the intra-class compactness. We evaluate the effectiveness of our method with extensive experiments on three benchmark datasets (CCH5000, HAM10000, and EyePACS). The results demonstrate that our approach outperforms state-of-the-art methods. | Addressing Imbalance for Class Incremental Learning in Medical Image Classification | [
"Xuze Hao",
"Wenqian Ni",
"Xuhao Jiang",
"Weimin Tan",
"Bo Yan"
] | Conference | poster | 2407.13768 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=yPu0U9p2z3 | @inproceedings{
guo2024visuallanguage,
title={Visual-Language Collaborative Representation Network for Broad-Domain Few-Shot Image Classification},
author={Qianyu Guo and Jieji Ren and Haofen Wang and Tianxing Wu and Weifeng Ge and Wenqiang Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=yPu0U9p2z3}
} | Visual-language models based on CLIP have shown remarkable abilities in general few-shot image classification. However, their performance drops in specialized fields such as healthcare or agriculture, because CLIP's pre-training does not cover all category data. Existing methods excessively depend on the multi-modal information representation and alignment capabilities acquired from CLIP pre-training, which hinders accurate generalization to unfamiliar domains. To address this issue, this paper introduces a novel visual-language collaborative representation network (MCRNet), aiming at acquiring a generalized capability for collaborative fusion and representation of multi-modal information. Specifically, MCRNet learns to generate relational matrices from an information fusion perspective to acquire aligned multi-modal features. This relationship generation strategy is category-agnostic, so it can be generalized to new domains. A class-adaptive fine-tuning inference technique is also introduced to help MCRNet efficiently learn alignment knowledge for new categories using limited data. Additionally, the paper establishes a new broad-domain few-shot image classification benchmark containing seven evaluation datasets from five domains. Comparative experiments demonstrate that MCRNet outperforms current state-of-the-art models, achieving an average improvement of 13.06% and 13.73% in the 1-shot and 5-shot settings, highlighting the superior performance and applicability of MCRNet across various domains. | Visual-Language Collaborative Representation Network for Broad-Domain Few-Shot Image Classification | [
"Qianyu Guo",
"Jieji Ren",
"Haofen Wang",
"Tianxing Wu",
"Weifeng Ge",
"Wenqiang Zhang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=yN2zvWf9a0 | @inproceedings{
zeng2024peeling,
title={Peeling Back the Layers: Interpreting the Storytelling of ViT},
author={Jingjie Zeng and Zhihao Yang and Qi Yang and Liang Yang and Hongfei Lin},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=yN2zvWf9a0}
} | By integrating various modules with the Visual Transformer (ViT), we facilitate an interpretation of image processing across each layer and attention head. This method allows us to explore the connections both within and across the layers, enabling an analysis of how images are processed at different layers. We conduct an analysis of the contributions from each layer and attention head, shedding light on the intricate interactions and functionalities within the model's layers. This in-depth exploration not only highlights the visual cues between layers but also examines their capacity to navigate the transition from abstract concepts to tangible objects. It unveils the model's mechanism for building an understanding of images, providing a strategy for adjusting attention heads between layers, thus enabling targeted pruning and enhancement of performance for specific tasks. Our research indicates that achieving a scalable understanding of transformer models is within reach, offering ways for the refinement and enhancement of such models. | Peeling Back the Layers: Interpreting the Storytelling of ViT | [
"Jingjie Zeng",
"Zhihao Yang",
"Qi Yang",
"Liang Yang",
"Hongfei Lin"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=yKzOMrZQBl | @inproceedings{
han2024scene,
title={Scene Diffusion: Text-driven Scene Image Synthesis Conditioning on a Single 3D Model},
author={Xuan Han and Yihao Zhao and Mingyu You},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=yKzOMrZQBl}
} | The scene image is one of the important windows for showcasing product design. To obtain it, the standard 3D-based pipeline requires the designer not only to create the 3D model of the product but also to manually construct the entire scene in software, which hinders its adaptability in situations requiring rapid evaluation. This study aims to realize a novel conditional synthesis method to create the scene image based on a single-model rendering of the desired object and the scene description. In this task, the major challenges are ensuring the strict appearance fidelity of the drawn object and the overall visual harmony of the synthesized image. Achieving the former relies on maintaining an appropriate condition-output constraint, while the latter necessitates a well-balanced generation process for all regions of the image. In this work, we propose the Scene Diffusion framework to meet these challenges. It first introduces Shading Adaptive Condition Alignment (SACA), which functions as an intensive training objective to promote the appearance consistency between condition and output image without hindering the network's learning of global shading coherence. Afterwards, a novel low-to-high Frequency Progression Training Schedule (FPTS) is utilized to maintain the visual harmony of the entire image by moderating the growth of high-frequency signals in the object area. Extensive qualitative and quantitative results are presented to support the advantages of the proposed method. In addition, we also demonstrate the broader uses of Scene Diffusion, such as its incorporation with ControlNet. | Scene Diffusion: Text-driven Scene Image Synthesis Conditioning on a Single 3D Model | [
"Xuan Han",
"Yihao Zhao",
"Mingyu You"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=yAygQe3Uxd | @inproceedings{
xu2024highly,
title={Highly Transferable Diffusion-based Unrestricted Adversarial Attack on Pre-trained Vision-Language Models},
author={Wenzhuo Xu and Kai Chen and Ziyi Gao and Zhipeng Wei and Jingjing Chen and Yu-Gang Jiang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=yAygQe3Uxd}
} | Pre-trained Vision-Language Models (VLMs) have shown great ability in various Vision-Language tasks. However, these VLMs exhibit inherent vulnerabilities to transferable adversarial examples, which could potentially undermine their performance and reliability in real-world applications. Cross-modal interactions have been demonstrated to be the key to boosting adversarial transferability, but their utilization is limited in existing multimodal transferable adversarial attacks. Stable Diffusion, which contains multiple cross-attention modules, possesses great potential in facilitating adversarial transferability by leveraging abundant cross-modal interactions. Therefore, we propose a Multimodal Diffusion-based Attack (MDA), which conducts adversarial attacks against VLMs using Stable Diffusion. Specifically, MDA initially generates adversarial text, which is subsequently utilized as guidance to optimize the adversarial image during the diffusion process. Besides leveraging adversarial text in calculating the downstream loss to obtain gradients for optimizing the image, MDA also takes it as the guiding prompt in adversarial image generation during the denoising process, which enriches the ways of cross-modal interactions, thus strengthening the adversarial transferability. Compared with pixel-based attacks, MDA introduces perturbations in the latent space rather than pixel space to manipulate high-level semantics, which is also beneficial to improving adversarial transferability. Experimental results demonstrate that the adversarial examples generated by MDA are highly transferable across different VLMs on different downstream tasks, surpassing state-of-the-art methods by a large margin. | Highly Transferable Diffusion-based Unrestricted Adversarial Attack on Pre-trained Vision-Language Models | [
"Wenzhuo Xu",
"Kai Chen",
"Ziyi Gao",
"Zhipeng Wei",
"Jingjing Chen",
"Yu-Gang Jiang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=y9J0PNOOrY | @inproceedings{
dai2024expressivesinger,
title={ExpressiveSinger: Multilingual and Multi-Style Score-based Singing Voice Synthesis with Expressive Performance Control},
author={Shuqi Dai and Ming-Yu Liu and Rafael Valle and Siddharth Gururani},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=y9J0PNOOrY}
} | Singing Voice Synthesis (SVS) has significantly advanced with deep generative models, achieving high audio quality but still struggling with musicality, mainly due to the lack of performance control over timing, dynamics, and pitch, which are essential for music expression. Additionally, integrating data and supporting diverse languages and styles in SVS remain challenging. To tackle these issues, this paper presents \textit{ExpressiveSinger}, an SVS framework that leverages a cascade of diffusion models to generate realistic singing across multiple languages, styles, and techniques from scores and lyrics. Our approach begins with consolidating, cleaning, annotating, and processing public singing datasets, developing a multilingual phoneme set, and incorporating different musical styles and techniques. We then design methods for generating expressive performance control signals including phoneme timing, F0 curves, and amplitude envelopes, which enhance musicality and model consistency, introduce more controllability, and reduce data requirements. Finally, we generate mel-spectrograms and audio from performance control signals with style guidance and singer timbre embedding. Our models also enable trained singers to sing in new languages and styles. A listening test reveals both high musicality and audio quality of our generated singing compared with existing works and human singing. We release the data for future research. Demo: https://expressivesinger.github.io/ExpressiveSinger. | ExpressiveSinger: Multilingual and Multi-Style Score-based Singing Voice Synthesis with Expressive Performance Control | [
"Shuqi Dai",
"Ming-Yu Liu",
"Rafael Valle",
"Siddharth Gururani"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=y5R8XVVA03 | @inproceedings{
li2024progressive,
title={Progressive Prototype Evolving for Dual-Forgetting Mitigation in Non-Exemplar Online Continual Learning},
author={Qiwei Li and Yuxin Peng and Jiahuan Zhou},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=y5R8XVVA03}
} | Online Continual Learning (OCL) aims at learning a model through a sequence of single-pass data, usually encountering the challenges of catastrophic forgetting both between different learning stages and within a stage. Currently, existing OCL methods address these issues by replaying part of previous data but inevitably raise data privacy concerns and stand in contrast to the setting of online learning where data can only be accessed once. Moreover, their performance will dramatically drop without any replay buffer. In this paper, we propose a Non-Exemplar Online Continual Learning method named Progressive Prototype Evolving (PPE). The core of our PPE is to progressively learn class-specific prototypes during the online learning phase without reusing any previously seen data. Meanwhile, the progressive prototypes of the current learning stage, serving as the accumulated knowledge of different classes, are fed back to the model to mitigate intra-stage forgetting. Additionally, to resist inter-stage forgetting, we introduce the Prototype Similarity Preserving and Prototype-Guided Gradient Constraint modules which distill and leverage the historical knowledge conveyed by prototypes to regularize the one-way model learning. Consequently, extensive experiments on three widely used datasets demonstrate the superiority of the proposed PPE against the state-of-the-art exemplar-based OCL approaches. Our code will be released. | Progressive Prototype Evolving for Dual-Forgetting Mitigation in Non-Exemplar Online Continual Learning | [
"Qiwei Li",
"Yuxin Peng",
"Jiahuan Zhou"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=y4kdAruRWT | @inproceedings{
wang2024pssdtransformer,
title={{PSSD}-Transformer: Powerful Sparse Spike-Driven Transformer for Image Semantic Segmentation},
author={Hongzhi Wang and Xiubo Liang and Tao Zhang and Gu Yue and Weidong Geng},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=y4kdAruRWT}
} | Spiking Neural Networks (SNNs) have indeed shown remarkable promise in the field of computer vision, emerging as a low-energy alternative to traditional Artificial Neural Networks (ANNs). However, SNNs also face several challenges: i) Existing SNNs are not purely additive and involve a substantial amount of floating-point computations, which contradicts the original design intention of adapting to neuromorphic chips; ii) The incorrect positioning of convolutional and pooling layers relative to spiking layers leads to reduced accuracy; iii) Leaky Integrate-and-Fire (LIF) neurons have limited capability in representing local information, which is disadvantageous for downstream visual tasks like semantic segmentation.
To address the challenges in SNNs, i) we introduce Pure Sparse Self Attention (PSSA) and Dynamic Spiking Membrane Shortcut (DSMS), combining them to tackle the issue of floating-point computations; ii) the Spiking Precise Gradient downsampling (SPG-down) method is proposed for accurate gradient transmission; iii) the Group-LIF neuron concept is introduced to ensure LIF neurons' capability in representing local information both horizontally and vertically, enhancing their applicability in semantic segmentation tasks. Ultimately, these three solutions are integrated into the Powerful Sparse-Spike-Driven Transformer (PSSD-Transformer), effectively handling semantic segmentation tasks and addressing the challenges inherent in Spiking Neural Networks. The experimental results demonstrate that our model outperforms previous results on standard classification datasets and also shows commendable performance on semantic segmentation datasets. The code will be made publicly available after the paper is accepted for publication. | PSSD-Transformer: Powerful Sparse Spike-Driven Transformer for Image Semantic Segmentation | [
"Hongzhi Wang",
"Xiubo Liang",
"Tao Zhang",
"Gu Yue",
"Weidong Geng"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=xy5GYziag0 | @inproceedings{
zhou2024rethinking,
title={Rethinking Impersonation and Dodging Attacks on Face Recognition Systems},
author={Fengfan Zhou and Qianyu Zhou and Bangjie Yin and Hui Zheng and Xuequan Lu and Lizhuang Ma and Hefei Ling},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=xy5GYziag0}
} | Face Recognition (FR) systems can be easily deceived by adversarial examples that manipulate benign face images through imperceptible perturbations. Adversarial attacks on FR encompass two types: impersonation (targeted) attacks and dodging (untargeted) attacks. Previous methods often achieve a successful impersonation attack on FR; however, this does not necessarily guarantee a successful dodging attack on FR in the black-box setting. In this paper, our key insight is that the generation of adversarial examples should perform both impersonation and dodging attacks simultaneously. To this end, we propose a novel attack method, termed Adversarial Pruning (Adv-Pruning), that fine-tunes existing adversarial examples to enhance their dodging capabilities while preserving their impersonation capabilities. Adv-Pruning consists of Priming, Pruning, and Restoration stages. Concretely, we propose Adversarial Priority Quantification to measure the region-wise priority of original adversarial perturbations, identifying and releasing those with minimal impact on absolute model output variances. Then, Biased Gradient Adaptation is presented to adapt the adversarial examples to traverse the decision boundaries of both the attacker and victim by adding perturbations favoring dodging attacks on the vacated regions, preserving the prioritized features of the original perturbations while boosting dodging performance. As a result, we can maintain the impersonation capabilities of the original adversarial examples while effectively enhancing dodging capabilities. Comprehensive experiments demonstrate the superiority of our method compared with state-of-the-art adversarial attacks. | Rethinking Impersonation and Dodging Attacks on Face Recognition Systems | [
"Fengfan Zhou",
"Qianyu Zhou",
"Bangjie Yin",
"Hui Zheng",
"Xuequan Lu",
"Lizhuang Ma",
"Hefei Ling"
] | Conference | poster | 2401.08903 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=xuyHQHwlxm | @inproceedings{
chen2024sdepr,
title={{SD}e{PR}: Fine-Grained Leaf Image Retrieval with Structural Deep Patch Representation},
author={Xin Chen and Bin Wang and jinzheng jiang and Kunkun Zhang and Yongsheng Gao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=xuyHQHwlxm}
} | Fine-grained leaf image retrieval (FGLIR) is a new unsupervised pattern recognition task in content-based image retrieval (CBIR). It aims to distinguish varieties/cultivars of leaf images within a certain plant species and is more challenging than the general leaf image retrieval task due to the inherently subtle differences across different cultivars. In this study, we investigate, for the first time, how to mine the spatial structure and contextual information from the activations of the convolutional layers of CNNs for FGLIR. To achieve this goal, we design a novel geometrical structure, named Triplet Patch-Pairs Composite Structure (TPCS), consisting of three symmetric patch pairs segmented from the leaf images in different orientations. We extract a CNN feature map for each patch in the TPCS and measure the difference between the feature maps of each patch pair to construct a local deep self-similarity descriptor. By varying the size of the TPCS, we can yield multi-scale deep self-similarity descriptors. The final aggregated local deep self-similarity descriptors, named Structural Deep Patch Representation (SDePR), not only encode the spatial structure and contextual information of leaf images in the deep feature domain, but also are invariant to geometrical transformations. Extensive experiments applying our SDePR method to challenging public FGLIR tasks show that our method outperforms the state-of-the-art handcrafted visual features and deep retrieval models. | SDePR: Fine-Grained Leaf Image Retrieval with Structural Deep Patch Representation | [
"Xin Chen",
"Bin Wang",
"jinzheng jiang",
"Kunkun Zhang",
"Yongsheng Gao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=xutEXJnT1R | @inproceedings{
kuang2024learning,
title={Learning Context with Priors for 3D Interacting Hand-Object Pose Estimation},
author={Zengsheng Kuang and Changxing Ding and Huan Yao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=xutEXJnT1R}
} | Achieving 3D hand-object pose estimation in interaction scenarios is challenging due to the severe occlusion generated during the interaction. Existing methods address this issue by utilizing the correlation between the hand and object poses as additional cues. They usually first extract the hand and object features from their respective regions and then refine them with each other. However, this paradigm disregards the role of a broad range of image context. To address this problem, we propose a novel and robust approach that learns a broad range of context by imposing priors. First, we build this approach using stacked transformer decoder layers. These layers are required for extracting image-wide context and regional hand or object features by constraining cross-attention operations. We share the context decoder layer parameters between the hand and object pose estimations to avoid interference in the context-learning process. This imposes a prior, indicating that the hand and object are mutually the most important context for each other, significantly enhancing the robustness of obtained context features. Second, since they play different roles, we provide customized feature maps for the context, hand, and object decoder layers. This strategy facilitates the disentanglement of these layers, reducing the feature learning complexity. Finally, we conduct extensive experiments on the popular HO3D and Dex-YCB databases. The experimental results indicate that our method significantly outperforms state-of-the-art approaches and can be applied to other hand pose estimation tasks. The code will be released. | Learning Context with Priors for 3D Interacting Hand-Object Pose Estimation | [
"Zengsheng Kuang",
"Changxing Ding",
"Huan Yao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=xtnDvFtng3 | @inproceedings{
ying2024dig,
title={{DIG}: Complex Layout Document Image Generation with Authentic-looking Text for Enhancing Layout Analysis},
author={Dehao Ying and Fengchang Yu and Haihua Chen and Wei Lu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=xtnDvFtng3}
} | Even though significant progress has been made in standardizing document layout analysis, complex layout documents like magazines, newspapers, and posters still present challenges. Models trained on standardized documents struggle with these complexities, and the high cost of annotating such documents limits dataset availability. To address this, we propose the Complex Layout Document Image Generation (DIG) model, which can generate diverse document images with complex layouts and authentic-looking text, aiding in layout analysis model training. Concretely, we first pretrain DIG on a large-scale document dataset with a text-sensitive loss function to address the issue of unrealistic text-region generation. Then, we fine-tune it with a small number of documents with complex layouts to generate new images with the same layout. Additionally, we use a layout generation model to create new layouts, enhancing data diversity. Finally, we design a box-wise quality scoring function to filter out low-quality regions during layout analysis model training to enhance the effectiveness of using the generated images. Experimental results on the DSSE-200 and PRImA datasets show that when incorporating generated images from DIG, the mAP of the layout analysis model is improved from 47.05 to 56.07 and from 53.80 to 62.26, respectively, which is a 19.17% and 15.72% enhancement compared to the baseline. | DIG: Complex Layout Document Image Generation with Authentic-looking Text for Enhancing Layout Analysis | [
"Dehao Ying",
"Fengchang Yu",
"Haihua Chen",
"Wei Lu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=xsx39qJaLq | @inproceedings{
lu2024facialflownet,
title={FacialFlowNet: Advancing Facial Optical Flow Estimation with a Diverse Dataset and a Decomposed Model},
author={Jianzhi Lu and Ruian He and Shili Zhou and Weimin Tan and Bo Yan},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=xsx39qJaLq}
} | Facial movements play a crucial role in conveying attitude and intentions, and facial optical flow provides a dynamic and detailed representation of them. However, the scarcity of datasets and the lack of a modern baseline hinder progress in facial optical flow research. This paper proposes FacialFlowNet (FFN), a novel large-scale facial optical flow dataset, and the Decomposed Facial Flow Model (DecFlow), the first method capable of decomposing facial flow. FFN comprises 9,635 identities and 105,970 image pairs, offering unprecedented diversity for detailed facial and head motion analysis. DecFlow features a facial semantic-aware encoder and a decomposed flow decoder, excelling in accurately estimating and decomposing facial flow into head and expression components. Comprehensive experiments demonstrate that FFN significantly enhances the accuracy of facial flow estimation across various optical flow methods, achieving up to an 11% reduction in Endpoint Error (EPE) (from 3.91 to 3.48). Moreover, DecFlow, when coupled with FFN, outperforms existing methods in both synthetic and real-world scenarios, enhancing facial expression analysis. The decomposed expression flow achieves a substantial accuracy improvement of 18% (from 69.1% to 82.1%) in micro-expression recognition. These contributions represent a significant advancement in facial motion analysis and optical flow estimation. Codes and datasets will be available to the public. | FacialFlowNet: Advancing Facial Optical Flow Estimation with a Diverse Dataset and a Decomposed Model | [
"Jianzhi Lu",
"Ruian He",
"Shili Zhou",
"Weimin Tan",
"Bo Yan"
] | Conference | poster | 2409.05396 | [
"https://github.com/ria1159/facialflownet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=xoDIqKbTS7 | @inproceedings{
liu2024semanticaware,
title={Semantic-aware Representation Learning for Homography Estimation},
author={Yuhan Liu and Qianxin Huang and Siqi Hui and Jingwen Fu and Sanping Zhou and Kangyi Wu and Pengna Li and Jinjun Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=xoDIqKbTS7}
} | Homography estimation is the task of determining the transformation from an image pair. Our approach focuses on employing detector-free feature matching methods to address this issue. Previous work has underscored the importance of incorporating semantic information; however, an efficient way to utilize semantic information is still lacking. Previous methods treat semantics as a pre-processing step, which makes their utilization overly coarse-grained and limits adaptability when dealing with different tasks. In our work, we seek another way to use the semantic information, that is, a semantic-aware feature representation learning framework. Based on this, we propose SRMatcher, a new detector-free feature matching method, which encourages the network to learn integrated semantic feature representations. Specifically, to capture precise and rich semantics, we leverage the capabilities of recently popularized vision foundation models (VFMs) trained on extensive datasets. Then, a cross-image Semantic-aware Fusion Block (SFB) is proposed to integrate its fine-grained semantic features into the feature representation space. In this way, by reducing errors stemming from semantic inconsistencies in matching pairs, our proposed SRMatcher is able to deliver more accurate and realistic outcomes. Extensive experiments show that SRMatcher surpasses solid baselines and attains SOTA results on multiple real-world datasets. Compared to the previous SOTA approach GeoFormer, SRMatcher increases the area under the cumulative curve (AUC) by about 11% on HPatches. Additionally, SRMatcher can serve as a plug-and-play framework for other matching methods like LoFTR, yielding substantial precision improvements. | Semantic-aware Representation Learning for Homography Estimation | [
"Yuhan Liu",
"Qianxin Huang",
"Siqi Hui",
"Jingwen Fu",
"Sanping Zhou",
"Kangyi Wu",
"Pengna Li",
"Jinjun Wang"
] | Conference | poster | 2407.13284 | [
"https://github.com/lyh200095/srmatcher"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=xnpEpIqjUU | @inproceedings{
hong2024consplan,
title={Cons2Plan: Vector Floorplan Generation from Various Conditions via a Learning Framework based on Conditional Diffusion Models},
author={Shibo Hong and Xuhong Zhang and Tianyu Du and Sheng Cheng and Xun Wang and Jianwei Yin},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=xnpEpIqjUU}
} | The field of floorplan generation has attracted significant interest from the community. Remarkably, recent progress in methods based on generative models has substantially promoted the development of floorplan generation. However, generating floorplans that satisfy various conditions remains a challenging task. This paper proposes a learning framework, named Cons2Plan, for automatically generating high-quality vector floorplans from various conditions. The input conditions can be graphs, boundaries, or a combination of both. The conditional diffusion model is the core component of Cons2Plan. The denoising network uses a conditional embedding module to incorporate the conditions as guidance during the reverse process. Additionally, Cons2Plan incorporates a two-stage approach that generates graph conditions based on boundaries. It utilizes three regression models for node prediction and a novel conditional edge generation diffusion model, named CEDM, for edge generation. We conduct qualitative evaluations, quantitative comparisons, and ablation studies to demonstrate that our method can produce higher-quality floorplans than those generated by state-of-the-art methods. | Cons2Plan: Vector Floorplan Generation from Various Conditions via a Learning Framework based on Conditional Diffusion Models | [
"Shibo Hong",
"Xuhong Zhang",
"Tianyu Du",
"Sheng Cheng",
"Xun Wang",
"Jianwei Yin"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=xm7OAfcf3U | @inproceedings{
jiang2024remonet,
title={{RE}moNet: Reducing Emotional Label Noise via Multi-regularized Self-supervision},
author={Weibang Jiang and Yu-Ting Lan and Bao-liang Lu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=xm7OAfcf3U}
} | Emotion recognition based on electroencephalogram (EEG) has garnered increasing attention in recent years due to the non-invasiveness and high reliability of EEG measurements. Despite the promising performance achieved by numerous existing methods, several challenges persist. Firstly, there is the challenge of emotional label noise, stemming from the assumption that emotions remain consistently evoked and stable throughout the entirety of video observation. Such an assumption proves difficult to uphold in practical experimental settings, leading to discrepancies between EEG signals and anticipated emotional states. In addition, there is the need to comprehensively capture the temporal-spatial-spectral characteristics of EEG signals and to cope with low signal-to-noise ratio (SNR) issues. To tackle these challenges, we propose a comprehensive pipeline named REmoNet, which leverages novel self-supervised techniques and multi-regularized co-learning. Two self-supervised methods, including masked channel modeling via temporal-spectral transformation and emotion contrastive learning, are introduced to facilitate the comprehensive understanding and extraction of emotion-relevant EEG representations during pre-training. Additionally, fine-tuning with multi-regularized co-learning exploits feature-dependent information through intrinsic similarity, thereby mitigating emotional label noise. Experimental evaluations on two public datasets demonstrate that our proposed approach, REmoNet, surpasses existing state-of-the-art methods, showcasing its effectiveness in simultaneously addressing raw EEG signals and noisy emotional labels. | REmoNet: Reducing Emotional Label Noise via Multi-regularized Self-supervision | [
"Weibang Jiang",
"Yu-Ting Lan",
"Bao-liang Lu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=xeSwPazJbs | @inproceedings{
lin2024exploring,
title={Exploring Matching Rates: From Key Point Selection to Camera Relocalization},
author={Hu Lin and Chengjiang Long and Yifeng Fei and qianchen xia and Erwei Yin and Baocai Yin and Xin Yang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=xeSwPazJbs}
} | Camera relocalization is the task of estimating camera pose within a known scene. It has important applications in the fields of Virtual Reality (VR), Augmented Reality (AR), robotics, and more within the domain of computer vision. Learning-based camera relocalizers have demonstrated leading pose accuracy, yet all current methods invariably utilize all the information within an image for pose estimation. This may offer robustness under challenging viewpoints but impacts the localization accuracy for viewpoints that are easier to localize. In this paper, we propose a method to gauge the credibility of image pose, enabling our approach to achieve more accurate localization on keyframes. Additionally, we have devised a keypoint selection method predicated on matching rate. Furthermore, we have developed a keypoint evaluation technique based on reprojection error, which estimates the scene coordinates for points within the scene that truly warrant attention, thereby enhancing the localization performance for keyframes. We also introduce a gated camera pose estimation strategy, employing an updated keypoint-based network for keyframes with higher credibility and a more robust network for difficult viewpoints. By adopting an effective curriculum learning scheme, we have achieved higher accuracy within a training span of just 20 minutes. Our method's superior performance is validated through rigorous experimentation. The code will be released. | Exploring Matching Rates: From Key Point Selection to Camera Relocalization | [
"Hu Lin",
"Chengjiang Long",
"Yifeng Fei",
"qianchen xia",
"Erwei Yin",
"Baocai Yin",
"Xin Yang"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=xcegM22L6D | @inproceedings{
chen2024finegrained,
title={Fine-Grained Side Information Guided Dual-Prompts for Zero-Shot Skeleton Action Recognition},
author={Yang Chen and Jingcai Guo and Tian He and Xiaocheng Lu and Ling Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=xcegM22L6D}
} | Skeleton-based zero-shot action recognition aims to recognize unknown human actions based on the learned priors of the known skeleton-based actions and a semantic descriptor space shared by both known and unknown categories. However, previous works focus on establishing bridges between the known skeleton representation space and the semantic description space at a coarse-grained level for recognizing unknown action categories, ignoring the fine-grained alignment of these two spaces, resulting in suboptimal performance in distinguishing high-similarity action categories. To address these challenges, we propose a novel method via Side information and dual-prompTs learning for skeleton-based zero-shot Action Recognition (STAR) at the fine-grained level. Specifically, 1) we decompose the skeleton into several parts based on its topology structure and introduce the side information concerning multi-part descriptions of human body movements for alignment between the skeleton and the semantic space at the fine-grained level; 2) we design the visual-attribute and semantic-part prompts to improve the intra-class compactness within the skeleton space and inter-class separability within the semantic space, respectively, to distinguish the high-similarity actions. Extensive experiments show that our method achieves state-of-the-art performance in ZSL and GZSL settings on NTU RGB+D, NTU RGB+D 120 and PKU-MMD datasets. The code will be available in the future. | Fine-Grained Side Information Guided Dual-Prompts for Zero-Shot Skeleton Action Recognition | [
"Yang Chen",
"Jingcai Guo",
"Tian He",
"Xiaocheng Lu",
"Ling Wang"
] | Conference | poster | 2404.07487 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=xcWifOV1VX | @inproceedings{
hui2024scsnet,
title={S\${\textasciicircum}2\$-{CSN}et: Scale-Aware Scalable Sampling Network for Image Compressive Sensing},
author={Chen Hui and Haiqi Zhu and Shuya Yan and Shaohui Liu and Feng Jiang and Debin Zhao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=xcWifOV1VX}
} | Deep network-based image Compressive Sensing (CS) has attracted much attention in recent years. However, there still exist the following two issues: 1) Existing methods typically use fixed-scale sampling, which leads to limited insights into the image content. 2) Most pre-trained models can only handle fixed sampling rates and fixed block scales, which restricts the scalability of the model. In this paper, we propose a novel scale-aware scalable CS network (dubbed S$^2$-CSNet), which achieves scale-aware adaptive sampling, fine-granular scalability and high-quality reconstruction with one single model. Specifically, to enhance the scalability of the model, a structural sampling matrix with a predefined order is first designed, which is a universal sampling matrix that can sample multi-scale image blocks with arbitrary sampling rates. Then, based on the universal sampling matrix, a distortion-guided scale-aware scheme is presented to achieve scale-variable adaptive sampling, which predicts the reconstruction distortion at different sampling scales from the measurements and selects the optimal division scale for sampling. Furthermore, a multi-scale hierarchical sub-network under a well-defined compact framework is put forward to reconstruct the image. In the multi-scale feature domain of the sub-network, a dual spatial attention is developed to explore the local and global affinities between dense feature representations for deep fusion. Extensive experiments demonstrate that the proposed S$^2$-CSNet outperforms existing state-of-the-art CS methods. | S^2-CSNet: Scale-Aware Scalable Sampling Network for Image Compressive Sensing | [
"Chen Hui",
"Haiqi Zhu",
"Shuya Yan",
"Shaohui Liu",
"Feng Jiang",
"Debin Zhao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=xX66hwZJWa | @inproceedings{
zhang2024fusionocc,
title={FusionOcc: Multi-Modal Fusion for 3D Occupancy Prediction},
author={Shuo Zhang and Yupeng Zhai and Jilin Mei and Yu Hu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=xX66hwZJWa}
} | 3D occupancy prediction (OCC) aims to estimate and predict the semantic occupancy state of the surrounding environment, which is crucial for scene understanding and reconstruction in the real world. However, existing methods for 3D OCC mainly rely on surround-view camera images, whose performance is still insufficient in some challenging scenarios, such as low-light conditions. To this end, we propose a new multi-modal fusion network for 3D occupancy prediction by fusing features of LiDAR point clouds and surround-view images, called FusionOcc. Our model fuses features of these two modals in 2D and 3D space, respectively. By integrating the depth information from point clouds, a cross-modal fusion module is designed to predict a 2D dense depth map, enabling an accurate depth estimation and a better transition of 2D image features into 3D space. In addition, features of voxelized point clouds are aligned and merged with image features converted by a view-transformer in 3D space. Experiments show that FusionOcc establishes the new state of the art on Occ3D-nuScenes dataset, achieving a mIoU score of 35.94% (without visibility mask) and 56.62% (with visibility mask), showing an average improvement of 3.42% compared to the best previous method. Our work provides a new baseline for further research in multi-modal fusion for 3D occupancy prediction. | FusionOcc: Multi-Modal Fusion for 3D Occupancy Prediction | [
"Shuo Zhang",
"Yupeng Zhai",
"Jilin Mei",
"Yu Hu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=xUtNrKH8iB | @inproceedings{
wang2024nft,
title={{NFT}1000: A Cross-Modal Dataset For Non-Fungible Token Retrieval},
author={Shuxun Wang and Yunfei Lei and Ziqi Zhang and Wei Liu and Haowei Liu and Li Yang and Bing Li and Wenjuan Li and Jin Gao and Weiming Hu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=xUtNrKH8iB}
} | With the rise of "Metaverse" and "Web 3.0", Non-Fungible Tokens (NFTs) have emerged as a kind of pivotal digital asset, garnering significant attention. By the end of March 2024, more than 1.7 billion NFTs have been minted across various blockchain platforms. To effectively locate a desired NFT token, conducting searches within the huge amount NFTs is essential. The challenge in NFT retrieval is heightened due to the high degree of similarity among different NFTs regarding regional and semantic aspects. In this paper, we introduce a dataset named “NFT Top1000 Visual-Text Dataset”(NFT1000), containing 7.56 million image-text pairs, and being collected from 1000 most famous PFP NFT collections by sales volume on the Ethereum blockchain. Based on this dataset, building upon the foundation of the CLIP series of pre-trained models, we propose a dynamic masking fine-grained contrastive learning fine-tuning approach, which enables us to fine-tune a more performant model using only 13% of the total training data (0.79 million v.s. 6.1 million), resulting in a 7.2% improvement in the top-1 accuracy rate. We also propose a robust metric Comprehensive Variance Index (CVI) to assess the similarity and retrieval difficulty of visual-text pairs data. Please try our retrieval demo at https://876p9s4054.vicp.fun/ | NFT1000: A Cross-Modal Dataset For Non-Fungible Token Retrieval | [
"Shuxun Wang",
"Yunfei Lei",
"Ziqi Zhang",
"Wei Liu",
"Haowei Liu",
"Li Yang",
"Bing Li",
"Wenjuan Li",
"Jin Gao",
"Weiming Hu"
] | Conference | poster | 2402.16872 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=xO2rRddRvT | @inproceedings{
pan2024towards,
title={Towards Small Object Editing: A Benchmark Dataset and A Training-Free Approach},
author={Qihe Pan and Zhen Zhao and Zicheng Wang and Sifan Long and Yiming Wu and Wei Ji and Haoran Liang and Ronghua Liang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=xO2rRddRvT}
} | A plethora of text-guided image editing methods has recently been developed by leveraging the impressive capabilities of large-scale diffusion-based generative models, especially Stable Diffusion. Despite the success of diffusion models in producing high-quality images, their application to small object generation has been limited due to difficulties in aligning cross-modal attention maps between text and these objects. Our approach offers a training-free method that significantly mitigates this alignment issue with local and global attention guidance, enhancing the model's ability to accurately render small objects in accordance with textual descriptions. We detail the methodology in our approach, emphasizing its divergence from traditional generation techniques and highlighting its advantages. More importantly, we also provide \textit{SOEBench} (Small Object Editing), a standardized benchmark for quantitatively evaluating text-based small object generation collected from \textit{MSCOCO}\cite{lin2014microsoft} and \textit{OpenImage}\cite{kuznetsova2020open}. Preliminary results demonstrate the effectiveness of our method, showing marked improvements in the fidelity and accuracy of small object generation compared to existing models. This advancement not only contributes to the field of AI and computer vision but also opens up new possibilities for applications in various industries where precise image generation is critical. We will release our dataset on our project page: https://soebench.github.io/ | Towards Small Object Editing: A Benchmark Dataset and A Training-Free Approach | [
"Qihe Pan",
"Zhen Zhao",
"Zicheng Wang",
"Sifan Long",
"Yiming Wu",
"Wei Ji",
"Haoran Liang",
"Ronghua Liang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=xJK53lGJP2 | @inproceedings{
mao2024mdtag,
title={{MDT}-A2G: Exploring Masked Diffusion Transformers for Co-Speech Gesture Generation},
author={Xiaofeng Mao and Zhengkai Jiang and Qilin Wang and Chencan Fu and Jiangning Zhang and Jiafu Wu and Yabiao Wang and Chengjie Wang and Wei Li and Mingmin Chi},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=xJK53lGJP2}
} | Recent advancements in the field of Diffusion Transformers have substantially improved the generation of high-quality 2D images, 3D videos, and 3D shapes. However, the effectiveness of the Transformer architecture in the domain of co-speech gesture generation remains relatively unexplored, as prior methodologies have predominantly employed Convolutional Neural Networks (CNNs) or simply a few transformer layers. In an attempt to bridge this research gap, we introduce a novel Masked Diffusion Transformer for co-speech gesture generation, referred to as MDT-A2G, which directly implements the denoising process on gesture sequences. To enhance the contextual reasoning capability of temporally aligned speech-driven gestures, we incorporate a novel Masked Diffusion Transformer. This model employs a mask modeling scheme specifically designed to strengthen temporal relation learning among sequence gestures, thereby expediting the learning process and leading to coherent and realistic motions. Apart from audio, our MDT-A2G model also integrates multi-modal information, encompassing text, emotion, and identity. Furthermore, we propose an efficient inference strategy that diminishes the denoising computation by leveraging previously calculated results, thereby achieving a speedup with negligible performance degradation. Experimental results demonstrate that MDT-A2G excels in gesture generation, boasting a learning speed that is over 6$\times$ faster than traditional diffusion transformers and an inference speed that is 5.7$\times$ faster than the standard diffusion model. | MDT-A2G: Exploring Masked Diffusion Transformers for Co-Speech Gesture Generation | [
"Xiaofeng Mao",
"Zhengkai Jiang",
"Qilin Wang",
"Chencan Fu",
"Jiangning Zhang",
"Jiafu Wu",
"Yabiao Wang",
"Chengjie Wang",
"Wei Li",
"Mingmin Chi"
] | Conference | poster | 2408.03312 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=xBFXJVYfal | @inproceedings{
zeng2024focus,
title={Focus, Distinguish, and Prompt: Unleashing {CLIP} for Efficient and Flexible Scene Text Retrieval},
author={Gangyan Zeng and Yuan Zhang and Jin Wei and Dongbao Yang and peng zhang and Yiwen Gao and Xugong Qin and Yu Zhou},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=xBFXJVYfal}
} | Scene text retrieval aims to find all images containing the query text from an image gallery. Current efforts tend to adopt an Optical Character Recognition (OCR) pipeline, which requires complicated text detection and/or recognition processes, resulting in inefficient and inflexible retrieval. Different from them, in this work we propose to explore the intrinsic potential of Contrastive Language-Image Pre-training (CLIP) for OCR-free scene text retrieval. Through empirical analysis, we observe that the main challenges of CLIP as a text retriever are: 1) limited text perceptual scale, and 2) entangled visual-semantic concepts. To this end, a novel model termed FDP (Focus, Distinguish, and Prompt) is developed. FDP first focuses on scene text by shifting the attention to the text area and probing the hidden text knowledge, and then divides the query text into content words and function words for processing, in which a semantic-aware prompting scheme and a distracted queries assistance module are utilized. Extensive experiments show that FDP significantly enhances the inference speed while achieving better or competitive retrieval accuracy. Notably, on the IIIT-STR benchmark, FDP surpasses the state-of-the-art method by 4.37% while running 4 times faster. Furthermore, additional experiments under phrase-level and attribute-aware scene text retrieval settings validate FDP's particular advantages in handling diverse forms of query text. | Focus, Distinguish, and Prompt: Unleashing CLIP for Efficient and Flexible Scene Text Retrieval | [
"Gangyan Zeng",
"Yuan Zhang",
"Jin Wei",
"Dongbao Yang",
"peng zhang",
"Yiwen Gao",
"Xugong Qin",
"Yu Zhou"
] | Conference | poster | 2408.00441 | [
"https://github.com/gyann-z/fdp"
] | https://huggingface.co/papers/2408.00441 | 0 | 0 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=x89a3eRwiO | @inproceedings{
li2024masked,
title={Masked Random Noise for Communication-Efficient Federated Learning},
author={Shiwei Li and Yingyi Cheng and Haozhao Wang and Xing Tang and Shijie Xu and weihongluo and Yuhua Li and Dugang Liu and xiuqiang He and Ruixuan Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=x89a3eRwiO}
} | Federated learning is a promising distributed machine learning paradigm that can effectively protect data privacy. However, it may involve significant communication overhead, thereby potentially impairing training efficiency. In this paper, we aim to enhance communication efficiency from a new perspective. Specifically, we request the distributed clients to find optimal model updates relative to global model parameters within predefined random noise. For this purpose, we propose **Federated Masked Random Noise (FedMRN)**, a novel framework that enables clients to learn a 1-bit mask for each model parameter and apply masked random noise (i.e., the Hadamard product of random noise and masks) to represent model updates. To make FedMRN feasible, we propose an advanced mask training strategy, called progressive stochastic masking (*PSM*). After local training, clients only transmit local masks and a random seed to the server. Additionally, we provide theoretical guarantees for the convergence of FedMRN under both strongly convex and non-convex assumptions. Extensive experiments are conducted on four popular datasets. The results show that FedMRN exhibits superior convergence speed and test accuracy compared to relevant baselines, while attaining a similar level of accuracy as FedAvg. | Masked Random Noise for Communication-Efficient Federated Learning | [
"Shiwei Li",
"Yingyi Cheng",
"Haozhao Wang",
"Xing Tang",
"Shijie Xu",
"weihongluo",
"Yuhua Li",
"Dugang Liu",
"xiuqiang He",
"Ruixuan Li"
] | Conference | poster | [
"https://github.com/Leopold1423/fedmrn-mm24"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=x7NIbrZ42w | @inproceedings{
wang2024enhancing,
title={Enhancing Pre-trained ViTs for Downstream Task Adaptation: A Locality-Aware Prompt Learning Method},
author={Shaokun Wang and Yifan Yu and Yuhang He and Yihong Gong},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=x7NIbrZ42w}
} | Vision Transformers (ViTs) excel in extracting global information from image patches. However, their inherent limitation lies in effectively extracting information within local regions, hindering their applicability and performance. Particularly, fully supervised pre-trained ViTs, such as Vanilla ViT and CLIP, face the challenge of locality vanishing when adapting to downstream tasks. To address this, we introduce a novel LOcality-aware pRompt lEarning (LORE) method, aiming to improve the adaptation of pre-trained ViTs to downstream tasks. LORE integrates a data-driven Black Box module (i.e.,a pre-trained ViT encoder) with a knowledge-driven White Box module. The White Box module is a locality-aware prompt learning mechanism to compensate for ViTs’ deficiency in incorporating local information. More specifically, it begins with the design of a Locality Interaction Network (LIN), which treats an image as a neighbor graph and employs graph convolution operations to enhance local relationships among image patches. Subsequently, a Knowledge-Locality Attention (KLA) mechanism is proposed to capture critical local regions from images, learning Knowledge-Locality (K-L) prototypes utilizing relevant semantic knowledge. Afterwards, K-L prototypes guide the training of a Prompt Generator (PG) to generate locality-aware prompts for images. The locality-aware prompts, aggregating crucial local information, serve as additional input for our Black Box module. Combining pre-trained ViTs with
our locality-aware prompt learning mechanism, our Black-White Box model enables the capture of both global and local information, facilitating effective downstream task adaptation. Experimental evaluations across four downstream tasks demonstrate the effectiveness and superiority of our LORE. | Enhancing Pre-trained ViTs for Downstream Task Adaptation: A Locality-Aware Prompt Learning Method | [
"Shaokun Wang",
"Yifan Yu",
"Yuhang He",
"Yihong Gong"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=x79oczoigv | @inproceedings{
su2024sample,
title={Sample Efficiency Matters: Training Multimodal Conversational Recommendation Systems in a Small Data Setting},
author={Haoyang Su and Wenzhe Du and Nguyen Cam-Tu and Wang Xiaoliang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=x79oczoigv}
} | With the increasing prevalence of virtual assistants, multimodal conversational recommendation systems (multimodal CRS) become essential for boosting customer engagement, improving conversion rates, and enhancing user satisfaction. Yet conversational samples, as training data for such a system, are difficult to obtain in large quantities, particularly on new platforms. Motivated by this challenge, we aim to design innovative methods for training multimodal CRS effectively even in a small data setting. Specifically, assuming the availability of a small number of samples with dialogue states, we devise an effective dialogue state encoder to bridge the semantic gap between conversation and product representations for recommendation. To reduce the cost of dialogue state annotation, a semi-supervised learning method is developed to effectively train the dialogue state encoder with a small set of labeled conversations.
In addition, we design a correlation regularisation that leverages knowledge in the multimodal product database to better align textual and visual modalities. Experiments on the dataset MMD demonstrate the effectiveness of our method. Particularly, with only 5% of the MMD training set, our method (namely SeMANTIC) obtains better NDCG scores than those of baseline models trained on the full MMD training set. | Sample Efficiency Matters: Training Multimodal Conversational Recommendation Systems in a Small Data Setting | [
"Haoyang Su",
"Wenzhe Du",
"Nguyen Cam-Tu",
"Wang Xiaoliang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=x67JXKI3C0 | @inproceedings{
yu2024towards,
title={Towards Efficient and Diverse Generative Model for Unconditional Human Motion Synthesis},
author={Hua Yu and Weiming Liu and Jiapeng Bai and Gui Xu and Yaqing Hou and Yew-Soon Ong and Qiang Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=x67JXKI3C0}
} | Recent generative methods have revolutionized the way of human motion synthesis, such as Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and Denoising Diffusion Probabilistic Models (DMs). These methods have gained significant attention in human motion fields. However, there are still challenges in unconditionally generating highly diverse human motions from a given distribution. To enhance the diversity of synthesized human motions, previous methods usually employ deep neural networks (DNNs) to train a transport map that transforms Gaussian noise distribution into real human motion distribution. According to Figalli's regularity theory, the optimal transport map computed by DNNs frequently exhibits discontinuities. This is due to the inherent limitation of DNNs in representing only continuous maps. Consequently, the generated human motions tend to heavily concentrate on densely populated regions of the data distribution, resulting in mode collapse or mode mixture. To address the issues, we propose an efficient method called MOOT for unconditional human motion synthesis. First, we utilize a reconstruction network based on GRU and transformer to map human motions to latent space. Next, we employ convex optimization to map the noise distribution to the latent space distribution of human motions through the Optimal Transport (OT) map. Then, we combine the extended OT map with the generator of reconstruction network to generate new human motions. Thereby overcoming the issues of mode collapse and mode mixture. MOOT generates a latent code distribution that is well-behaved and highly structured, providing a strong motion prior for various applications in the field of human motion. Through qualitative and quantitative experiments, MOOT achieves state-of-the-art results surpassing the latest methods, validating its superiority in unconditional human motion generation. | Towards Efficient and Diverse Generative Model for Unconditional Human Motion Synthesis | [
"Hua Yu",
"Weiming Liu",
"Jiapeng Bai",
"Gui Xu",
"Yaqing Hou",
"Yew-Soon Ong",
"Qiang Zhang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=wzmT6H1HlO | @inproceedings{
liu2024clothaware,
title={Cloth-aware Augmentation for Cloth-generalized Person Re-identification},
author={Fangyi Liu and Mang Ye and Bo Du},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=wzmT6H1HlO}
} | Person re-identification (ReID) is crucial in video surveillance, aiming to match individuals across different camera views while cloth-changing person re-identification (CC-ReID) focuses on pedestrians changing attire. Many existing CC-ReID methods overlook generalization, crucial for universality across cloth-consistent and cloth-changing scenarios. This paper pioneers exploring the cloth-generalized person re-identification (CG-ReID) task and introduces the Cloth-aware Augmentation (CaAug) strategy. Comprising domain augmentation and feature augmentation, CaAug aims to learn identity-relevant features adaptable to both scenarios. Domain augmentation involves creating diverse fictitious domains, simulating various clothing scenarios. Supervising features from different cloth domains enhances robustness and generalization against clothing changes. Additionally, for feature augmentation, element exchange introduces diversity concerning clothing changes. Regularizing the model with these augmented features strengthens resilience against clothing change uncertainty. Extensive experiments on cloth-changing datasets demonstrate the efficacy of our approach, consistently outperforming state-of-the-art methods. Our codes will be publicly released soon. | Cloth-aware Augmentation for Cloth-generalized Person Re-identification | [
"Fangyi Liu",
"Mang Ye",
"Bo Du"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=wucw3dPxkL | @inproceedings{
cui2024advancing,
title={Advancing Prompt Learning through an External Layer},
author={Fangming Cui and Xun Yang and Chao Wu and Liang Xiao and Xinmei Tian},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=wucw3dPxkL}
} | Prompt learning represents a promising method for adapting pre-trained vision-language models (VLMs) to various downstream tasks by learning a set of text embeddings. One challenge inherent to these methods is the poor generalization performance due to the invalidity of the learned text embeddings for unseen tasks. A straightforward approach to bridge this gap is to freeze the text embeddings in prompts, which results in a lack of capacity to adapt VLMs for downstream tasks. To address this dilemma, we propose a paradigm called EnPrompt with a novel External Layer (EnLa). Specifically, we propose a textual external layer and learnable visual embeddings for adapting VLMs to downstream tasks. The learnable external layer is built upon valid embeddings of pre-trained CLIP. This design considers the balance of learning capabilities between the two branches. To align the textual and visual features, we propose a novel two-pronged approach: i) we introduce the optimal transport as the discrepancy metric to align the vision and text modalities, and ii) we introduce a novel strengthening feature to enhance the interaction between these two modalities. Four representative experiments (i.e., base-to-novel generalization, few-shot learning, cross-dataset generalization, domain shifts generalization) across 15 datasets demonstrate that our method outperforms the existing prompt learning method. | Advancing Prompt Learning through an External Layer | [
"Fangming Cui",
"Xun Yang",
"Chao Wu",
"Liang Xiao",
"Xinmei Tian"
] | Conference | poster | 2407.19674 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=wqfOsjwryj | @inproceedings{
lee2024daftgan,
title={{DAFT}-{GAN}: Dual Affine Transformation Generative Adversarial Network for Text-Guided Image Inpainting},
author={Jihoon Lee and Yunhong Min and Hwidong Kim and Sangtae Ahn},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=wqfOsjwryj}
} | In recent years, there has been a significant focus on research related to text-guided image inpainting, which holds a pivotal role in the domain of multimedia processing. This has resulted in notable enhancements in the quality and performance of the generated images. However, the task remains challenging due to several constraints, such as ensuring alignment between the generated images and the accompanying text, and maintaining consistency in distribution between corrupted and uncorrupted regions, for achieving natural and fine-grained image generation. To address these challenges, previous studies developed novel architectures, inpainting techniques, or objective functions but they still lack semantic consistency between the text and generated images. In this paper, thus, we propose a dual affine transformation generative adversarial network (DAFT-GAN) to maintain the semantic consistency for text-guided inpainting. DAFT-GAN integrates two affine transformation networks to combine text and image features gradually for each decoding block. The first affine transformation network leverages global features of the text to generate coarse results, while the second affine network utilizes attention mechanisms and spatial features of the text to refine the coarse results. By connecting the features generated from these dual paths through residual connections in the subsequent block, the model retains information at each scale while enhancing the quality of the generated image. Moreover, we minimize information leakage of uncorrupted features for fine-grained image generation by encoding corrupted and uncorrupted regions of the masked image separately. Through extensive experiments, we observe that our proposed model outperforms the existing models in both qualitative and quantitative assessments with three benchmark datasets (MS-COCO, CUB, and Oxford) for text-guided image inpainting. | DAFT-GAN: Dual Affine Transformation Generative Adversarial Network for Text-Guided Image Inpainting | [
"Jihoon Lee",
"Yunhong Min",
"Hwidong Kim",
"Sangtae Ahn"
] | Conference | poster | 2408.04962 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=wadLGs3OpB | @inproceedings{
he2024diffusion,
title={Diffusion Domain Teacher: Diffusion Guided Domain Adaptive Object Detector},
author={Boyong He and Yuxiang Ji and Zhuoyue Tan and Liaoni Wu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=wadLGs3OpB}
} | Object detectors often suffer a decrease in performance due to the large domain gap between the training data (source domain) and real-world data (target domain). Diffusion-based generative models have shown remarkable abilities in generating high-quality and diverse images, suggesting their potential for extracting valuable features from various domains. To effectively leverage the cross-domain feature representation of diffusion models, in this paper, we train a detector with a frozen-weight diffusion model on the source domain, then employ it as a teacher model to generate pseudo labels on the unlabeled target domain, which are used to guide the supervised learning of the student model on the target domain. We refer to this approach as Diffusion Domain Teacher (DDT). By employing this straightforward yet potent framework, we significantly improve cross-domain object detection performance without compromising the inference speed. Our method achieves an average mAP improvement of 21.2% compared to the baseline on 6 datasets from three common cross-domain detection benchmarks (Cross-Camera, Syn2Real, Real2Artistic), surpassing the current state-of-the-art (SOTA) methods by an average of 5.7% mAP. Furthermore, extensive experiments demonstrate that our method consistently brings improvements even in more powerful and complex models, highlighting the broadly applicable and effective domain adaptation capability of our DDT. | Diffusion Domain Teacher: Diffusion Guided Domain Adaptive Object Detector | [
"Boyong He",
"Yuxiang Ji",
"Zhuoyue Tan",
"Liaoni Wu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=wZQpCRf8En | @inproceedings{
yan2024fooling,
title={Fooling 3D Face Recognition with One Single 2D Image},
author={Shizong Yan and Shan Chang and Hongzi Zhu and Huixiang Wen and Luo Zhou},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=wZQpCRf8En}
} | 3D face recognition is subject to frequent spoofing attacks, in which 3D face presentation attack is one of the most notorious attacks. The attacker takes advantage of 3D scanning and printing techniques to generate masks of targets, which has found success in numerous real-life examples. The salient feature in such attacks is to obtain 3D face models through 3D scanning, though relatively more expensive and inconvenient when compared with 2D photos. In this work, we propose a new method, DREAM, to recover 3D face models from a single 2D image. Specifically, we adopt a black-box approach, which recovers ‘sufficient’ depths to defeat target recognition models (e.g., face identification and face authentication models) by accessing its output and the corresponding RGB photo. The key observation is that it is not necessary to restore the true value of depths, but only to recover the essential features relevant to the target model. We used four public 3D face datasets to verify the effectiveness of DREAM. The experimental results show that DREAM can achieve a success rate of 94\% on the face authentication model, even in cross-dataset testing, and a success rate of 36\% on the face identification model. | Fooling 3D Face Recognition with One Single 2D Image | [
"Shizong Yan",
"Shan Chang",
"Hongzi Zhu",
"Huixiang Wen",
"Luo Zhou"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=wYiRt6gIgc | @inproceedings{
yao2024visual,
title={Visual Grounding with Multi-modal Conditional Adaptation},
author={Ruilin Yao and Shengwu Xiong and Yichen Zhao and Yi Rong},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=wYiRt6gIgc}
} | Visual grounding is the task of locating objects specified by natural language expressions. Existing methods extend generic object detection frameworks to tackle this task. They typically extract visual and textual features separately using independent visual and textual encoders, then fuse these features in a multi-modal decoder for final prediction. However, visual grounding presents unique challenges. It often involves locating objects with different text descriptions within the same image. Existing methods struggle with this task because the independent visual encoder produces identical visual features for the same image, limiting detection performance. Some recent approaches propose various language-guided visual encoders to address this issue, but they mostly rely solely on textual information and require sophisticated designs. In this paper, we introduce Multi-modal Conditional Adaptation (MMCA), which enables the visual encoder to adaptively update weights, directing its focus towards text-relevant regions. Specifically, we first integrate information from different modalities to obtain multi-modal embeddings. Then we utilize a set of weighting coefficients, which are generated from the multimodal embeddings, to reorganize the weight update matrices and apply them to the visual encoder of the visual grounding model. Extensive experiments on four widely used datasets demonstrate that MMCA achieves significant improvements and state-of-the-art results. Ablation experiments further demonstrate the lightweight design and efficiency of our method. Our source code is
available at: https://github.com/Mr-Bigworth/MMCA. | Visual Grounding with Multi-modal Conditional Adaptation | [
"Ruilin Yao",
"Shengwu Xiong",
"Yichen Zhao",
"Yi Rong"
] | Conference | oral | 2409.04999 | [
"https://github.com/mr-bigworth/mmca"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=wXgGlFsSU6 | @inproceedings{
zeng2024towards,
title={Towards Labeling-free Fine-grained Animal Pose Estimation},
author={Dan Zeng and Yu Zhu and Shuiwang Li and Qijun Zhao and Qiaomu Shen and Bo Tang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=wXgGlFsSU6}
} | In this paper, we are interested in identifying denser and finer animal joints. The lack of standardized joint definitions across various APE datasets, e.g., AnimalPose with 20 joints, AP-10k with 17 joints, and TigDog with 19 joints, presents a significant challenge yet offers an opportunity to fully utilize annotation data. This paper tackles this new non-standardized annotation problem, aiming to learn fine-grained (e.g., 24 or more joints) pose estimators in datasets that lack complete annotations. To combat the unannotated joints, we propose FreeNet, comprising a base network and an adaptation network connected through a circuit feedback learning paradigm. FreeNet enhances the adaptation network's tolerance to unannotated joints via body part-aware learning, optimizing the sampling frequency of joints based on joint detection difficulty, and improves the base network's predictions for unannotated joints using feedback learning. This leverages the cognitive differences of the adaptation network between non-standardized labeled and large-scale unlabeled data. Experimental results on three non-standard datasets demonstrate the effectiveness of our method for fine-grained APE. | Towards Labeling-free Fine-grained Animal Pose Estimation | [
"Dan Zeng",
"Yu Zhu",
"Shuiwang Li",
"Qijun Zhao",
"Qiaomu Shen",
"Bo Tang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=wOx1VBzF8k | @inproceedings{
yan2024taskoriented,
title={Task-Oriented Multi-Bitstream Optimization for Image Compression and Transmission via Optimal Transport},
author={Sa Yan and Nuowen Kan and Chenglin Li and Wenrui Dai and Junni Zou and Hongkai Xiong},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=wOx1VBzF8k}
} | Image compression for machine vision exhibits varying rate-accuracy performance across different downstream tasks and content types.
Efficient utilization of constrained network resources for achieving optimal overall task performance has thus recently attracted growing attention. In this paper, we propose Tombo, a task-oriented image compression and transmission framework that efficiently identifies the optimal encoding bitrate and routing scheme for multiple image bitstreams delivered simultaneously for different downstream tasks. Specifically, we study the characteristics of image rate-accuracy performance for different machine vision tasks, and formulate the task-oriented joint bitrate and routing optimization problem for multi-bitstreams as a multi-commodity network flow problem with the time-expanded network modeling. To ensure consistency between the encoding bitrate and routing optimization, we also propose an augmented network that incorporates the encoding bitrate variables into the routing variables. To improve computational efficiency, we further convert the original optimization problem to a multi-marginal optimal transport problem, and adopt a Sinkhorn iteration-based algorithm to quickly obtain the near-optimal solution. Finally, we adapt Tombo to efficiently deal with the dynamic network scenario where link capacities may fluctuate over time. Empirical evaluations on three typical machine vision tasks and four real-world network topologies demonstrate that Tombo achieves a comparable performance to the optimal one solved by the off-the-shelf solver Gurobi, with a $5\times \sim 114\times$ speedup. | Task-Oriented Multi-Bitstream Optimization for Image Compression and Transmission via Optimal Transport | [
"Sa Yan",
"Nuowen Kan",
"Chenglin Li",
"Wenrui Dai",
"Junni Zou",
"Hongkai Xiong"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=wOEnVqFQLA | @inproceedings{
chen2024qground,
title={Q-Ground: Image Quality Grounding with Large Multi-modality Models},
author={Chaofeng Chen and Yang Sensen and Haoning Wu and Liang Liao and Zicheng Zhang and Annan Wang and Wenxiu Sun and Qiong Yan and Weisi Lin},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=wOEnVqFQLA}
} | Recent advances in large multi-modality models (LMM) have greatly improved the ability of image quality assessment (IQA) methods to evaluate and explain the quality of visual content.
However, these advancements are mostly focused on overall quality assessment, and the detailed examination of local quality, which is crucial for comprehensive visual understanding, is still largely unexplored.
In this work, we introduce **Q-Ground**, the first framework aimed at tackling fine-scale visual quality grounding by combining large multi-modality models with detailed visual quality analysis.
Central to our contribution is the introduction of the **QGround-100K** dataset, a novel resource containing 100k triplets of *(image, quality text, distortion segmentation)* to facilitate deep investigations into visual quality.
The dataset comprises two parts: one with human-labeled annotations for accurate quality assessment, and another labeled automatically by LMMs such as GPT4V, which helps improve the robustness of model training while also reducing the costs of data collection.
With the **QGround-100K** dataset, we propose a LMM-based method equipped with multi-scale feature learning to learn models capable of performing both image quality answering and distortion segmentation based on text prompts. This dual-capability approach not only refines the model's understanding of region-aware image quality but also enables it to interactively respond to complex, text-based queries about image quality and specific distortions.
**Q-Ground** takes a step towards sophisticated visual quality analysis in a finer scale, establishing a new benchmark for future research in the area. Codes and dataset will be made available. | Q-Ground: Image Quality Grounding with Large Multi-modality Models | [
"Chaofeng Chen",
"Yang Sensen",
"Haoning Wu",
"Liang Liao",
"Zicheng Zhang",
"Annan Wang",
"Wenxiu Sun",
"Qiong Yan",
"Weisi Lin"
] | Conference | oral | 2407.17035 | [
"https://github.com/q-future/q-ground"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=wNaXMRWPmq | @inproceedings{
hanziwang2024qmoe,
title={Q-MoE: Connector for {MLLM}s with Text-Driven Routing},
author={Hanziwang and Jiamin Ren and Yifeng Ding and Lei Ren and Huixing Jiang and Chen Wei and Fangxiang Feng and Xiaojie Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=wNaXMRWPmq}
} | Multimodal Large Language Models (MLLMs) have showcased remarkable advances in handling various vision-language tasks. These models typically consist of a Large Language Model (LLM), a vision encoder and a connector structure, which is used to bridge the modality gap between vision and language. It is challenging for the connector to filter the right visual information for the LLM according to
the task at hand. Most of the previous connectors, such as light-weight projection and Q-former, treat visual information for diverse tasks uniformly, therefore lacking task-specific visual information extraction capabilities. To address the issue, this paper proposes Q-MoE, a query-based connector with Mixture-of-Experts (MoE) to extract task-specific information with text-driven routing. Furthermore,
an optimal path based training strategy is also proposed to find an optimal expert combination. Extensive experiments on two popular open-source LLMs and several different visual-language tasks demonstrate the effectiveness of the Q-MoE connector. We will open-source our code upon publication. | Q-MoE: Connector for MLLMs with Text-Driven Routing | [
"Hanziwang",
"Jiamin Ren",
"Yifeng Ding",
"Lei Ren",
"Huixing Jiang",
"Chen Wei",
"Fangxiang Feng",
"Xiaojie Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=wKZmm6OPoy | @inproceedings{
liu2024groot,
title={{GROOT}: Generating Robust Watermark for Diffusion-Model-Based Audio Synthesis},
author={Weizhi Liu and Yue Li and Dongdong Lin and Hui Tian and Haizhou Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=wKZmm6OPoy}
} | Amid the burgeoning development of generative models like diffusion models, the task of differentiating synthesized audio from its natural counterpart grows more daunting. Deepfake detection offers a viable solution to combat this challenge. Yet, this defensive measure unintentionally fuels the continued refinement of generative models. Watermarking emerges as a proactive and sustainable tactic, preemptively regulating the creation and dissemination of synthesized content. Thus, this paper, as a pioneer, proposes the generative robust audio watermarking method (Groot), presenting a paradigm for proactively supervising the synthesized audio and its source diffusion models. In this paradigm, the processes of watermark generation and audio synthesis occur simultaneously, facilitated by parameter-fixed diffusion models equipped with a dedicated encoder. The watermark embedded within the audio can subsequently be retrieved by a lightweight decoder. The experimental results highlight Groot's outstanding performance, particularly in terms of robustness, surpassing that of the leading state-of-the-art methods. Beyond its impressive resilience against individual post-processing attacks, Groot exhibits exceptional robustness when facing compound attacks, maintaining an average watermark extraction accuracy of around 95%. | GROOT: Generating Robust Watermark for Diffusion-Model-Based Audio Synthesis | [
"Weizhi Liu",
"Yue Li",
"Dongdong Lin",
"Hui Tian",
"Haizhou Li"
] | Conference | poster | 2407.10471 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=wGxjuuCbjk | @inproceedings{
xie2024special,
title={''Special Relativity'' of Image Aesthetics Assessment: a Preliminary Empirical Perspective},
author={Rui Xie and Anlong Ming and Shuai He and Yi Xiao and Huadong Ma},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=wGxjuuCbjk}
} | Image aesthetics assessment (IAA) primarily examines image quality from a user-centric perspective and can be applied to guide various applications, including image capture, recommendation, and enhancement.
The fundamental issue in IAA revolves around the quantification of image aesthetics.
Existing methodologies rely on assigning a scalar (or a distribution) to represent aesthetic value based on conventional practices, which confines this scalar within a specific range and artificially labels it.
However, conventional methods rarely incorporate research on interpretability, particularly lacking systematic responses to the following three fundamental questions:
1) Can aesthetic qualities be quantified?
2) What is the nature of quantifying aesthetics?
3) How can aesthetics be accurately quantified?
In this paper, we present a law called "Special Relativity" of IAA (SR-IAA) that addresses the aforementioned core questions. We have developed a Multi-Attribute IAA Framework (MAINet), which serves as a preliminary validation for SR-IAA within the existing datasets and achieves state-of-the-art (SOTA) performance. Specifically, our metrics on multi-attribute assessment outperform the second-best performance by 8.06% (AADB), 1.67% (PARA), and 2.44% (SPAQ) in terms of SRCC. We anticipate that our research will offer innovative theoretical guidance to the IAA research community. Codes are available in the supplementary material. | "Special Relativity" of Image Aesthetics Assessment: a Preliminary Empirical Perspective | [
"Rui Xie",
"Anlong Ming",
"Shuai He",
"Yi Xiao",
"Huadong Ma"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=wEeW5wzBT2 | @inproceedings{
feng2024uudata,
title={U2{UD}ata: A Large-scale Cooperative Perception Dataset for Swarm {UAV}s Autonomous Flight},
author={Tongtong Feng and Xin Wang and Feilin Han and Leping Zhang and Wenwu Zhu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=wEeW5wzBT2}
} | Modern perception systems for autonomous flight are sensitive to occlusion and have limited long-range capability, which is a key bottleneck in improving low-altitude economic task performance. Recent research has shown that the UAV-to-UAV (U2U) cooperative perception system has great potential to revolutionize the autonomous flight industry. However, the lack of a large-scale dataset is hindering progress in this area. This paper presents U2UData, the first large-scale cooperative perception dataset for swarm UAVs autonomous flight. The dataset was collected by three UAVs flying autonomously in the U2USim, covering a 9 km$^2$ flight area. It comprises 315K LiDAR frames, 945K RGB and depth frames, and 2.41M annotated 3D bounding boxes for 3 classes. It also includes brightness, temperature, humidity, smoke, and airflow values covering all flight routes. U2USim is the first real-world mapping swarm UAVs simulation environment. It takes Yunnan Province as the prototype and includes 4 terrains, 7 weather conditions, and 8 sensor types. U2UData introduces two perception tasks: cooperative 3D object detection and cooperative 3D object tracking. This paper provides comprehensive benchmarks of recent cooperative perception algorithms on these tasks. | U2UData: A Large-scale Cooperative Perception Dataset for Swarm UAVs Autonomous Flight | [
"Tongtong Feng",
"Xin Wang",
"Feilin Han",
"Leping Zhang",
"Wenwu Zhu"
] | Conference | oral | 2408.00606 | [
"https://github.com/fengtt42/u2udata"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=wE3WS15pM2 | @inproceedings{
lu2024miko,
title={{MIKO}: Multimodal Intention Knowledge Distillation from Large Language Models for Social-Media Commonsense Discovery},
author={Feihong Lu and Weiqi Wang and Yangyifei Luo and Ziqin Zhu and Qingyun Sun and Baixuan Xu and Haochen Shi and Shiqi Gao and Qian Li and Yangqiu Song and Jianxin Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=wE3WS15pM2}
} | Social media has become ubiquitous for connecting with others, staying updated with news, expressing opinions, and finding entertainment. However, understanding the intention behind social media posts remains challenging due to the implicit and commonsense nature of these intentions, the need for cross-modality understanding of both text and images, and the presence of noisy information such as hashtags, misspelled words, and complicated abbreviations. To address these challenges, we present MIKO, a Multimodal Intention Knowledge DistillatiOn framework that collaboratively leverages a Large Language Model (LLM) and a Multimodal Large Language Model (MLLM) to uncover users' intentions. Specifically, our approach uses an MLLM to interpret the image, an LLM to extract key information from the text, and another LLM to generate intentions. By applying MIKO to publicly available social media datasets, we construct an intention knowledge base featuring 1,372K intentions rooted in 137,287 posts. Moreover, we conduct a two-stage annotation to verify the quality of the generated knowledge and benchmark the performance of widely used LLMs for intention generation, and further apply MIKO to a sarcasm detection dataset and distill a student model to demonstrate the downstream benefits of applying intention knowledge. | MIKO: Multimodal Intention Knowledge Distillation from Large Language Models for Social-Media Commonsense Discovery | [
"Feihong Lu",
"Weiqi Wang",
"Yangyifei Luo",
"Ziqin Zhu",
"Qingyun Sun",
"Baixuan Xu",
"Haochen Shi",
"Shiqi Gao",
"Qian Li",
"Yangqiu Song",
"Jianxin Li"
] | Conference | poster | 2402.18169 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=w6wIP4n9tl | @inproceedings{
peng2024glgait,
title={{GLG}ait: A Global-Local Temporal Receptive Field Network for Gait Recognition in the Wild},
author={Guozhen Peng and Yunhong Wang and Yuwei Zhao and Shaoxiong Zhang and Annan Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=w6wIP4n9tl}
} | Gait recognition has attracted increasing attention from academia and industry as a human recognition technology from a distance in non-intrusive ways without requiring cooperation. Although advanced methods have achieved impressive success in lab scenarios, most of them perform poorly in the wild. Recently, some Convolution Neural Networks (ConvNets) based methods have been proposed to address the issue of gait recognition in the wild. However, the temporal receptive field obtained by convolution operations is limited for long gait sequences. If directly replacing convolution blocks with visual transformer blocks, the model may not enhance a local temporal receptive field, which is important for covering a complete gait cycle. To address this issue, we design a Global-Local Temporal Receptive Field Network (GLGait). GLGait employs a Global-Local Temporal Module (GLTM) to establish a global-local temporal receptive field, which mainly consists of a Pseudo Global Temporal Self-Attention (PGTA) and a temporal convolution operation. Specifically, PGTA is used to obtain a pseudo global temporal receptive field with less memory and computation complexity compared with a multi-head self-attention (MHSA). The temporal convolution operation is used to enhance the local temporal receptive field. Besides, it can also aggregate pseudo global temporal receptive field to a true holistic temporal receptive field. Furthermore, we also propose a Center-Augmented Triplet Loss (CTL) in GLGait to reduce the intra-class distance and expand the positive samples in the training stage. Extensive experiments show that our method obtains state-of-the-art results on in-the-wild datasets, $i.e.$, Gait3D and GREW. The code is available at https://github.com/bgdpgz/GLGait. | GLGait: A Global-Local Temporal Receptive Field Network for Gait Recognition in the Wild | [
"Guozhen Peng",
"Yunhong Wang",
"Yuwei Zhao",
"Shaoxiong Zhang",
"Annan Li"
] | Conference | poster | 2408.06834 | [
"https://github.com/bgdpgz/glgait"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=w6rJxukAap | @inproceedings{
wang2024rffnet,
title={{RFFN}et: Towards Robust and Flexible Fusion for Low-Light Image Denoising},
author={Qiang Wang and Yuning Cui and Yawen Li and paulruan and zhuben and Wenqi Ren},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=w6rJxukAap}
} | Low-light environments will introduce high-intensity noise into images. Containing fine details with reduced noise, near-infrared/flash images can serve as guidance to facilitate noise removal.
However, existing fusion-based methods fail to effectively suppress artifacts caused by inconsistency between guidance/noisy image pairs and do not fully excavate the useful information contained in guidance images. In this paper, we propose a robust and flexible fusion network (RFFNet) for low-light image denoising. Specifically, we present a multi-scale inconsistency calibration module to address inconsistency before fusion by first mapping the guidance features to multi-scale spaces and calibrating them with the aid of pre-denoising features in a coarse-to-fine manner. Furthermore, we develop a dual-domain adaptive fusion module to adaptively extract useful high-/low-frequency signals from the guidance features and then highlight the informative frequencies.
Extensive experimental results demonstrate that our method achieves state-of-the-art performance on NIR-guided RGB image denoising and flash-guided no-flash image denoising. | RFFNet: Towards Robust and Flexible Fusion for Low-Light Image Denoising | [
"Qiang Wang",
"Yuning Cui",
"Yawen Li",
"paulruan",
"zhuben",
"Wenqi Ren"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=w24GYMUtN5 | @inproceedings{
niu2024neural,
title={Neural Boneprint: Person Identification from Bones using Generative Contrastive Deep Learning},
author={Chaoqun Niu and Dongdong Chen and Ji-Zhe Zhou and Jian Wang and Xiang Luo and Quan-Hui Liu and YUAN LI and Jiancheng Lv},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=w24GYMUtN5}
} | Forensic person identification is of paramount importance in accidents and criminal investigations. Existing methods based on soft tissue or DNA can be unavailable if the body is badly decomposed, white-ossified, or charred. However, bones last a long time. This raises a natural question: ***can we learn to identify a person using bone data?***
We present a novel feature of bones called ***Neural Boneprint*** for personal identification. In particular, we exploit the thoracic skeletal data including chest radiographs (CXRs) and computed tomography (CT) images enhanced by the volume rendering technique (VRT) as an example to explore the availability of the neural boneprint. We then represent the neural boneprint as a joint latent embedding of VRT images and CXRs through a bidirectional cross-modality translation and contrastive learning. Preliminary experimental results on real skeletal data demonstrate the effectiveness of the Neural Boneprint for identification. We hope that this approach will provide a promising alternative for challenging forensic cases where conventional methods are limited. The code will be available at ***. | Neural Boneprint: Person Identification from Bones using Generative Contrastive Deep Learning | [
"Chaoqun Niu",
"Dongdong Chen",
"Ji-Zhe Zhou",
"Jian Wang",
"Xiang Luo",
"Quan-Hui Liu",
"YUAN LI",
"Jiancheng Lv"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=w1fvy1Bhqj | @inproceedings{
zhong2024promote,
title={{PROMOTE}: Prior-Guided Diffusion Model with Global-Local Contrastive Learning for Exemplar-Based Image Translation},
author={Guojin Zhong and YIHU GUO and Jin Yuan and Qianjun Zhang and Weili Guan and Long Chen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=w1fvy1Bhqj}
} | Exemplar-based image translation has garnered significant interest from researchers due to its broad applications in multimedia/multimodal processing. Existing methods primarily employ Euclidean-based losses to implicitly establish cross-domain correspondences between exemplar and conditional images, aiming to produce high-fidelity images. However, these methods often suffer from two challenges: 1) Insufficient excavation of domain-invariant features leads to low-quality cross-domain correspondences, and 2) Inaccurate correspondences result in errors propagated during the translation process due to a lack of reliable prior guidance. To tackle these issues, we propose a novel prior-guided diffusion model with global-local contrastive learning (PROMOTE), which is trained in a self-supervised manner. Technically, global-local contrastive learning is designed to align two cross-domain images within hyperbolic space and reduce the gap between their semantic correlation distributions using the Fisher-Rao metric, allowing the visual encoders to extract domain-invariant features more effectively. Moreover, a prior-guided diffusion model is developed that propagates the structural prior to all timesteps in the diffusion process. It is optimized by a novel prior denoising loss, mathematically derived from the transitions modified by prior information in a self-supervised manner, successfully alleviating the impact of inaccurate correspondences on image translation. Extensive experiments conducted across seven datasets demonstrate that our proposed PROMOTE significantly exceeds state-of-the-art performance in diverse exemplar-based image translation tasks. | PROMOTE: Prior-Guided Diffusion Model with Global-Local Contrastive Learning for Exemplar-Based Image Translation | [
"Guojin Zhong",
"YIHU GUO",
"Jin Yuan",
"Qianjun Zhang",
"Weili Guan",
"Long Chen"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=vwqjDXmFss | @inproceedings{
gao2024fact,
title={Fact: Teaching {MLLM}s with Faithful, Concise and Transferable Rationales},
author={Minghe Gao and Shuang Chen and Liang Pang and Yuan Yao and Jisheng Dang and Wenqiao Zhang and Juncheng Li and Siliang Tang and Yueting Zhuang and Tat-Seng Chua},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=vwqjDXmFss}
} | The remarkable performance of Multimodal Large Language Models (MLLMs) has unequivocally demonstrated their proficient understanding capabilities in handling a wide array of visual tasks. Nevertheless, the opaque nature of their black-box reasoning processes persists as an enigma, rendering them uninterpretable and struggling with hallucination. Their ability to execute intricate compositional reasoning tasks is also constrained, culminating in a stagnation of learning progression for these models. In this work, we introduce Fact, a novel paradigm designed to generate multimodal rationales that are faithful, concise, and transferable for teaching MLLMs. This paradigm utilizes verifiable visual programming to generate executable code guaranteeing faithfulness and precision.
Subsequently, through a series of operations including pruning, merging, and bridging, the rationale enhances its conciseness.
Furthermore, we filter rationales that can be transferred to end-to-end paradigms from programming paradigms to guarantee transferability. Empirical evidence from experiments demonstrates the superiority of our method across models of varying parameter sizes, significantly enhancing their compositional reasoning and generalization ability. Our approach also reduces hallucinations owing to its high correlation between images and text. The anonymous project is available at: https://anonymous.4open.science/r/Fact_program-216D/ | Fact: Teaching MLLMs with Faithful, Concise and Transferable Rationales | [
"Minghe Gao",
"Shuang Chen",
"Liang Pang",
"Yuan Yao",
"Jisheng Dang",
"Wenqiao Zhang",
"Juncheng Li",
"Siliang Tang",
"Yueting Zhuang",
"Tat-Seng Chua"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=vw0uV99zWo | @inproceedings{
yin2024exploring,
title={Exploring Data Efficiency in Image Restoration: A Gaussian Denoising Case Study},
author={Zhengwei Yin and Mingze MA and Guixu Lin and Yinqiang Zheng},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=vw0uV99zWo}
} | Amidst the prevailing trend of escalating demands for data and computational resources, the efficiency of data utilization emerges as a critical lever for enhancing the performance of deep learning models, especially in the realm of image restoration tasks. This investigation delves into the intricacies of data efficiency in the context of image restoration, with Gaussian image denoising serving as a case study. We postulate a strong correlation between the model's performance and the content information encapsulated in the training images. This hypothesis is rigorously tested through experiments conducted on synthetically blurred datasets. Building on this premise, we delve into the data efficiency within training datasets and introduce an effective and stabilized method for quantifying content information, thereby enabling the ranking of training images based on their influence. Our in-depth analysis sheds light on the impact of various subset selection strategies, informed by this ranking, on model performance. Furthermore, we examine the transferability of these efficient subsets across disparate network architectures. The findings underscore the potential to achieve comparable, if not superior, performance with a fraction of the data—highlighting instances where training IRCNN and Restormer models with only 3.89% and 2.30% of the data resulted in a negligible drop and, in some cases, a slight improvement in PSNR. This investigation offers valuable insights and methodologies to address data efficiency challenges in Gaussian denoising. Similarly, our method yields comparable conclusions in other restoration tasks. We believe this will be beneficial for future research. Codes will be available at [URL]. | Exploring Data Efficiency in Image Restoration: A Gaussian Denoising Case Study | [
"Zhengwei Yin",
"Mingze MA",
"Guixu Lin",
"Yinqiang Zheng"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=vuVNpareuf | @inproceedings{
zhai2024anfluid,
title={{ANF}luid: Animate Natural Fluid Photos base on Physics-Aware Simulation and Dual-Flow Texture Learning},
author={Xiangcheng Zhai and Yingqi Jie and Xueguang Xie and Aimin Hao and Na Jiang and Yang Gao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=vuVNpareuf}
} | Generating photorealistic animations from a single still photo represents a significant advancement in multimedia editing and artistic creation. While existing AIGC methods have reached milestone successes, they often struggle with maintaining consistency with real-world physical laws, particularly in fluid dynamics. To address this issue, this paper introduces ANFluid, a physics solver and data-driven coupled framework that combines physics-aware simulation (PAS) and dual-flow texture learning (DFTL) to animate natural fluid photos effectively. The PAS component of ANFluid ensures that motion guides adhere to physical laws, and can be automatically tailored with specific numerical solver to meet the diversities of different fluid scenes. Concurrently, DFTL focuses on enhancing texture prediction. It employs bidirectional self-supervised optical flow estimation and multi-scale wrapping to strengthen dynamic relationships and elevate the overall animation quality. Notably, despite being built on a transformer architecture, the innovative encoder-decoder design in DFTL does not increase the parameter count but rather enhances inference efficiency. Extensive quantitative experiments have shown that our ANFluid surpasses most current methods on the Holynski and CLAW datasets. User studies further confirm that animations produced by ANFluid maintain better physical and content consistency with the real world and the original input, respectively. Moreover, ANFluid supports interactive editing during the simulation process, enriching the animation content and broadening its application potential. | ANFluid: Animate Natural Fluid Photos base on Physics-Aware Simulation and Dual-Flow Texture Learning | [
"Xiangcheng Zhai",
"Yingqi Jie",
"Xueguang Xie",
"Aimin Hao",
"Na Jiang",
"Yang Gao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=vtpwJob0L1 | @inproceedings{
li2024diversified,
title={Diversified Semantic Distribution Matching for Dataset Distillation},
author={Hongcheng Li and Yucan Zhou and Xiaoyan Gu and Bo Li and Weiping Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=vtpwJob0L1}
} | Dataset distillation, also known as dataset condensation, offers a possibility for compressing a large-scale dataset into a small-scale one (i.e., distilled dataset) while achieving similar performance during model training. This method effectively tackles the challenges of training efficiency and storage cost posed by the large-scale dataset. Existing dataset distillation methods can be categorized into Optimization-Oriented (OO)-based and Distribution-Matching (DM)-based methods. Since OO-based methods require bi-level optimization to alternately optimize the model and the distilled data, they face challenges due to high computational overhead in practical applications. Thus, DM-based methods have emerged as an alternative by aligning the prototypes of the distilled data to those of the original data. Although efficient, these methods overlook the diversity of the distilled data, which will limit the performance of evaluation tasks. In this paper, we propose a novel Diversified Semantic Distribution Matching (DSDM) approach for dataset distillation. To accurately capture semantic features, we first pre-train models for dataset distillation. Subsequently, we estimate the distribution of each category by calculating its prototype and covariance matrix, where the covariance matrix indicates the direction of semantic feature transformations for each category. Then, in addition to the prototypes, the covariance matrices are also matched to obtain more diversity for the distilled data. However, since the distilled data are optimized by multiple pre-trained models, the training process will fluctuate severely. Therefore, we match the distilled data of the current pre-trained model with the historical integrated prototypes. Experimental results demonstrate that our DSDM achieves state-of-the-art results on both image and speech datasets. Codes will be released soon. | Diversified Semantic Distribution Matching for Dataset Distillation | [
"Hongcheng Li",
"Yucan Zhou",
"Xiaoyan Gu",
"Bo Li",
"Weiping Wang"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=vscynaeAJn | @inproceedings{
zhang2024narrowing,
title={Narrowing the Gap between Vision and Action in Navigation},
author={Yue Zhang and Parisa Kordjamshidi},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=vscynaeAJn}
} | The existing methods for Vision and Language Navigation in the Continuous Environment (VLN-CE) commonly incorporate a waypoint predictor to discretize the environment. This simplifies the navigation actions into a view selection task and improves navigation performance significantly compared to direct training using low-level actions.
However, the VLN-CE agents are still far from real robots since there are gaps between their visual perception and executed actions.
First, VLN-CE agents that discretize the visual environment are primarily trained with high-level view selection, which causes them to ignore crucial spatial reasoning within the low-level action movements. Second, in these models, the existing waypoint predictors neglect object semantics and their attributes related to passability, which can be informative in indicating the feasibility of actions.
To address these two issues, we introduce a low-level action decoder jointly trained with high-level action prediction, enabling the current VLN agent to learn and ground the selected visual view to the low-level controls. Moreover, we enhance the current waypoint predictor by utilizing visual representations containing rich semantic information and explicitly masking obstacles based on humans' prior knowledge about the feasibility of actions. Empirically, our agent can improve navigation performance metrics compared to the strong baselines on both high-level and low-level actions. | Narrowing the Gap between Vision and Action in Navigation | [
"Yue Zhang",
"Parisa Kordjamshidi"
] | Conference | poster | 2408.10388 | [
"https://github.com/HLR/Dual-Action-VLN-CE"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=vngElHOj2N | @inproceedings{
yu2024zeroshot,
title={Zero-Shot Controllable Image-to-Video Animation via Motion Decomposition},
author={Shoubin Yu and Jacob Zhiyuan Fang and Jian Zheng and Gunnar A Sigurdsson and Vicente Ordonez and Robinson Piramuthu and Mohit Bansal},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=vngElHOj2N}
} | In this paper, we introduce a new challenging task called Zero-shot Controllable Image-to-Video Animation, where the goal is to animate an image based on motion trajectories defined by the user, without fine-tuning the base model. Primary challenges include maintaining consistency of background, consistency of object in motion, faithfulness to the user-defined trajectory, and quality of motion animation. We also introduce a novel approach for this task, leveraging diffusion models called IMG2VIDANIM-ZERO (IVA0). IVA0 tackles our controllable Image-to-Video (I2V) task by decomposing it into two subtasks: ‘out-of-place’ and ‘in-place’ motion animation. Due to this decomposition, IVA0 can leverage existing work on layout-conditioned image generation for out-of-place motion generation, and existing text-conditioned video generation methods for in-place motion animation, thus facilitating zero-shot generation. Our model also addresses key challenges for controllable animation, such as Layout Conditioning via Spatio-Temporal Masking to incorporate user guidance and Motion Afterimage Suppression (MAS) scheme to reduce object ghosting during out-of-place animation. Finally, we design a novel controllable I2V benchmark featuring diverse local- and global-level metrics. Results show IVA0 as a new state-of-the-art, establishing a new standard for the zero-shot controllable I2V task. Our method highlights the simplicity and effectiveness of task decomposition and modularization for this novel task for future studies. | Zero-Shot Controllable Image-to-Video Animation via Motion Decomposition | [
"Shoubin Yu",
"Jacob Zhiyuan Fang",
"Jian Zheng",
"Gunnar A Sigurdsson",
"Vicente Ordonez",
"Robinson Piramuthu",
"Mohit Bansal"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=vnK9FnrCME | @inproceedings{
li2024loopgaussian,
title={LoopGaussian: Creating 3D Cinemagraph with Multi-view Images via Eulerian Motion Field},
author={Jiyang Li and Lechao Cheng and Zhangye Wang and Tingting Mu and Jingxuan He},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=vnK9FnrCME}
} | Cinemagraph is a unique form of visual media that combines elements of still photography and subtle motion to create a captivating experience. However, the majority of videos generated by recent works lack depth information and are confined to the constraints of 2D image space. In this paper, inspired by significant progress in the field of novel view synthesis (NVS) achieved by 3D Gaussian Splatting (3D-GS), we propose \textbf{\textit{LoopGaussian}} to elevate cinemagraph from 2D image space to 3D space using 3D Gaussian modeling. To achieve this, we first employ the 3D-GS method to reconstruct 3D Gaussian point clouds from multi-view images of static scenes, incorporating shape regularization terms to prevent blurring or artifacts caused by object deformation. We then adopt an autoencoder tailored for 3D Gaussian to project it into feature space. To maintain the local continuity of the scene, we devise SuperGaussian for clustering based on the acquired features. By calculating the similarity between clusters and employing a two-stage estimation method, we derive an Eulerian motion field to describe velocities across the entire scene. The 3D Gaussian points then move within the estimated Eulerian motion field. Through bidirectional animation techniques, we ultimately generate a 3D Cinemagraph that exhibits natural and seamlessly loopable dynamics. Experiment results validate the effectiveness of our approach, demonstrating high-quality and visually appealing scene generation. | LoopGaussian: Creating 3D Cinemagraph with Multi-view Images via Eulerian Motion Field | [
"Jiyang Li",
"Lechao Cheng",
"Zhangye Wang",
"Tingting Mu",
"Jingxuan He"
] | Conference | oral | 2404.08966 | [
"https://github.com/Pokerlishao/LoopGaussian"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=vmwFR2o8Ja | @inproceedings{
wu2024bridging,
title={Bridging Visual Affective Gap: Borrowing Textual Knowledge by Learning from Noisy Image-Text Pairs},
author={Daiqing Wu and Dongbao Yang and Yu Zhou and Can Ma},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=vmwFR2o8Ja}
} | Visual emotion recognition (VER) is a longstanding field that has garnered increasing attention with the advancement of deep neural networks. Although recent studies have achieved notable improvements by leveraging the knowledge embedded within pre-trained visual models, the lack of direct association between factual-level features and emotional categories, called the ''affective gap'', limits the applicability of pre-training knowledge for VER tasks. On the contrary, the explicit emotional expression and high information density in textual modality eliminate the ''affective gap''. Therefore, we propose borrowing the knowledge from the pre-trained textual model to enhance the emotional perception of pre-trained visual models. We focus on the factual and emotional connections between images and texts in noisy social media data, and propose Partitioned Adaptive Contrastive Learning (PACL) to leverage these connections. Specifically, we manage to separate different types of samples and devise distinct contrastive learning strategies for each type. By dynamically constructing negative and positive pairs, we fully exploit the potential of noisy samples. Through comprehensive experiments, we demonstrate that bridging the ''affective gap'' significantly improves the performance of various pre-trained visual models in downstream emotion-related tasks. | Bridging Visual Affective Gap: Borrowing Textual Knowledge by Learning from Noisy Image-Text Pairs | [
"Daiqing Wu",
"Dongbao Yang",
"Yu Zhou",
"Can Ma"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=vite2JNeiD | @inproceedings{
zeng2024hicescore,
title={{HICES}core: A Hierarchical Metric for Image Captioning Evaluation},
author={Zequn Zeng and Jianqiao Sun and Hao Zhang and Tiansheng Wen and Yudi Su and Yan Xie and Zhengjue Wang and Bo Chen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=vite2JNeiD}
} | Image captioning evaluation metrics can be divided into two categories: reference-based metrics and reference-free metrics. However, reference-based approaches may struggle to evaluate descriptive captions with abundant visual details produced by advanced multimodal large language models, due to their heavy reliance on limited human-annotated references. In contrast, previous reference-free metrics have been proven effective via CLIP cross-modality similarity. Nonetheless, CLIP-based metrics, constrained by their solution of global image-text compatibility, often have a deficiency in detecting local textual hallucinations and are insensitive to small visual objects. Besides, their single-scale designs are unable to provide an interpretable evaluation process such as pinpointing the position of caption mistakes and identifying visual regions that have not been described. To move forward, we propose a novel reference-free metric for image captioning evaluation, dubbed Hierarchical Image Captioning Evaluation Score (HICE-S). By detecting local visual regions and textual phrases, HICE-S builds an interpretable hierarchical scoring mechanism, breaking through the barriers of the single-scale structure of existing reference-free metrics. Comprehensive experiments indicate that our proposed metric achieves the SOTA performance on several benchmarks, outperforming existing reference-free metrics like CLIP-S and PAC-S, and reference-based metrics like METEOR and CIDEr. Moreover, several case studies reveal that the assessment process of HICE-S on detailed texts closely resembles interpretable human judgments. The code is available in the supplementary. | HICEScore: A Hierarchical Metric for Image Captioning Evaluation | [
"Zequn Zeng",
"Jianqiao Sun",
"Hao Zhang",
"Tiansheng Wen",
"Yudi Su",
"Yan Xie",
"Zhengjue Wang",
"Bo Chen"
] | Conference | poster | 2407.18589 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=vei7UOwWIz | @inproceedings{
feng2024clipcleaner,
title={{CLIPC}leaner: Cleaning Noisy Labels with {CLIP}},
author={Chen Feng and Georgios Tzimiropoulos and Ioannis Patras},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=vei7UOwWIz}
} | Learning with Noisy Labels (LNL) poses a significant challenge for the Machine Learning community. Some of the most widely used approaches, which select as clean those samples for which the model itself (the in-training model) has high confidence, e.g., 'small loss', can suffer from the so-called 'self-confirmation' bias. This bias arises because the in-training model is at least partially trained on the noisy labels. Furthermore, in the classification case, an additional challenge arises because some of the label noise is between classes that are visually very similar ('hard noise').
This paper addresses these challenges by proposing a method (*CLIPCleaner*) that leverages CLIP, a powerful Vision-Language (VL) model for constructing a zero-shot classifier for efficient, offline, clean sample selection.
This has the advantage that the sample selection is decoupled from the in-training model and is aware of the semantic and visual similarities between the classes, owing to the way CLIP is trained. We provide theoretical justifications and empirical evidence to demonstrate the advantages of CLIP for LNL compared to conventional pre-trained models.
Compared to current methods that combine iterative sample selection with various techniques, *CLIPCleaner* offers a simple, single-step approach that achieves competitive or superior performance on benchmark datasets. To the best of our knowledge, this is the first time a VL model has been used for sample selection to address the problem of Learning with Noisy Labels (LNL), highlighting their potential in the domain. | CLIPCleaner: Cleaning Noisy Labels with CLIP | [
"Chen Feng",
"Georgios Tzimiropoulos",
"Ioannis Patras"
] | Conference | poster | 2408.10012 | [
"https://github.com/mrchenfeng/clipcleaner_acmmm2024"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=veCRQAhFN7 | @inproceedings{
zhao2024guidednet,
title={GuidedNet: Semi-Supervised Multi-Organ Segmentation via Labeled Data Guide Unlabeled Data},
author={Haochen Zhao and Hui Meng and Deqian Yang and Xiexiao zheng and Xiaoze Wu and Qingfeng Li and Jianwei Niu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=veCRQAhFN7}
} | Semi-supervised multi-organ medical image segmentation aids physicians in improving disease diagnosis and treatment planning and reduces the time and effort required for organ annotation. Existing state-of-the-art methods train the labeled data with ground truths and train the unlabeled data with pseudo-labels. However, the two training flows are separate, which does not reflect the interrelationship between labeled and unlabeled data. To address this issue, we propose a semi-supervised multi-organ segmentation method called GuidedNet, which leverages the knowledge from labeled data to guide the training of unlabeled data. The primary goals of this study are to improve the quality of pseudo-labels for unlabeled data and to enhance the network's learning capability for both small and complex organs. A key concept is that voxel features from labeled and unlabeled data that are close to each other in the feature space are more likely to belong to the same class. On this basis, a 3D Consistent Gaussian Mixture Model (3D-CGMM) is designed to leverage the feature distributions from labeled data to rectify the generated pseudo-labels. Furthermore, we introduce a Knowledge Transfer Cross Pseudo Supervision (KT-CPS) strategy, which leverages the prior knowledge obtained from the labeled data to guide the training of the unlabeled data, thereby improving the segmentation accuracy for both small and complex organs. Extensive experiments on two public datasets, FLARE22 and AMOS, demonstrated that GuidedNet is capable of achieving state-of-the-art performance. | GuidedNet: Semi-Supervised Multi-Organ Segmentation via Labeled Data Guide Unlabeled Data | [
"Haochen Zhao",
"Hui Meng",
"Deqian Yang",
"Xiexiao zheng",
"Xiaoze Wu",
"Qingfeng Li",
"Jianwei Niu"
] | Conference | poster | 2408.04914 | [
"https://github.com/kimjisoo12/guidednet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=vaarOxGEU8 | @inproceedings{
wang2024mfrgn,
title={{MFRGN}: Multi-scale Feature Representation Generalization Network For Ground-to-Aerial Geo-localization},
author={Yuntao Wang and Jinpu Zhang and Ruonan Wei and Wenbo Gao and Yuehuan Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=vaarOxGEU8}
} | Cross-area evaluation poses a significant challenge for ground-to-aerial geo-localization (G2AGL), in which the training and testing data are captured from entirely distinct areas.
However, current methods struggle in cross-area evaluation due to their emphasis solely on learning global information from single-scale features. Some efforts alleviate this problem but rely on complex and specific technologies like pre-processing and hard sample mining. To this end, we propose a pure end-to-end solution, free from task-specific techniques, termed the Multi-scale Feature Representation Generalization Network (MFRGN), to improve generalization. Specifically, we introduce multi-scale features and explicitly utilize them for G2AGL. Furthermore, we devise an efficient global-local information module with two flows to bolster feature representations. In the global flow, we present a lightweight Self and Cross Attention Module (SCAM) to efficiently learn global embeddings. In the local flow, we develop a Global-Prompt Attention Block (GPAB) to capture discriminative features under the global embeddings as prompts.
As a result, our approach generates robust descriptors representing multi-scale global and local information, thereby enhancing the model's invariance to scene variations.
Extensive experiments on benchmarks show our MFRGN achieves competitive performance in same-area evaluation and improves cross-area generalization by a significant margin compared to SOTA methods. | MFRGN: Multi-scale Feature Representation Generalization Network For Ground-to-Aerial Geo-localization | [
"Yuntao Wang",
"Jinpu Zhang",
"Ruonan Wei",
"Wenbo Gao",
"Yuehuan Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=vW5070FoXi | @inproceedings{
chan2024point,
title={Point Cloud Densification for 3D Gaussian Splatting from Sparse Input Views},
author={Kin-Chung Chan and Jun Xiao and Hana Lebeta Goshu and Kin-man Lam},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=vW5070FoXi}
} | The technique of 3D Gaussian splatting (3DGS) has demonstrated its effectiveness and efficiency in rendering photo-realistic images for novel view synthesis. However, 3DGS requires a high density of camera coverage, and its performance inevitably degrades with sparse training views, which significantly restricts its applications in real-world products. In recent years, many researchers have tried to use depth information to alleviate this problem, but the performance of their methods is sensitive to the accuracy of depth estimation. To this end, we propose an efficient method to enhance the performance of 3DGS with sparse training views. Specifically, instead of applying depth maps for regularization, we propose a densification method that generates high-quality point clouds for improved initialization of 3D Gaussians. Furthermore, we propose Systematically Angle of View Sampling (SAOVS), which employs Spherical Linear Interpolation (SLERP) and linear interpolation for side view sampling, to determine unseen views outside the training data for semantic pseudo-label regularization. Experiments show that our proposed method significantly outperforms other promising 3D rendering models on the ScanNet dataset and the LLFF dataset. In particular, compared with the conventional 3DGS method, the PSNR and SSIM performance gains achieved by our method are up to 1.71dB and 0.07, respectively. In addition, the novel view synthesis obtained by our method demonstrates the highest visual quality with fewer distortions. | Point Cloud Densification for 3D Gaussian Splatting from Sparse Input Views | [
"Kin-Chung Chan",
"Jun Xiao",
"Hana Lebeta Goshu",
"Kin-man Lam"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=vKH5hlAQs9 | @inproceedings{
huang2024deep,
title={Deep Instruction Tuning for Segment Anything Model},
author={Xiaorui Huang and Gen Luo and Chaoyang Zhu and Bo Tong and Yiyi Zhou and Xiaoshuai Sun and Rongrong Ji},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=vKH5hlAQs9}
} | Recently, Segment Anything Model (SAM) has become a research hotspot in the fields of multimedia and computer vision, which exhibits powerful yet versatile capabilities on various (un) conditional image segmentation tasks. Although SAM can support different types of segmentation prompts, we note that, compared to point- and box-guided segmentations, it performs much worse on text-instructed tasks, e.g., referring image segmentation (RIS). In this paper, we argue that deep text instruction tuning is key to mitigate such shortcoming caused by the shallow fusion scheme in its default light-weight mask decoder. To address this issue, we propose two simple yet effective deep instruction tuning (DIT) methods for SAM, one is end-to-end and the other is layer-wise. With minimal modifications, DITs can directly transform the image encoder of SAM as a stand-alone vision-language learner in contrast to building another deep fusion branch, maximizing the benefit of its superior segmentation capability. Extensive experiments on three highly competitive benchmark datasets of RIS show that a simple end-to-end DIT can improve SAM by a large margin, while the layer-wise DIT can further boost the performance to state-of-the-art with much less data and training expenditures. Our code is released at: https://github.com/wysnzzzz/DIT. | Deep Instruction Tuning for Segment Anything Model | [
"Xiaorui Huang",
"Gen Luo",
"Chaoyang Zhu",
"Bo Tong",
"Yiyi Zhou",
"Xiaoshuai Sun",
"Rongrong Ji"
] | Conference | poster | 2404.00650 | [
"https://github.com/wysnzzzz/dit"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=vKGqzxqNM9 | @inproceedings{
wu2024coarsetofine,
title={Coarse-to-Fine Proposal Refinement Framework For Audio Temporal Forgery Detection and Localization},
author={Junyan Wu and Wei Lu and Xiangyang Luo and Rui Yang and Qian Wang and Xiaochun Cao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=vKGqzxqNM9}
} | Recently, a novel form of audio partial forgery has posed challenges to its forensics, requiring advanced countermeasures to detect subtle forgery manipulations within long-duration audio. However, existing countermeasures still serve a classification purpose and fail to perform meaningful analysis of the start and end timestamps of partial forgery segments. To address this challenge, we introduce a novel coarse-to-fine proposal refinement framework (CFPRF) that incorporates a frame-level detection network (FDN) and a proposal refinement network (PRN) for audio temporal forgery detection and localization. Specifically, the FDN aims to mine informative inconsistency cues between real and fake frames to obtain discriminative features that are beneficial for roughly indicating forgery regions. The PRN is responsible for predicting confidence scores and regression offsets to refine the coarse-grained proposals derived from the FDN. To learn robust discriminative features, we devise a difference-aware feature learning (DAFL) module guided by contrastive representation learning to enlarge the sensitive differences between different frames induced by minor manipulations. We further design a boundary-aware feature enhancement (BAFE) module to capture the contextual information of multiple transition boundaries and guide the interaction between boundary information and temporal features via a cross-attention mechanism. Extensive experiments show that our CFPRF achieves state-of-the-art performance on various datasets, including LAV-DF, ASVS2019PS, and HAD. | Coarse-to-Fine Proposal Refinement Framework For Audio Temporal Forgery Detection and Localization | [
"Junyan Wu",
"Wei Lu",
"Xiangyang Luo",
"Rui Yang",
"Qian Wang",
"Xiaochun Cao"
] | Conference | oral | 2407.16554 | [
"https://github.com/itzjuny/cfprf"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=vJbyT9bYgf | @inproceedings{
wu2024qsnerv,
title={{QS}-Ne{RV}: Real-Time Quality-Scalable Decoding with Neural Representation for Videos},
author={Chang Wu and Guancheng Quan and Gang He and Xin-Quan Lai and Yunsong Li and Wenxin Yu and Xianmeng Lin and Cheng Yang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=vJbyT9bYgf}
} | In this paper, we propose a neural representation for videos that enables real-time quality-scalable decoding, called QS-NeRV. QS-NeRV comprises a Self-Learning Distribution Mapping Network (SDMN) and Extensible Enhancement Networks (EENs). Firstly, SDMN functions as the base layer (BL) for scalable video coding, focusing on encoding videos of lower quality. Within SDMN, we employ a methodology that minimizes the bitstream overhead to achieve efficient information exchange between the encoder and decoder instead of direct transmission. Specifically, we utilize an invertible network to map the multi-scale information obtained from the encoder to a specific distribution. Subsequently, during the decoding process, this information is recovered from a randomly sampled latent variable to assist the decoder in achieving improved reconstruction performance. Secondly, EENs serve as the enhancement layers (ELs) and are trained in an overfitting manner to obtain robust restoration capability. By integrating the fixed BL bitstream with the parameters of EEN as an extension pack, the decoder can produce higher-quality enhanced videos. Furthermore, the scalability of the method allows for adjusting the number of combined packs to accommodate diverse quality requirements. Experimental results demonstrate our proposed QS-NeRV outperforms the state-of-the-art real-time decoding INR-based methods on various datasets for video compression and interpolation tasks. | QS-NeRV: Real-Time Quality-Scalable Decoding with Neural Representation for Videos | [
"Chang Wu",
"Guancheng Quan",
"Gang He",
"Xin-Quan Lai",
"Yunsong Li",
"Wenxin Yu",
"Xianmeng Lin",
"Cheng Yang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=v5qqd214xq | @inproceedings{
wang2024cieasrcontextual,
title={{CIEASR}:Contextual Image-Enhanced Automatic Speech Recognition for Improved Homophone Discrimination},
author={Ziyi Wang and Yiming Rong and Deyang Jiang and Haoran Wu and Shiyu Zhou and Bo XU},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=v5qqd214xq}
} | Automatic Speech Recognition (ASR) models pre-trained on large-scale speech datasets have achieved significant breakthroughs compared with traditional methods.
However, mainstream pre-trained ASR models encounter challenges in distinguishing homophones, which have close or identical pronunciations.
Previous studies have introduced visual auxiliary cues to address this challenge, yet the sophisticated use of lip movements falls short in correcting homophone errors.
On the other hand, the fusion and utilization of scene images remain in an exploratory stage, with performance still inferior to the pre-trained speech model.
In this paper, we introduce Contextual Image-Enhanced Automatic Speech Recognition (CIEASR), a novel multimodal speech recognition model that incorporates a new cue fusion method, using scene images as soft prompts to correct homophone errors.
To mitigate data scarcity, we refine and expand the VSDial dataset for extensive experiments, illustrating that scene images contribute to the accurate recognition of entity nouns and personal pronouns.
Our proposed CIEASR achieves state-of-the-art results on VSDial and Flickr8K, significantly reducing the Character Error Rate (CER) on VSDial from 3.61\% to 0.92\%. | CIEASR:Contextual Image-Enhanced Automatic Speech Recognition for Improved Homophone Discrimination | [
"Ziyi Wang",
"Yiming Rong",
"Deyang Jiang",
"Haoran Wu",
"Shiyu Zhou",
"Bo XU"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=v4zZZO0GCg | @inproceedings{
ju2024ecfcon,
title={{ECFCON}: Emotion Consequence Forecasting in Conversations},
author={Xincheng Ju and Dong Zhang and Suyang Zhu and Junhui Li and Shoushan Li and Guodong Zhou},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=v4zZZO0GCg}
} | Conversation is a common form of human communication that includes extensive emotional interaction.
Traditional approaches focused on studying emotions and their underlying causes in conversations.
They try to address two issues: what emotions are present in the dialogue and what causes these emotions.
However, these works often overlook the bidirectional nature of emotional interaction in dialogue:
utterances can evoke emotions (cause), and emotions can also lead to certain utterances (consequence).
Therefore, we raise a new question: what consequences arise from these emotions?
This leads to the introduction of a new task called Emotion Consequence Forecasting in CONversations (ECFCON).
In this work, we first propose a corresponding dialogue-level dataset.
Specifically, we select 2,780 video dialogues for annotation, totaling 39,950 utterances. Out of these, 12,391 utterances contain emotions, and 8,810 of these have discernible consequences.
Then, we benchmark this task by conducting experiments from the perspectives of traditional methods,
generalized LLM prompting methods, and clue-driven hybrid methods. Both our dataset and benchmark code are openly accessible to the public. | ECFCON: Emotion Consequence Forecasting in Conversations | [
"Xincheng Ju",
"Dong Zhang",
"Suyang Zhu",
"Junhui Li",
"Shoushan Li",
"Guodong Zhou"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=v4RnEVediE | @inproceedings{
chakrabarty2024lomoe,
title={Lo{MOE}: Localized Multi-Object Editing via Multi-Diffusion},
author={Goirik Chakrabarty and Aditya Chandrasekar and Ramya Hebbalaguppe and Prathosh AP},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=v4RnEVediE}
} | Recent developments in diffusion models have demonstrated an exceptional capacity to generate high-quality prompt-conditioned image edits. Nevertheless, previous approaches have primarily relied on textual prompts for image editing, which tend to be less effective when making precise edits to specific objects or fine-grained regions within a scene containing single/multiple objects. We introduce a novel framework for zero-shot localized multi-object editing through a multi-diffusion process to overcome this challenge. This framework empowers users to perform various operations on objects within an image, such as adding, replacing, or editing $\textbf{many}$ objects in a complex scene $\textbf{in one pass}$. Our approach leverages foreground masks and corresponding simple text prompts that exert localized influences on the target regions resulting in high-fidelity image editing. A combination of cross-attention and background preservation losses within the latent space ensures that the characteristics of the object being edited are preserved while simultaneously achieving a high-quality, seamless reconstruction of the background with fewer artifacts compared to the state-of-the-art (SOTA). We also curate and release a dataset dedicated to multi-object editing, named $\texttt{LoMOE}$-Bench. Our experiments against existing SOTA demonstrate the improved effectiveness of our approach in terms of both image editing quality, and inference speed. | LoMOE: Localized Multi-Object Editing via Multi-Diffusion | [
"Goirik Chakrabarty",
"Aditya Chandrasekar",
"Ramya Hebbalaguppe",
"Prathosh AP"
] | Conference | poster | 2403.00437 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=uzlrRAmhPj | @inproceedings{
han2024shapeguided,
title={Shape-Guided Clothing Warping for Virtual Try-On},
author={Xiaoyu Han and Shunyuan Zheng and Zonglin Li and Chenyang Wang and Xin Sun and Quanling Meng},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=uzlrRAmhPj}
} | Image-based virtual try-on aims to seamlessly fit in-shop clothing to a person image while maintaining pose consistency. Existing methods commonly employ the thin plate spline (TPS) transformation or appearance flow to deform in-shop clothing for aligning with the person's body. Despite their promising performance, these methods often lack precise control over fine details, leading to inconsistencies in shape between clothing and the person's body as well as distortions in exposed limb regions. To tackle these challenges, we propose a novel shape-guided clothing warping method for virtual try-on, dubbed SCW-VTON, which incorporates global shape constraints and additional limb textures to enhance the realism and consistency of the warped clothing and try-on results. To integrate global shape constraints for clothing warping, we devise a dual-path clothing warping module comprising a shape path and a flow path. The former path captures the clothing shape aligned with the person's body, while the latter path leverages the mapping between the pre- and post-deformation of the clothing shape to guide the estimation of appearance flow. Furthermore, to alleviate distortions in limb regions of try-on results, we integrate detailed limb guidance by developing a limb reconstruction network based on masked image modeling. Through the utilization of SCW-VTON, we are able to generate try-on results with enhanced clothing shape consistency and precise control over details. Extensive experiments demonstrate the superiority of our approach over state-of-the-art methods both qualitatively and quantitatively. | Shape-Guided Clothing Warping for Virtual Try-On | [
"Xiaoyu Han",
"Shunyuan Zheng",
"Zonglin Li",
"Chenyang Wang",
"Xin Sun",
"Quanling Meng"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=uxxdE9HFGI | @inproceedings{
zhang2024cream,
title={{CREAM}: Coarse-to-Fine Retrieval and Multi-modal Efficient Tuning for Document {VQA}},
author={Jinxu Zhang and Yongqi Yu and Yu Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=uxxdE9HFGI}
} | Document Visual Question Answering (DVQA) involves responding to queries based on the contents of document images. Existing works are confined to locating information within a single page and lack support for cross-page question-and-answer interactions. Furthermore, the token length limitation on model inputs can lead to the truncation of answer-relevant segments. In this study, we present CREAM, an innovative methodology that focuses on high-performance retrieval and integrates relevant multimodal document information to effectively address this critical issue. To overcome the limitations of current text embedding similarity methods, we first employ a coarse-to-fine retrieval and ranking approach. The coarse phase calculates the similarity between the query and text chunk embeddings, while the fine phase involves multiple rounds of grouping and ordering with a large language model to identify the text chunks most relevant to the query. Subsequently, integrating an attention pooling mechanism for multi-page document images into the vision encoder allows us to effectively merge the visual information of multi-page documents, enabling the multimodal large language model(MLLM) to simultaneously process both single-page and multi-page documents. Finally, we apply various parameter-efficient tuning methods to enhance document visual question-answering performance. Experiments demonstrate that our approach secures state-of-the-art results across various document datasets. | CREAM: Coarse-to-Fine Retrieval and Multi-modal Efficient Tuning for Document VQA | [
"Jinxu Zhang",
"Yongqi Yu",
"Yu Zhang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=uvjZktmxMn | @inproceedings{
wang2024terf,
title={Te{RF}: Text-driven and Region-aware Flexible Visible and Infrared Image Fusion},
author={Hebaixu Wang and Hao Zhang and Xunpeng Yi and Xinyu Xiang and Leyuan Fang and Jiayi Ma},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=uvjZktmxMn}
} | The fusion of visible and infrared images aims to produce high-quality fusion images with rich textures and salient target information. Existing methods lack interactivity and flexibility in the execution of fusion. Users cannot express requirements to modify the fusion effect, and different regions in the source images are treated equally by the same fusion model, which causes homogenized fusion results with little distinction between regions. Besides, their pre-defined fusion strategies are insufficiently comprehensive and invariably lead to monotonous effects. They fail to adequately consider data credibility, scene illumination, and noise degradation inherent in the source information. To address these issues, we propose the Text-driven and Region-aware Flexible visible and infrared image fusion, termed TeRF. On the one hand, we propose a flexible image fusion framework with multiple large language and vision models, which facilitates visual-text interaction. On the other hand, we aggregate comprehensive fine-tuning paradigms for the different fusion requirements to build a unified fine-tuning pipeline. This allows the linguistic selection of regions and effects, yielding visually appealing fusion outcomes. Extensive experiments demonstrate the competitiveness of our method both qualitatively and quantitatively compared to existing state-of-the-art methods. | TeRF: Text-driven and Region-aware Flexible Visible and Infrared Image Fusion | [
"Hebaixu Wang",
"Hao Zhang",
"Xunpeng Yi",
"Xinyu Xiang",
"Leyuan Fang",
"Jiayi Ma"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=uu3NzhDeMa | @inproceedings{
liu2024scaletraversal,
title={ScaleTraversal: Creating Multi-Scale Biomedical Animation with Limited Hardware Resources},
author={Richen Liu and Hansheng Wang and Hailong Wang and Siru Chen and Chufan Lai and Ayush Kumar and Siming Chen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=uu3NzhDeMa}
} | We design ScaleTraversal, an interactive tool for creating multi-scale 3D demonstration animations with limited resources, targeting users who cannot access high-performance machines such as clusters or supercomputers. Creating 3D demonstration animations for multi-modal and multi-scale data is challenging. First, it is difficult to strike a balance between flexibility and user-friendliness when designing the user interface for customizing demonstration animations. Second, multi-scale biomedical data are often so large that a commonly used desktop PC cannot easily handle them. We design an interactive bi-functional user interface to create multi-scale biomedical demonstration animations intuitively. It combines the strengths of the graphical interface's user-friendliness and the textual interface's flexibility, enabling users to customize demonstration animations from macro-scales to meso- and micro-scales. Furthermore, we design four scale-based memory management strategies to address the challenges posed by multi-scale data: a streaming data processing strategy, a scale-level data prefetching strategy, a memory utilization strategy, and a GPU acceleration strategy for rendering. Finally, we conduct both quantitative and qualitative evaluations to demonstrate the efficiency, expressiveness and usability of ScaleTraversal. | ScaleTraversal: Creating Multi-Scale Biomedical Animation with Limited Hardware Resources | [
"Richen Liu",
"Hansheng Wang",
"Hailong Wang",
"Siru Chen",
"Chufan Lai",
"Ayush Kumar",
"Siming Chen"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=up4C6pO1Vw | @inproceedings{
zhang2024synergetic,
title={Synergetic Prototype Learning Network for Unbiased Scene Graph Generation},
author={Ruonan Zhang and Ziwei Shang and Fengjuan Wang and Zhaoqilin Yang and Shan Cao and Yigang Cen and Gaoyun An},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=up4C6pO1Vw}
} | Scene Graph Generation (SGG) is an important cross-modal task in scene understanding, aiming to detect visual relations in an image. However, due to the wide variety of appearance features, the feature distributions of different categories overlap severely, which makes the decision boundaries ambiguous. Current SGG methods mainly attempt to re-balance the data distribution, which is dataset-dependent and limits generalization. To solve this problem, a Synergetic Prototype Learning Network (SPLN) is proposed, which models the generalized semantic space and delves into the synergetic effect among different semantic subspaces.
In SPLN, a Collaboration-induced Prototype Learning method is proposed to model the interaction of visual semantics and structural semantics. Conventional visual semantics are addressed with a residual-driven representation enhancement module that captures details, and the intersection of structural and visual semantics, which has been ignored by existing methods, is explicitly modeled as conceptual semantics. Meanwhile, to alleviate the noise of unrelated and meaningless words, an Intersection-induced Prototype Learning method is also proposed specifically for conceptual semantics with an essence-driven prototype enhancement module. Moreover, a Selective Fusion Module is proposed to synergetically integrate the results of the visual, structural, and conceptual branches and the generalized semantics projection. Experiments on the VG and GQA datasets show that our method achieves state-of-the-art performance on the unbiased metrics, and ablation studies validate the effectiveness of each component. | Synergetic Prototype Learning Network for Unbiased Scene Graph Generation | [
"Ruonan Zhang",
"Ziwei Shang",
"Fengjuan Wang",
"Zhaoqilin Yang",
"Shan Cao",
"Yigang Cen",
"Gaoyun An"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=un1HNXcIyW | @inproceedings{
wu2024spatiotemporal,
title={Spatio-temporal Heterogeneous Federated Learning for Time Series Classification with Multi-view Orthogonal Training},
author={Chenrui Wu and Haishuai Wang and Xiang Zhang and Zhen Fang and Jiajun Bu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=un1HNXcIyW}
} | Federated learning (FL) is gaining significant traction due to its ability to perform privacy-preserving training on decentralized data. In this work, we focus on sensitive time series data collected by distributed sensors in real-world applications. However, time series data introduce the challenge of dual spatial-temporal feature skew due to their dynamic changes across domains and time, differing from computer vision. This key challenge includes inter-client spatial feature skew caused by heterogeneous sensor collection and intra-client temporal feature skew caused by dynamics in the time series distribution. We follow the framework of Personalized Federated Learning (pFL) to handle these dual feature drifts and enhance the capabilities of customized local models. In this paper, we therefore propose a method, FedST, that addresses these key challenges through orthogonal feature decoupling and regularization in both the training and testing stages. During training, we let the time and frequency views of the time series data collaborate to enrich the mutual information and adopt orthogonal projection to disentangle and align the shared and personalized features between views, and between clients. During testing, we apply prototype-based predictions and model-based predictions to achieve model consistency based on shared features. Extensive experiments on multiple real-world classification datasets and multimodal time series datasets show that our method consistently outperforms state-of-the-art baselines with clear advantages. | Spatio-temporal Heterogeneous Federated Learning for Time Series Classification with Multi-view Orthogonal Training | [
"Chenrui Wu",
"Haishuai Wang",
"Xiang Zhang",
"Zhen Fang",
"Jiajun Bu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=umzXkdsqyN | @inproceedings{
zhu2024combating,
title={Combating Visual Question Answering Hallucinations via Robust Multi-Space Co-Debias Learning},
author={Jiawei Zhu and Yishu Liu and Huanjia Zhu and Hui Lin and Yuncheng Jiang and Zheng Zhang and Bingzhi Chen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=umzXkdsqyN}
} | The challenge of bias in visual question answering (VQA) has gained considerable attention in contemporary research. Various intricate bias dependencies, such as modalities and data imbalances, can cause semantic ambiguities to generate shifts in the feature space of VQA instances. This phenomenon is referred to as VQA Hallucinations. Such distortions can cause hallucination distributions that deviate significantly from the true data, resulting in the model producing factually incorrect predictions. To address this challenge, we propose a robust Multi-Space Co-debias Learning (MSCD) approach for combating VQA hallucinations, which effectively mitigates bias-induced instance and distribution shifts in multi-space under a unified paradigm. Specifically, we design bias-aware and prior-aware debias constraints by utilizing the angle and angle margin of the spherical space to construct bias-prior-instance constraints, thereby refining the manifold representation of instance de-bias and distribution de-dependence. Moreover, we leverage the inherent overfitting characteristics of Euclidean space to introduce bias components from biased examples and modal counterexample injection, further assisting in multi-space robust learning. By integrating homeomorphic instances in different spaces, MSCD could enhance the comprehension of structural relationships between semantics and answer classes, yielding robust representations that are not solely reliant on training priors. In this way, our co-debias paradigm generates more robust representations that effectively mitigate biases to combat hallucinations. Extensive experiments on multiple benchmark datasets consistently demonstrate that the proposed MSCD method outperforms state-of-the-art baselines. | Combating Visual Question Answering Hallucinations via Robust Multi-Space Co-Debias Learning | [
"Jiawei Zhu",
"Yishu Liu",
"Huanjia Zhu",
"Hui Lin",
"Yuncheng Jiang",
"Zheng Zhang",
"Bingzhi Chen"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=ugDKVMW62p | @inproceedings{
cao2024see,
title={See or Guess: Counterfactually Regularized Image Captioning},
author={Qian Cao and Xu Chen and Ruihua Song and Xiting Wang and Xinting Huang and Yuchen Ren},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=ugDKVMW62p}
} | Image captioning, which generates natural language descriptions of the visual information in an image, is a crucial task in vision-language research. Previous models have typically addressed this task by aligning the generative capabilities of machines with human intelligence through statistical fitting of existing datasets. While these models demonstrate proficiency in describing the content of normal images, they may struggle to accurately describe those where certain parts of the image are obscured or edited. Humans, by contrast, handle such cases effortlessly. The weaknesses these models exhibit, including hallucinations and limited interpretability, often result in performance declines when applied to scenarios involving shifted association patterns. In this paper, we present a generic image captioning framework that leverages causal inference to make existing models more capable of interventional tasks and counterfactually explainable. Specifically, our approach consists of two variants that utilize either total effect or natural direct effect. We incorporate these concepts into the training process, enabling the models to handle counterfactual scenarios and thereby become more generalizable. Extensive experiments on various datasets have demonstrated that our method can effectively reduce hallucinations and increase the model's faithfulness to the images, with high portability to both small-scale and large-scale image-to-text models. | See or Guess: Counterfactually Regularized Image Captioning | [
"Qian Cao",
"Xu Chen",
"Ruihua Song",
"Xiting Wang",
"Xinting Huang",
"Yuchen Ren"
] | Conference | poster | 2408.16809 | [
"https://github.com/aman-4-real/see-or-guess"
] | https://huggingface.co/papers/2408.16809 | 1 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=ufmOvDcKRI | @inproceedings{
chen2024xmecap,
title={{XM}eCap: Meme Caption Generation with Sub-Image Adaptability},
author={Yuyan Chen and Songzhou Yan and Zhihong Zhu and Zhixu Li and Yanghua Xiao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=ufmOvDcKRI}
} | Humor, deeply rooted in societal meanings and cultural details, poses a unique challenge for machines. While advances have been made in natural language processing, real-world humor often thrives in a multi-modal context, encapsulated distinctively by memes. This paper places particular emphasis on the impact of multiple sub-images on meme captioning. We then introduce the \textsc{XMeCap} framework, a novel approach that adopts supervised fine-tuning and reinforcement learning based on an innovative reward model, which factors in both global and local similarities between visuals and text. Our results, benchmarked against contemporary models, show a marked improvement in caption generation for both single-image and multi-image memes, as well as different meme categories. \textsc{XMeCap} achieves an average evaluation score of 75.85 for single-image memes and 66.32 for multi-image memes, outperforming the best baseline by 3.71\% and 4.82\%, respectively. This research not only establishes a new frontier in meme-related studies but also underscores the potential of machines in understanding and generating humor in a multi-modal setting. | XMeCap: Meme Caption Generation with Sub-Image Adaptability | [
"Yuyan Chen",
"Songzhou Yan",
"Zhihong Zhu",
"Zhixu Li",
"Yanghua Xiao"
] | Conference | poster | 2407.17152 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |