Dataset viewer schema (column, dtype, length range or distinct values):

  id                      stringlengths  10–10
  title                   stringlengths  3–179
  track                   stringclasses  1 value
  status                  stringclasses  3 values
  keywords                stringlengths  2–2.39k
  primary_area            stringclasses  21 values
  author                  stringclasses  501 values
  authorids               stringclasses  501 values
  aff                     stringclasses  1 value
  aff_domain              stringclasses  1 value
  position                stringclasses  1 value
  rating                  stringclasses  355 values
  confidence              stringlengths  0–19
  soundness               stringclasses  642 values
  contribution            stringclasses  596 values
  presentation            stringclasses  782 values
  rating_avg              float64        0–9
  confidence_avg          float64        0–5
  soundness_avg           float64        0–4
  contribution_avg        float64        0–4
  presentation_avg        float64        0–4
  corr_rating_confidence  float64        -1–1
  project                 stringclasses  1 value
  github                  stringclasses  1 value
  Review                  listlengths    2–10
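The schema above can be mirrored in a few lines of Python for sanity-checking rows. This is a sketch: the key names are transcribed from the schema, but `validate_row`, the dtype labels, and the example row dict are illustrative, not part of the dataset.

```python
# Subset of the column -> dtype mapping transcribed from the viewer schema.
SCHEMA = {
    "id": "string (length 10)",           # OpenReview forum id
    "title": "string",
    "rating": "string",                   # ';'-separated per-review scores
    "rating_avg": "float64",              # range 0..9
    "confidence_avg": "float64",          # range 0..5
    "corr_rating_confidence": "float64",  # range -1..1
    "Review": "list",                     # 2 to 10 entries per paper
}

def validate_row(row: dict) -> list[str]:
    """Return the listed schema keys that a row dict is missing."""
    return [k for k in SCHEMA if k not in row]

row = {"id": "uxDFlPGRLX", "title": "FlowDec: ...", "rating": "5;6;6;8",
       "rating_avg": 6.25, "confidence_avg": 4.0,
       "corr_rating_confidence": 0.648886, "Review": []}
missing = validate_row(row)  # empty list: the row carries every listed column
```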
id: uxDFlPGRLX
title: FlowDec: A flow-based full-band general audio codec with high perceptual quality
track: main
status: Active
keywords: audio;audio codec;generative models;flow matching;postfilter;signal enhancement
primary_area: applications to computer vision, audio, language, and other modalities
rating: 5;6;6;8
confidence: 4;3;4;5
soundness: 2;3;3;3
contribution: 2;3;3;3
presentation: 2;4;4;4
rating_avg: 6.25
confidence_avg: 4
soundness_avg: 2.75
contribution_avg: 2.75
presentation_avg: 3.5
corr_rating_confidence: 0.648886
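The derived `*_avg` and `corr_rating_confidence` columns can be recomputed from the raw ';'-separated score strings above. The parsing and aggregation logic below is an assumption about how the dataset builds these columns, but it reproduces the stated values for this row exactly.

```python
# Recompute the derived columns of row uxDFlPGRLX (FlowDec) from its raw scores.
from math import sqrt

def parse_scores(s: str) -> list[float]:
    """Split a score string like '5;6;6;8' into floats."""
    return [float(v) for v in s.split(";")]

def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

def pearson(xs: list[float], ys: list[float]) -> float:
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sqrt(sum((x - mx) ** 2 for x in xs))
                  * sqrt(sum((y - my) ** 2 for y in ys)))

rating = parse_scores("5;6;6;8")
confidence = parse_scores("4;3;4;5")

rating_avg = mean(rating)           # 6.25, matches rating_avg
confidence_avg = mean(confidence)   # 4.0, matches confidence_avg
corr = pearson(rating, confidence)  # rounds to 0.648886, matches corr_rating_confidence
```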
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "The NeurIPS 2023 oral by Kingma and Gao unifies under a common framework the diffusion variants of, e.g., flow matching and optimal transport. This could be useful, given that both are discussed in your work. You might also take a look at their Appendix J, which has some discussion of the frequency analysis of diffusion noise that may have some overlap with your proposed frequency-dependent noise.\n\nCan we hear some audio examples? It is difficult to judge the results otherwise, and the MUSHRA test (unlike an AB or ABX test) doesn't provide sufficient granularity to discern statistical significance between, e.g., FlowDec and DAC." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Figures 2 and 3 are great visualizations for the diffusion dynamics and proposed improvements.\n\nGood selection of baselines, including retraining baseline models to account for differences in parameter count.\n\nThe authors mention that they will be open-sourcing their code." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes FlowDec, a 48 kHz general audio codec with a flow-matching diffusion post-filter. 
FlowDec modifies the DAC audio codec with different loss functions, the stochastic post-filter, and frequency-dependent noise. Evaluations demonstrate strong improvements over prior post-filter and diffusion-based methods, but only small improvements over DAC." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "You should consolidate the six main contributions. I recommend at most three or four. Some of the contributions currently listed seem minor, or are a side effect of another contribution.\n\nA primary issue with the proposed system is RTF. The logical alternative (DAC) is still orders of magnitude faster. I do still think this is an important paper in helping close the gap.\n\nA minor note, but a 44.1 kHz sample rate is sufficient to prevent audible aliasing above 20 kHz. The other rationales given for using 48 kHz over 44.1 kHz are convincing enough for me." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. Could the authors clarify the main purpose of FlowDec? Why is it presented as a standalone codec rather than positioning the postfilter as an enhancement tool?\n2. What is the rationale for choosing to enhance only outputs from a non-adversarial codec instead of using GAN output as the initial estimate for FlowDec's postfilter? 
Could the authors specify what benefits NDAC offers over DAC in this proposed two-stage setup?\n3. The STFT settings seem unconventional. A 1534-point FFT would definitely be slower than a 1536-point FFT due to FFT efficiency with highly composite sizes. I assume the goal is to get exactly 768 frequency bins, but why not just use a 1536-point FFT with 769 bins? Overall, this may be negligible given the slower inference, but I'm curious about the tradeoff here." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is well-written, with clear explanations and informative illustrations that help clarify key points.\n- It provides useful context relative to previous score-based models. It also includes valuable theoretical insights and formalism to support its approach.\n- It performs on par with state-of-the-art GAN codecs and shows slight improvements for speech signals. FlowDec may handle high-frequency harmonics better, where GAN-based codecs often introduce periodicity and harmonic artifacts." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces FlowDec, a neural audio codec with a two-stage approach: (1) an autoencoder with residual vector quantization, trained without adversarial loss, and (2) a postfilter that reduces coding artifacts and enhances perceptual quality. FlowDec adapts conditional flow matching for signal enhancement, achieving improvements over previous score- and flow-based models. Both listening tests and objective metrics show that FlowDec provides perceptual quality competitive to state-of-the-art GAN-based codecs." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper introduces FlowDec as a codec, but its primary focus seems to be a flow-based postfilter that enhances outputs from an *underlying* codec. The main weakness of this paper is the decision not to explore a GAN-based codec combined with the proposed postfilter. The paper argues against adversarial training in Section 3.3, stating that the “generated phase may be very different from the original phase.” However, FlowDec itself doesn't seem to preserve phase. Results presented in Section 5.1 show that FlowDec performs worse than GAN-based codecs in terms of SI-SDR and fwSSNR, which are sensitive to phase shifts. This might still be acceptable for a low-bitrate codec, particularly if perceptual quality is prioritized. However, the paper should more clearly justify why a GAN isn't used as the underlying codec. A GAN-based codec with a flow-based postfilter might not only reduce coding artifacts but also achieve higher reconstruction metrics. This would call into question the rationale behind the proposed non-adversarial DAC (NDAC) model. Overall, the distinction between FlowDec as a novel audio codec and its role as an enhancement tool for *any* audio codec could be clarified." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "There have been several concurrent works adopting flow matching for similar tasks (i.e. mel spectrogram vocoder and discrete codec decoder), such as RFWave [1] and PeriodWave [2,3]. These works directly utilize flow matching as the decoder rather than as a postfilter, with several configs being orders of magnitude faster than this work.\n\nWhile the direct comparison may not be strictly necessary, can the authors provide conceptual comparisons between the aforementioned methods along with possible advantages of FlowDec?\n\n[1] Liu, Peng, and Dongyang Dai. \"RFWave: Multi-band Rectified Flow for Audio Waveform Reconstruction.\" arXiv preprint arXiv:2403.05010 (2024).\n\n[2] Lee, Sang-Hoon, Ha-Yeong Choi, and Seong-Whan Lee. \"PeriodWave: Multi-Period Flow Matching for High-Fidelity Waveform Generation.\" arXiv preprint arXiv:2408.07547 (2024).\n\n[3] Lee, Sang-Hoon, Ha-Yeong Choi, and Seong-Whan Lee. \"Accelerating High-Fidelity Waveform Generation via Adversarial Flow Matching Optimization.\" arXiv preprint arXiv:2408.08019 (2024)." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* Contrary to adversarial training (a dominant approach in vocoders/codecs), which requires domain-specific expertise to stabilize training and tune the hyperparameters of multiple losses, FlowDec simplifies the training pipeline by eliminating the adversarial losses. In my opinion, this can be considered one of the first works that achieves competitive quality to GAN-based models with RTF < 1. 
\n\n* The proposed design choice is well justified overall, from both a theoretical and an empirical perspective. It provides enough rigor in experimental detail, including toy experiments and a detailed ablation study of each component." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "FlowDec is an improved version of ScoreDec (which uses a score-based generative model as a postfilter), obtained by switching the objective to flow matching. It further proposes a joint flow matching objective tailored to the postfiltering task (e.g. mean-shifted noise with frequency-dependent diagonal covariance). This makes it faster than real time (unlike ScoreDec) as a practically viable, full-band, and universal audio codec without adversarial training." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* While the evaluation results from the paper look convincing overall, no demo page has been provided at submission. It's hard to form an opinion on the subjective quality without access to the demo. I hope the authors can provide the samples for the reader to evaluate the subjective quality themselves. \n\n* Although it improved the RTF by a large margin, it still requires multiple NFEs from the postfilter, resulting in an RTF of 0.22–0.23, which is considerably slower than recent, feed-forward audio codecs." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- The question arises as to why the waveform is not reconstructed directly from discrete tokens using flow matching. To provide a more comprehensive evaluation, could you elaborate on the potential computational or quality trade-offs between direct reconstruction via flow matching and the method proposed in the paper?\n- Regarding lines 168–171, the presence of $p_{data}(\\cdot|y)$ is not clear as it seemingly should just be $p_{data}(x)$. This needs further clarification and justification.\n- From lines 220 to 223 and in Figure 2, it may be inappropriate as we cannot obtain $x_1$ for real postfiltering problems. This aspect requires reconsideration or better explanation.\n- In Figure 3, the use of a large $\\sigma_t$ of 0.1 to illustrate that FlowAVSE is non-contractive is not convincing, especially since the original FlowAVSE employs a value of 0.04. It would be beneficial for the authors to provide a clearer explanation for this choice or consider revising the value used. \n- As $y$ is a time-domain signal, it is unclear how to apply frequency-dependent $\\sigma_y$ since it cannot be directly applied. The authors need to provide more details on this aspect." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Possibly the first application of flow matching in postfiltering.\n- Beautiful experiment result plots and strong objective and subjective results.\n- Enhances DAC by incorporating a multi-scale Constant-Q Transform (CQT) loss." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce FlowDec, a flow matching-based post-filtering method designed to enhance the audio quality decoded from discrete audio tokens. This method has demonstrated strong objective and subjective results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The latency of the proposed method is at least an order of magnitude higher than that of DAC. To facilitate a comprehensive evaluation, it would be beneficial to receive specific latency measurements for both the proposed method and DAC. Furthermore, identifying the applications or use cases that could be critically impacted by this higher latency would provide valuable context. Moreover, to support the validation of the method’s performance, it would be advantageous if the authors could provide an online demo that allows for direct comparison of the latency between the two methods.\n- The baselines are limited. There are other diffusion-based works that reconstruct audio waveforms from discrete tokens, e.g. Multi-Band Diffusion [1]. \n- The concept of selecting a data-dependent prior, as proposed, is not entirely novel, with works such as PriorGrad [2] having implemented a similar strategy. To better understand the relationship between the proposed method and existing approaches, it would be helpful to have a clearer comparison of their similarities and differences. 
\n- Multi-band diffusion [1] uses frequency-dependent noise levels, which is a notable feature that could be further discussed in comparison to the proposed method.\n\n[1] \"From Discrete Tokens to High-Fidelity Audio Using Multi-Band Diffusion\" by Robin San Roman, Yossi Adi, Antoine Deleforge, Romain Serizel, Gabriel Synnaeve, and Alexandre Défossez.\n\n[2] \"PriorGrad: Improving Conditional Denoising Diffusion Models with Data-Dependent Adaptive Prior\" by Sang-gil Lee, Heeseung Kim, Chaehun Shin, Xu Tan, Chang Liu, Qi Meng, Tao Qin, Wei Chen, Sungroh Yoon, Tie-Yan Liu" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "FlowDec is a flow-based postfilter codec for general audio without adversarial training, and a competitive alternative to current GAN-based SOTA codecs." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024flowdec,\ntitle={FlowDec: A flow-based full-band general audio codec with high perceptual quality},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=uxDFlPGRLX},\nnote={under review}\n}" }, "abstract": { "value": "We propose FlowDec, a neural full-band audio codec for general audio sampled at 48 kHz that combines non-adversarial codec training with a stochastic postfilter based on a novel conditional flow matching method. Compared to the prior work ScoreDec which is based on score matching, we generalize from speech to general audio and move from 24 kbit/s to as low as 4 kbit/s, while improving output quality and reducing the required postfilter DNN evaluations from 60 to 6 without any fine-tuning or distillation techniques. We provide theoretical insights and geometric intuitions for our approach in comparison to ScoreDec as well as another recent work that uses flow matching, and conduct ablation studies on our proposed components. 
We show that FlowDec is a competitive alternative to the recent GAN-dominated stream of neural codecs, achieving FAD scores better than those of the established GAN-based codec DAC and listening test scores that are on par, and producing qualitatively more natural reconstructions for speech and harmonic structures in music." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "audio", "audio codec", "generative models", "flow matching", "postfilter", "signal enhancement" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c10512ecac00cfcd8f490464b3ebab006bfce3db.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "FlowDec: A flow-based full-band general audio codec with high perceptual quality" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
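Each `Review` cell above is a list of per-review records in which scalar fields are wrapped as `{ "value": ... }` and unused fields are null, with a final entry carrying submission-level metadata (its `rating` is null). A minimal sketch of unwrapping such a cell follows; the two-element list is an abbreviated stand-in for a real cell, not actual data.

```python
def unwrap(field):
    """OpenReview-style fields are either None or {'value': ...}."""
    return None if field is None else field.get("value")

# Abbreviated stand-in for one Review cell: one review plus the metadata entry.
reviews = [
    {"rating": {"value": 6}, "confidence": {"value": 4}, "soundness": {"value": 3}},
    {"rating": None, "TLDR": {"value": "..."}},  # trailing submission-metadata entry
]

# Collect the numeric ratings, skipping entries where rating is null.
ratings = [r for rev in reviews if (r := unwrap(rev.get("rating"))) is not None]
# ratings == [6]
```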
id: uxVBbSlKQ4
title: Flow Matching with Gaussian Process Priors for Probabilistic Time Series Forecasting
track: main
status: Active
keywords: flow matching;time series forecasting;generative modeling;deep learning
primary_area: learning on time series and dynamical systems
rating: 5;5;6;6
confidence: 3;2;4;2
soundness: 3;3;4;3
contribution: 3;2;2;3
presentation: 2;2;4;3
rating_avg: 5.5
confidence_avg: 2.75
soundness_avg: 3.25
contribution_avg: 2.5
presentation_avg: 2.75
corr_rating_confidence: 0.301511
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "•\tThe problem only considers the univariate case. Can the model extend to the multivariate time series problem?\n\n•\tI would like some more explanation about the effectiveness of informed prior distributions. Why does closeness of the prior and data distributions imply easy learning? Do you have any experiments about training efficiency or path efficiency?\n\n•\tCan the given prior (Gaussian process) extend to an arbitrary prior? For example, refer to [1].\n\n•\tDo you have any theoretical evidence about how the selection of kernel functions affects the model performance? (e.g., the OU kernel is better when the data follows an OU process)\n\n[1] Leveraging Priors via Diffusion Bridge for Time Series Generation, arXiv 2024" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "• By utilizing Gaussian processes in conditional flow matching, the model better reflects the temporal dependencies of the given time series data.\n\n• The model enables both unconditional and conditional generation.\n\n• By conditional prior sampling, the unconditionally trained model can follow the given guidance." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "Existing diffusion models have problems in time series generation since the data and prior distributions differ. The authors handle this problem by utilizing the conditional flow matching framework. They propose TSFlow, which sets the prior distribution to a Gaussian process to make it close to the data distribution. They also propose conditional prior sampling, which makes probabilistic forecasting possible with an unconditionally trained model." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Please refer to the Questions section." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- The exact problem statement was a bit unclear to me. In lines 154-156 the authors describe a time series as a vector in $\mathbb{R}^L$. Does this mean that the authors only work on time series having a fixed length $L$, or does the method allow for variable-length time series? Similarly, are the authors assuming that the time series all share a fixed discretization (i.e., there are some fixed times $t_1, \dots, t_L$ corresponding to $y_1, \dots, y_L$), or can the discretization vary across time series? 
Is this discretization assumed to be uniform, or can it be irregular as well?\n- It seems to me that the setup is not limited just to forecasting, but could be applied to general conditional generation tasks, e.g., imputation. Have the authors tried anything beyond forecasting?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- The investigation of techniques for conditional sampling is pretty thorough, and I think the proposed methods are quite interesting\n- The proposed method obtains fairly strong empirical results, and the empirical evaluation is convincing \n- Throughout the paper is very clear and well-written" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the use of flow matching for time series forecasting. The authors first propose the use of flow matching techniques for unconditional generation using Gaussian process priors (section 3.1.1), followed by a technique for conditioning an unconditional model by sampling a relevant prior $x_0$ (section 3.1.2) or guidance (section 3.1.3), and finally the authors discuss a technique which uses a data-dependent Gaussian process prior for conditional sampling (section 3.2). The proposed methodology is empirically validated on several univariate time series datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- There is some highly relevant related work that the authors do not discuss. [Functional Flow Matching, AISTATS 2024](https://arxiv.org/abs/2305.17209) proposes the use of GP priors in conjunction with flow matching and studies techniques for forecasting with these models. 
Similarly, [Conditional Flow Matching for Time Series Modeling, SPIGM@ICML 2024](https://openreview.net/forum?id=Hqn4Aj7xrQ) uses GPs with flow matching for time series. The authors should cite these works and discuss the differences with their proposed method.\n- There are some (relatively minor) clarity issues throughout\n - The use of equation 9 was a bit unclear to me. Why is this specific form of $q_1$ chosen? Some justification for this modeling choice would be good.\n - In Section 3.1.2, I am guessing that once we sample $x_0 \\sim q_0(x_0 \\mid y^p)$, then we use $x_0$ as an initial condition for the flow model to generate new samples $y \\mid x_0$. Is this the case? If so, it would help to state this explicitly somewhere in the paper.\n - In Line 303, the authors write \"approximating q_0(x_0 \\mid y^p)$ with $q_0(x_0 \\mid y^p)$. I think I understand what is meant here, but this seems to be a typo." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See above." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "**Incorporating Gaussian Process Priors:** The main contribution, replacing the typical isotropic Gaussian prior $q(x_0)$ with a data-dependent conditional prior $q(x_0 \\mid y^p)$, is well-motivated. GP priors are naturally suited for time series due to their ability to model temporal dependencies, and this idea is a clear innovation over existing flow matching methods.\n\n**Empirical Performance:** The empirical results show that TSFlow performs well compared to state-of-the-art models across various benchmark tasks.\n\n**Flexibility in Conditional and Unconditional Modeling:** The approach supports both unconditional and conditional generation. While unconditional generation has fewer use cases compared to conditional generation, it is a feature often overlooked in time-series analysis. The ability to use the same model for both tasks, by applying conditioning only during inference, adds versatility to the paper." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces TSFlow, a model for probabilistic time series forecasting that enhances generative modeling by incorporating Gaussian Process (GP) priors within the Conditional Flow Matching (CFM) framework. The use of more informative GP priors helps align the prior distribution with the temporal structure of the data, potentially improving both performance and runtime efficiency by simplifying the probability paths. TSFlow is flexible and can be used for both conditional and unconditional time-series generation.\n\nThe model demonstrates strong performance across several benchmark datasets, but there are aspects of the paper that could benefit from further clarification and improvement. 
Depending on the authors' response, I am willing to increase my score from \"weak reject\" to \"weak accept\"." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Majors**\n\n**Difficulty in Parsing for Non-Experts:** The paper assumes substantial familiarity with flow matching and related generative methods, making it challenging for readers without a deep background in these specific techniques. It took me considerable time to fully grasp the concepts, which suggests that the paper might also be difficult to read for a broader audience.\n\n**Lack of Runtime Analysis:** The authors propose replacing the isotropic Gaussian prior with a more complex GP prior. However, they do not address the increased computational cost that comes with using GP priors. Although the GP prior is more suited to time-series tasks, this advantage must be weighed against the computational overhead. The paper should clearly state the theoretical runtime complexity of using GP priors and provide empirical runtime comparisons in the experiments. The reduction in NFEs is a positive, but its trade-off with the computational cost of the GP prior must be considered.\n\n**Baseline Comparison:** A simple baseline for the forecasting tasks could involve using Eq. 6 but with an isotropic Gaussian prior. Please include this method as a comparison partner in Table 3. If it is not a valid approach, it should be explained why such a baseline is excluded.\n\n**Inconsistent Findings on Kernel Choice:** The periodic kernel minimizes the Wasserstein distance in Figure 2, suggesting it aligns well with the data distribution. However, in Table 1, the periodic kernel does not significantly outperform other kernels, and it is not even considered in Table 4.2. This inconsistency is counterintuitive and warrants further discussion. Moreover, the necessity for different hyperparameter choices across tasks (e.g., generative modeling vs. 
forecasting) weakens the \"one model for all\" argument, suggesting more task-specific tuning may be required.\n\n**Minors:**\n\n**Missing Experiments on Guided Generation (3.1.3):** I could not find experiments on Guided Generation (3.1.3) in the paper. Where can I find them?\n\n**LPS Score Clarification:** The Linear Predictive Score (LPS) is not sufficiently explained. Please provide more details on how the LPS is calculated, specifically on what the linear model is regressed against and what the output represents.\n\n**GP Hyperparameter Optimization:** You state in your paper that you did not fit the GP hyperparameters. This is somewhat unexpected; I would have guessed that learning the hyperparameters of the GP brings a large benefit to the model. Can you perform a small experiment to evaluate the difference in performance with/without GP hyperparameter optimization?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. If my understanding is correct, the overall idea of the model is to push an initial GP to the target time-series. Therefore, the sequence length $L$ in the problem formulation section is equivalent to the vector field dimension $d$ in the background section. Can the author(s) kindly confirm if this is true? It would be nice to clarify the setting a little bit. 
I think the main source of confusion is that there are two time indices in this paper: the time in flow-matching and the time in the time-series, and these are orthogonal to each other. I think this should be made clear somewhere in the paper.\n\n2. Following the first question, it would be nice to indicate in Figure 1 the time-series index and the flow-matching index. Moreover, different notations should be used. (For instance, on line 202-203, $t$ should not be used for the kernel because it has already been used in Eq. (1).)\n\n3. How would the model respond to a growing $L$? That is, there are two subquestions in this query:\n\ta. What is the time complexity of the model and how does it compare to other models that you benchmarked against?\n\tb. When $L$ is large, Eq. (6) seems to suffer from the curse of dimensionality and the integral would be impossible to discretize. How does the model see this issue, or is it not relevant?\n\n4. The section \"Effect on the Optimal Transport Problem\" only considers the distance between sequences but not anything about the training. Can you show some experiments where models that use the periodic kernel are indeed easier to train on tasks that involve periodicity?\n\n5. On line 302-303, you wrote \"we additionally condition the prior distribution on the observed past data by approximating $q_0(\\mathbf{x}_0 | \\mathbf{y}^p)$ with $q_0(\\mathbf{x}_0 | \\mathbf{y}^p)$.\" I assume there is a typo. What would be the intended sentence?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The model is flexible with many possible options; the informed prior sampling is intuitive.\n\n2. The model shows promising results on univariate benchmark tasks." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a model that uses flow-matching algorithms to generate and forecast univariate time-series. When trained unconditionally, it replaces the isotropic Gaussian prior distribution with an informed one and uses conditional sampling and inference. The model itself can also be made conditional. Experiments show that the model surpasses existing ones on benchmark tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The model right now is only discussed and validated on univariate time-series.\n\n2. While there are many proposed options in the model, there is no ablation or comprehensive comparison of the different choices. Nor are there any theoretical insights into the proposed model.\n\n3. The presentation of the model needs a bit more clarification. The problem formulation can potentially be expanded a bit more (see questions below). The discussion of different training/inference choices is a bit dense. Maybe some concise pseudocode or flowchart in the main text could be helpful." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose TSFlow, a conditional flow matching model for time series forecasting that leverages domain-specific priors." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024flow,\ntitle={Flow Matching with Gaussian Process Priors for Probabilistic Time Series Forecasting},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=uxVBbSlKQ4},\nnote={under review}\n}" }, "abstract": { "value": "Recent advancements in generative modeling, particularly diffusion models, have opened new directions for time series modeling, achieving state-of-the-art performance in forecasting and synthesis.
However, the reliance of diffusion-based models on a simple, fixed prior complicates the generative process since the data and prior distributions differ significantly. We introduce TSFlow, a conditional flow matching (CFM) model for time series that simplifies the generative problem by combining Gaussian processes, optimal transport paths, and data-dependent prior distributions. By incorporating (conditional) Gaussian processes, TSFlow aligns the prior distribution more closely with the temporal structure of the data, enhancing both unconditional and conditional generation. Furthermore, we propose conditional prior sampling to enable probabilistic forecasting with an unconditionally trained model. In our experimental evaluation on eight real-world datasets, we demonstrate the generative capabilities of TSFlow, producing high-quality unconditional samples. Finally, we show that both conditionally and unconditionally trained models achieve competitive results in forecasting benchmarks, surpassing other methods on 6 out of 8 datasets." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "flow matching", "time series forecasting", "generative modeling", "deep learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/0f3327414b38b730ae6923baecff14dfd1d5c15d.pdf" }, "presentation": null, "primary_area": { "value": "learning on time series and dynamical systems" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Flow Matching with Gaussian Process Priors for Probabilistic Time Series Forecasting" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
uxYbEAEWm4
Knowledge Lift Alignment Fine Tuning
main
Active
PEFT;PLM;LLM;VLM;Multi-modal;Image captioning
foundation or frontier models, including LLMs
3;3;5
4;3;3
2;2;2
2;2;2
1;1;1
3.666667
3.333333
2
2
1
-0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "See the weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The work proposes a Topic Control Mechanism (TCM) to emphasize the target domain-specific knowledge. Combined with Token Topic Modeling (TTM), Masked Region Modeling (MRM), and Text Image Matching (TIM), KLAFT highlights the target related\nknowledge (i.e., the knowledge lift).\n\n- The proposed KLAFT improves expressive captioning tasks by aligning and amplifying target knowledge, with the potential for\nParameter-Efficient fine tuning (PEFT) at low computational cost." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work presents a visual tuning framework, Knowledge Lift Alignment Fine Tuning (KLAFT), which enhances the expressive image captioning capabilities of Pretrained Language Models (PLMs). The innovation of KLAFT lies in its approach to addressing the disparities in knowledge - visual versus textual via MAM and source versus target domain via TCM. These hidden spaces are conceptualized as distinct sub-networks, each possessing specific knowledge, and KLAFT adjusts the weights of these sub-networks in a fine-grained manner.
The empirical studies demonstrate that KLAFT improves expressive captioning tasks by aligning and amplifying target knowledge." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The writing of this work is quite poor. The introduction part does not clearly introduce the background and motivation of the problem to be solved. The approach part is also not written in a concise and well-organized manner.\n\n- The approaches compared in this work were not proposed recently. As a result, the validity and innovation of the proposed modules are not convincing.\n\n- The authors did not conduct sufficient ablation experiments for different loss functions and did not give more qualitative analysis." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "* In Figure 1 (left), a plot of token distributions is shown, but there is no information about the dataset. What are the source and target domains? How were these lines plotted? Given the significance of comparing the two domains, this should be one of the most important analyses in the paper.\n* I get the main idea of TCM. Yet, there are some phrases in the paper making the details of TCM unclear to me:\n * In line 234, `\"distribution over tokes, $\mathbf{V}$. What is $\mathbf{V}$?
It only appears once in the paper.\n * In the paragraph following Eq (5), the term $b_z$ is mentioned twice, though it doesn’t appear in Eq (5). This inconsistency makes it challenging to understand Equation (5) fully. Additionally, the variable $w$ is not explained in the paragraph.\n* In lines 338-339, it's stated that three hyperparameters are all set to 0.1, which seems to be less commonly done. What is the rationale for this choice? Is there any ablation study to support it?\n* In Sec. 5.2, the authors compare S4 to S3 to showcase the effectiveness of TCM. Yet, these two settings differ a lot:\n * S3: \"seq2seq under fine-tuning GPT-2 over 100% training data (COCO+Conceptual Captions)\"\n * S4: \"prefix 100% training data with frozen setting (only COCO)\"\n * Given the differences in both tuning paradigm and datasets, how can TCM’s impact be fairly assessed in this context?\n* Could you provide more details on human evaluation?\n\n*I'm open to adjusting my rating after the authors' response.*" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* Increasing the expressiveness of the generation process in VLMs is a promising research direction.\n* The proposed TCM/TTM approach is intriguing and shows potential.\n* The performance gains over baseline models are satisfactory." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes Knowledge Lift Alignment Fine-Tuning (KLAFT) to enhance the expressiveness of image captioning capabilities in pretrained Vision-Language Models (VLMs). KLAFT primarily introduces a Topic Control Mechanism (TCM) combined with Token Topic Modeling (TTM) to enable topic-guided tuning and decoding for VLMs. The proposed method is evaluated across various settings." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The presentation of the paper lacks clarity. I’ve outlined some key issues below:\n * The term \"KEIC\" appears three times—once in Figure 1, once in the title of Section 4, and again in the conclusion. These are critical sections. I assume \"KEIC\" should actually be \"KLAFT.\"\n * Some typos or unclear sentences can be a problem, e.g. ``tokes`` appears multiple times.\n * The image in Table 1 is not clear, and Figure 1 also lacks effective delivery and clarity.\n * I assume all the citations use ``\cite{}``. I think most citations should likely use ``\citet{}`` for smoother integration.\n * And some other issues, see question section as well.\n* Overall, the claims around the design of the Mapping Layer (MaL) and the Modified Attention Mechanism (MAM) may be overstated. These components are fairly common in VLMs, and MAM appears to be a direct adaptation from one of the primary baselines, VisualGPT.\n* The choice of main baselines seems outdated, although results from some recent VLMs (e.g., LLaVA, BLIP2) are also included." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please refer to the Weaknesses. The following is a minor question.\n\n1.
Considering the fast-evolving nature of VLM development, are some recently proposed and commonly used multimodal LLMs, such as VILA or InternVL-2, also applicable to this proposed fine-tuning framework?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Improving the quality of captions generated from VLMs has great potential for several real-world applications, such as manufacturing and healthcare.\n2. The proposed framework is reasonable for addressing the considered task." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a Knowledge Lift Alignment Fine Tuning (KLAFT) framework to improve the image captioning capability of multimodal LLMs. More specifically, this paper aims to encourage the generated captions to be detailed and comprehensive. To achieve that, this paper explores fine-grained alignment and designs MAM and TCM attention mechanisms. Experiments and quantitative comparisons are conducted on commonly used benchmark datasets, including MS COCO and Conceptual Captions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. As described in the Abstract, the goal of this paper is to generate detailed and comprehensive captions. However, the benchmark datasets adopted by this paper (e.g., MS COCO) are typically coarse-grained with relatively short captions. As we know, current VLMs, like LLaVA, are good at generating detailed and fine-grained image captions. Is using COCO to evaluate the detailed caption capability of VLMs suitable? My concern lies in adopting such traditional image captioning benchmarks (e.g., COCO), which cannot properly measure the detailed caption capability of VLMs and reflect the actual improvement of the proposed method.\n2.
What are the differences between the source and target domains claimed in this paper? Does the difference lie in the visual domain shift? Does the difference lie in the caption style? Does the difference lie in the amount of (labeled) data? More clarification and explanation are encouraged to make this paper easier to read.\n3. The proposed method is trained with multiple training objectives. This complicates model training and balancing the weights of each loss function (in Eq.(9)). How does this paper design experiments to determine the weight of each loss term? Is the model training sensitive to different weights of loss terms?\n4. The visual presentation of this paper has some room to improve. For example, the image in Table 1 is out of the table. In addition, for Table 3, it is better to directly indicate the meaning of each row in the table instead of just mentioning it in the captions." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "KLAFT aims to generate detailed and comprehensive captions by leveraging fine-grained alignment between PLMs and target domain datasets."
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024knowledge,\ntitle={Knowledge Lift Alignment Fine Tuning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=uxYbEAEWm4},\nnote={under review}\n}" }, "abstract": { "value": "We present a visual tuning framework, \\textbf{K}nowledge \\textbf{L}ift \\textbf{A}lignment \\textbf{F}ine \\textbf{T}uning (KLAFT), \nwhich enhances the expressive image captioning capabilities of Pre-trained Language Models (PLMs), including LLMs and VLMs.\nAs this task involves generating more detailed and comprehensive captions than basic image descriptions,\nthe core idea behind KLAFT is that fine-grained alignment could exploit the capabilities of PLMs and a given target domain dataset.\nThis idea motivates and challenges us to explore the framework that deeply understands both given images and text for this alignment and tuning PLMs towards expressive image captioning.\nThis direction modifies the attention mechanism (Modified Attention Mechanism, MAM) and develops both a Topic Control Mechanism (TCM) and their training objectives.\nThe innovation of KLAFT lies in its approach to addressing the disparities in knowledge - visual versus textual via MAM\nand source versus target domain via TCM.\nAs these hidden spaces are conceptualized as distinct sub-networks within the PLM, each possessing specific knowledge,\nKLAFT's unique contribution is in aligning and adjusting the weights of these sub-networks in a fine-grained manner,\nand fine-tuning this PLM.\nOur empirical studies demonstrate that KLAFT significantly improves expressive captioning tasks by aligning and amplifying target knowledge, with the potential for Parameter-Efficient Fine-Tuning (PEFT) at low computational cost." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "PEFT", "PLM", "LLM", "VLM", "Multi-modal", "Image captioning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c1aef593ef14be15d6dce332c782c53eb47a4c65.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Knowledge Lift Alignment Fine Tuning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
uy31tqVuNo
Unbounded: A Generative Infinite Game of Character Life Simulation
main
Active
Text-to-Image Generation;Interactive Image Generation
applications to computer vision, audio, language, and other modalities
5;6
4;4
2;3
2;2
2;3
5.5
4
2.5
2
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How do the authors position their work in the field of technical game design? \n2. What do longer gameplay examples look like? \n3. How do the authors respond to my concerns around their \"Evaluation of LLM Generations\"?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The main strengths of the paper are (1) the originality of entirely basing a virtual pet game on LLMs, (2) the significance of the two primary technical contributions to figure coherent image generation and real time LLM application uses, and (3) the quality of the image portion of the evaluation/experiments. The paper is also overall well-written, especially in terms of the technical aspects. These primarily justify the positive aspects of my above scores." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors present \"UNBOUNDED\" a virtual pet game based on LLMs. The authors present two novel technical elements to support this game: a regional IP-Adapter for character-environment consistency and a domain-specific distillation approach for finetuning a smaller LLM to allow for real time gameplay. 
The authors then present experiments comparing their approach to baselines in terms of visual quality and consistency, and overall game quality as scored by GPT-4o." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper has two major weaknesses in its present draft: the way it positions itself in terms of prior work, and the evaluation. \n\n### Prior Work\n\nThe authors do not appear to have engaged with the field of technical games research at all. This is a shame, as their game can be understood as an AI-based game [1] or specifically a NN game [2]. More broadly, the authors' work in terms of generation would fit into the Procedural Content Generation paradigm [3]. There is a 3+ decades long history of work in this space that is relevant to the authors' work [4], with significant recent work at the intersection of LLMs and game generation [5], and it is crucial that the authors update the introduction and related work to situate their work within this field.\n\n### Evaluation\n\nThe authors primarily evaluate their work in terms of image quality and I have no major concern with these results (though the DreamSim results seem to incorrectly attribute the best score to their system). However, I have major concerns with the \"Evaluation of LLM Generations\", which is functionally the closest thing presented to a holistic evaluation of UNBOUNDED. Firstly, the norm within the area would be to make use of a user study for evaluation purposes [6,7]. Particularly since the authors make such an effort to allow for real time control, this would seem to follow naturally. Secondly, the evaluation is somewhat concerning due to the training process. Specifically, since the distillation approach makes use of GPT-4 to simulate user interaction data and the authors sample for uniqueness/variety, it's fairly likely that the newly sampled user interaction data approximations will be very similar to these.
Further, the largest model the authors compare to is again a GPT-4 variant. Given that GPT-4 is being used for both generation of data, as the system, and as the evaluation, there's a clear risk for bias. This could be addressed by any number of ways, including automated metrics for measuring game quality developed in prior game generation-like work [8,9]. But at present, it is difficult as an outside reader to get a sense of the actual quality of UNBOUNDED as a game. Failing this, it might be beneficial to include longer examples of gameplay in the supplementary materials at least.\n\n### Figures\n\nWhile these are not of major concern, I wanted to note two aspects of figures that I felt could be improved. First, the authors make use of a large number of fonts in their figures, and their primary font is a bit difficult to read. I'd suggest updating the figures to use a single, more legible font. Second, Figure 4 is largely superfluous. If the authors wish to retain it, it might be a better fit for an appendix.\n\n1. Treanor, M., Zook, A., Eladhari, M. P., Togelius, J., Smith, G., Cook, M., ... & Smith, A. (2015). AI-based game design patterns.\n2. Zhu, J., Villareale, J., Javvaji, N., Risi, S., Löwe, M., Weigelt, R., & Harteveld, C. (2021, May). Player-AI interaction: What neural network games reveal about AI as play. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-17).\n3. Shaker, N., Togelius, J., & Nelson, M. J. (2016). Procedural content generation in games.\n4. Pell, B. (1992). METAGAME: A new challenge for games and learning.\n5. Gallotta, R., Todd, G., Zammit, M., Earle, S., Liapis, A., Togelius, J., & Yannakakis, G. N. (2024). Large language models and games: A survey and roadmap. arXiv preprint arXiv:2402.18659.\n6. Anjum, A., Li, Y., Law, N., Charity, M., & Togelius, J. (2024, May). The Ink Splotch Effect: A case study on ChatGPT as a co-creative game designer. 
In Proceedings of the 19th International Conference on the Foundations of Digital Games (pp. 1-15).\n7. Guzdial, M., & Riedl, M. O. (2021). Conceptual game expansion. IEEE Transactions on Games, 14(1), 93-106.\n8. Khalifa, A., Green, M. C., Perez-Liebana, D., & Togelius, J. (2017, August). General video game rule generation. In 2017 IEEE Conference on Computational Intelligence and Games (CIG) (pp. 170-177). IEEE.\n9. Guzdial, M., & Riedl, M. (2018, September). Automated game design via conceptual expansion. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (Vol. 14, No. 1, pp. 31-37)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "To which extent are results from Figure 6 consistent with results from Table I?\nHow have you ensured the quality and fairness (for evaluation) of the text prompts sample?\nHow can the current approach maintain narrative consistency over a number of interactions?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This is an ambitious piece of work, tackling difficult technical issues in balancing generation and interaction, as well as various training and fine-tuning protocols. 
It appears to have innovated on a number of aspects, for instance the dynamic mask to balance character and environment generation. It gives some interesting, detailed insights, such as those of Figure 4. The improvement over IP-Adapter thus appears more than incremental. Another strong point is the Small LLM distillation framework.\nThe comparative LLM evaluation has staged an appropriate number of LLMs, capturing size and diversity. \nFinally, the paper demonstrates a good command of recent relevant work and compares its approach to really recent alternative methods (2023, 2024), which is welcome." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work combines various aspects of Generative AI to develop an LLM-based game engine. The design principles refer to open-world, continuous interaction games with limited simultaneous characters but significant opportunities for situation generation and interaction. Capitalizing upon recent research in the field, the system introduces a number of technical innovations both in terms of architecture (in the form of a dynamic regional prompt adapter) and in terms of implementation (distillation of Small LLM). With an existing implementation, the paper includes elements of comparative evaluation, both quantitative and qualitative." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "From a fundamental perspective, there is a lack of awareness of some game design issues. The introduction is relatively naïve considering how these issues are addressed in game/play theory (Caillois, Huizinga). With the fine line between simulation and entertainment, and notwithstanding the reference to The Sims and Tamagotchi, contemporary audiences tend to have gameplay expectations that can only be met with sustained and meaningful (in the narrative sense) interactions.
This feeds into the overall perspective of the evaluation methods, which in the paper are primarily focused on individual ‘event’ generation. While sustained UX evaluation may be at odds with the system’s current level of maturity, some of this could have been addressed through even slightly longer exchanges or chains of events construed as a minimal narrative experience (which is claimed in the abstract but does not really materialize in the examples). The small examples given, including in Figure 1, do not convey a very strong sense of gameplay. \n\nThere is no evidence that the number of user-simulator examples is at all sufficient for the intended purpose of multi-topic data collection. Nor is there any discussion of some hierarchical organization of topics that would come close to scenarios, game genres or elementary forms of plot backbones.\n\nDespite the statement on replicability, the level of implementation details is unclear. Implementation details are rather cursory and uneven, with some in-depth details but no systematic discussion, and minor duplications from other sections.\n\nEvaluation has both positive (see above) and negative aspects.\nUnless I am mistaken, Table I shows a couple of inconsistencies for results.\nOn Environment Consistency, how can 0.322 be the best score when it is in-between 0.381 and 0.257? How can 0.675 be the best score in-between 0.595 and 0.832?\nIt is difficult to find the Qualitative component of evaluation very convincing in the absence of a more systematic approach for generating test situations. From a similar perspective, the example Prompts do not contain advanced mechanisms for controlling response, or specific examples. One would thus wonder how LLM response can be kept consistent across runs, and what type of statistical sampling has been associated with LLM use?"
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024unbounded,\ntitle={Unbounded: A Generative Infinite Game of Character Life Simulation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=uy31tqVuNo},\nnote={under review}\n}" }, "abstract": { "value": "We introduce the concept of a generative infinite game, a video game that transcends the traditional boundaries of finite, hard-coded systems by using generative models. Inspired by James P. Carse's distinction between finite and infinite games, we leverage recent advances in generative AI to create Unbounded: a game of character life simulation that is fully encapsulated in generative models. Specifically, Unbounded draws inspiration from sandbox life simulations and allows you to interact with your autonomous virtual character in a virtual world by feeding, playing with and guiding it - with open-ended mechanics generated by an LLM, some of which can be emergent. In order to develop Unbounded, we propose technical innovations in both the LLM and visual generation domains. Specifically, we present: (1) a specialized, distilled large language model (LLM) that dynamically generates game mechanics, narratives, and character interactions in real-time, and (2) a new dynamic regional image prompt Adapter (IP-Adapter) for vision models that ensures consistent yet flexible visual generation of a character across multiple environments. We evaluate our system through both qualitative and quantitative analysis, showing significant improvements in character life simulation, user instruction following, narrative coherence, and visual consistency for both characters and the environments compared to traditional related approaches." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Text-to-Image Generation", "Interactive Image Generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9802d28b8f281f44f05ccd196684783a80a6e51a.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Unbounded: A Generative Infinite Game of Character Life Simulation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
uy4EavBEwl
Reconciling Model Multiplicity for Downstream Decision Making
main
Active
model multiplicity;multi-calibration;decision-making;uncertainty quantification
alignment, fairness, safety, privacy, and societal considerations
3;6;6;6
4;3;4;3
2;3;3;3
2;3;3;2
1;3;3;3
5.25
3.5
2.75
2.5
2.5
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- W.r.t. the finite sample analysis, is the proposed algorithm robust to noise in the samples?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- This paper studied an important and practical problem of model multiplicity. The paper overall is well-organized and presented with good clarity. \n- It is particularly helpful to have the illustrative example in Figure 1, which directly shows that it is insufficient to only update two predictive models so that they have improved squared loss and nearly agree on their individual predictions almost everywhere.\n- The theoretical guarantee shows that the new algorithm ReDCal provides improved accuracy and approximately agrees on the best-response action almost everywhere compared to the prior work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studied the problem of model multiplicity in downstream decision-making. 
In this setting, two predictive models of equivalent accuracy do not agree on their predictions for the downstream decision-making problem.\nThe paper proposed a new calibration framework which calibrates the predictive models with respect to both a finite set of downstream decision-making problems and the individual probability prediction. Further, the paper proposed an algorithm that first reconciles the differences in individual probability prediction, then calibrates the updated models so that they are indistinguishable from the true probability distribution to the decision-makers, together with its finite-sample analysis. Numerical experiments on two datasets demonstrate the effectiveness of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The experimental results with the HAM10000 dataset show substantially larger error bars and much less smooth convergence. It would be helpful to provide more details on these differences between the two sets of results.\n- The experiments only compared to one other baseline, proposed in (Roth et al 2023). How does the proposed algorithm compare to other related works on model multiplicity?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. 
One thing I didn't understand was if you are given two models at the end of ReDCal, how do you choose which model to use? Why is it important that the two models agree in the decisions they make if you already control the decision-loss gap?\n2. If you don't want to affect the decision-loss, it makes sense to me that you just wouldn't change the two predictors. Is that the case, and is it reflected in your algorithm if you choose the correct parameters?\n3. Why does the algorithm converge so fast in the numerics? The upper bound on the time steps seems to suggest the number of time steps that should be run before convergence should be large given the choices of $\beta$ and $\alpha$. Does this imply the bound is potentially very loose, and is it possible for it to be made tighter?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper highlights an important problem, that improvements to prediction models can hurt downstream decision-making since downstream decision-makers may have loss functions that do not necessarily align with prediction accuracy. The paper combines existing work in multi-calibration with work in model multiplicity to solve this problem. The algorithm proposed by the paper seems novel and provides what seem to be sensible theoretical guarantees that trade off between improvements to prediction accuracy and preserving decision-loss. This seems to be a relatively significant improvement to the Reconcile algorithm." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the Reconcile algorithm proposed in [1], which takes two prediction models and outputs two models that have better prediction accuracy, as quantified by the Brier score. 
They expand on the existing method by proposing a new reconcile algorithm that looks to improve prediction accuracy while also preserving the downstream decision-loss. They provide theoretical bounds that show their method trades off improvement to prediction accuracy at the cost of downstream decision-loss. They provide numerical results to show their algorithm helps reduce the loss gap after running their version of the Reconcile algorithm. \n\n\n\n-----\n[1] Aaron Roth, Alexander Tolbert, and Scott Weinstein. Reconciling individual probability forecasts. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’23, pp. 101–110, New York, NY, USA, 2023. Association for Computing Machinery. ISBN 9798400701924. doi: 10.1145/3593013.3593980. URL https://doi.org/10.1145/3593013.3593980." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper has weaknesses in its presentation as well as results that seem somewhat suspicious/hard to interpret precisely.\n\nThe following items could be addressed and improved for presentation:\n- In the paper's introduction, the authors mention calibration several times, but for someone not immediately familiar with the literature it's hard to understand what it is formally. It becomes a little better defined at Lemma 2.6, but having extra background or explanation in the introduction would be helpful in understanding the high-level concepts used to construct the paper's proposed algorithm. \n- The paper could provide better background on the Reconcile algorithm, which seems to be a key motivation for pursuing the work. There is no high-level description of how the algorithm works, so Theorem 2.7 for example feels impossible to verify. Additionally, the Reconcile algorithm reproduced in the appendix seems to be missing key elements, like defining $h$ and $\mu$. 
The former seems to be an important update step.\n- The paper does not define notation well. For example, $\mu$ and $A$ are not defined or are defined offhandedly, making it hard to decipher when they show up later in the text.\n- In Theorem 2.7 the distribution $\mathcal{D}$ is not defined, which makes the last expectation in the proof hard to verify. I could not immediately come up with a distribution where the expectation is 1/2. \n- Some notation seems to be incorrect or have a typo. For example, the Definition 2.4 of $E_{\ell,a}$ is an indicator outside a set. I don't think this is standard notation and is not precise. My guess is it's a typo.\n- Is the notation $\ell_a$ necessary? If so, how do you obtain $\ell_a$ given $\ell$?\n- The descriptions of Decision-Calibration, ReDCal, and Decision-Calibration + ReDCal are confusing to me. What exactly are the algorithms for the first and last of these? When you count the number of time steps for ReDCal, which loop are you counting? Does it include the loops in decision-calibration? \n\nThe following items can be addressed to help improve the results of the paper:\n\n**Questions related to Theorem 3.4**\n- In Theorem 3.4, does item 1. have a typo? \n- Define $A$, the number of actions, in the theorem statement for part 3., since it is off-handedly defined somewhere else in the paper. \n- How do I choose $\beta$, $\alpha$, and $\eta$ if I want to guarantee a certain decision-making loss level while optimizing the Brier scores? Managing the trade-off between prediction accuracy and decision-loss would help improve the impact of this theoretical result. Right now it seems hard since the term $T_i \beta$ seems hard to control.\n- The decision gap in the numerical results doesn't seem to correspond to the bound in 3. Shouldn't it be close to 0 if you choose $\beta = 0.00001$?" 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The paper currently appears mostly in good shape; my main small improvement suggestions are about the experimental section (at least by lightly tweaking existing experiments/adding another simple but conceptual one). \n\nIn addition, even though the writing of the paper is mostly quite good, it could still be improved in a few places. For instance, the subsection about handling more than 2 models is somewhat confusingly written: e.g. consider the sentence about exponentially many output predictors in the number of models k, followed by the sentence that union and Chernoff-Hoeffding bounds \"suggest\" that the sample complexity scales linearly in k --- a reader who may not have checked the details in the appendix may easily confuse this for saying that the method may not necessarily be scalable in k, even though it actually is. Also, the event collection setup in that case, which involves reconciling a single base model with every other model, is not described that well in prose --- so consider e.g. including a quick diagram which could e.g. have the base model with \"reconciliation arrows\" pointing to the other models, or something similar." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper is overall a good and novel contribution to the predictive multiplicity literature. Specifically:\n\n+ The contribution of this paper appears new in the model multiplicity literature: it gives a rigorous method for reconciliation of any two models with provable guarantees in terms of the resulting model loss (for which only one algorithm exists in the literature), while at the same time ensuring in a rigorous way that downstream decision making is not affected negatively (which is new); and it moreover extends this method to more than 2 models in a natural way. \n\n+ In fact, from what I can tell, it has an even broader scope than typically considered in the model multiplicity context, as it does not require the to-be-reconciled models to start off having similar or equal accuracy. When the to-be-reconciled models do have similar accuracy to begin with, then the paper's contribution intuitively appears to offer the interesting and thought-provoking semantics of: \"given two different models with similar performance, you can bypass the multiplicity issue by finding an even higher-performing model (which ensembles/combines the two initial models), with similar or better downstream decision making guarantees\"." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a new algorithm for reconciling two or more models in a supervised setting; the goal is to address the model multiplicity problem (which refers to the phenomenon that there may be > 1 model with similar accuracy but substantial disagreements among predictions) in a decision-aware way. 
Specifically, the paper shows how to perform model reconciliation in a way that (1) makes the models agree in the prediction space, (2) does not hurt (and possibly even improves) the squared loss of either model, and (3) does not hurt (and possibly even improves) the decision loss of a given downstream decision-maker. (The decision loss is an arbitrary linear loss that the decision maker uses to map predictions to actions.) In addition, the paper performs preliminary experiments showing that the method outperforms (with respect to decision loss) a prior reconciliation algorithm on some vision datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While overall I believe this to be a good-quality paper, there is the following (relatively non-major) consideration that I would call a weakness:\n\n- The paper currently appears written with a primarily theoretical audience in mind, but I think it could still do a better/more thorough job coming up with/describing experiments. It currently gives two semi-synthetic ones. In the first one, linear decision losses are generated in a Gaussian manner --- so that the two vision models are essentially being calibrated to realizations of random noise (granted, the experiment does illustrate the point that the new method achieves better decision-making properties than the old one, but the setup still sounds strange). In the second one, the decision loss is a \"loss function motivated by medical domain knowledge in Zhao et al (2021) and additional random noise\" --- and in this case, the paper should at the very least clearly state what the loss function in Zhao et al (2021) is, and perhaps provide more principled perturbations of it than Gaussian noise ones.\n\nNeither of these experiments, however, connects in any way to the examples given earlier in the paper on how disregarding decision loss can lead to its deterioration during the reconciliation procedure. 
These examples are in some sense prototypical of how reconciliation could hurt downstream decision making; and given how easy it is to come up with a synthetic task/models + synthetic loss clearly showcasing them (i.e. making it so that some significant mass of items will flip across the decision boundary as a result of the reconciliation), I would like to urge the authors to do just that, as it will make the paper more logically coherent." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see my questions above. Additionally: \n\n1) How does the framework perform across different domains or datasets that exhibit varying levels of complexity and noise?\n2) As mentioned, determining optimal ranges and conditions for key hyperparameters across varying contexts can be challenging. \n3) Although the paper addresses computational efficiency, the scalability of the algorithm for very large datasets remains an open question. How will the computational complexity grow with increased data size?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper presents a novel framework to address the issue of model multiplicity in predictive modeling for decision-making, using multi-calibration techniques. It identifies specific hyperparameters, such as loss margin and decision-calibration tolerance, as the driving parameters of key results. In practice, these parameters will need to be adjusted to mediate trade-offs between model fairness and computational efficiency. The authors provide both theoretical insights and empirical validations of their approach. The paper makes a reasonable contribution by integrating theoretical multi-calibration concepts into empirical settings and testing them across multiple datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper provides a framework using multi-calibration to reconcile model multiplicity in downstream decisions. It seeks to develop a framework to address the inherent discrepancies between predictive models, which often result in varying decision recommendations despite equivalent accuracy levels. The authors propose an algorithm that aligns predictive models with decision-making, improving agreement on best-response actions and losses. Empirical validation shows enhancements in downstream decision performance and in addressing prediction disagreements, relevant to multi-calibration advancements. The proposed approach improves the consistency and utility of models in decision-based applications, including cases where only empirical data is available." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Comments/Questions to Authors: \n•\tThe initial concept of multi-calibration was introduced by Hebert-Johnson et al. 
in 2017, which focused on ensuring fairness across overlapping subpopulations. Recent advancements include the extension of multi-calibration to game dynamics and multi-objective learning – for example, work by Nika Haghtalab and Eric Zhao, which utilizes game dynamics to connect fairness and optimization. Another line of work by Roth et al. deals with a wide array of issues, from online multicalibrated learning to omnipredictors. While the article does a good job at comparing and showing differences with respect to the work of Roth, a discussion of the conceptual difference with respect to the work of Haghtalab et al. would be much appreciated. \n\n•\tThe algorithm’s performance depends on several hyperparameters (such as loss margin and decision-calibration tolerance). It seems techniques to fine-tune parameters such as loss margin and decision-calibration tolerance are crucial for achieving the results and trade-offs that are important in practice (e.g., between prediction fairness and accuracy). These are critical for adapting multi-calibration models to robustly function across heterogeneous data distributions and complex decision environments; the paper should clearly discuss how the choice of these hyperparameters impacts the performance of the proposed algorithm. I am also curious about the third result of Theorem 3.2 (remark 3.3). How does the impact of this result show up in practice/experiments? I believe a major limitation of the paper is the lack of explicit focus on the loss margin and decision-calibration tolerance hyperparameters.\n\n•\tThe paper could better motivate the empirical tests conducted and discuss how they confirm the theoretical claims and show improvements in decision-making outcomes and model agreement. Avenues for future work on validating the proposed algorithm's effectiveness in real-world data scenarios should be discussed. \n\n•\tAdditional literature – please clarify the relationships / novelty of this paper with respect to the work of these authors. 
I would be especially interested in knowing the relevance of this paper with issues of fairness-accuracy tradeoff. \n\n“Inference for an Algorithmic Fairness-Accuracy Frontier,” Yiqi Liu and Francesca Molinari (2024). This work provides a consistent estimator for a fairness-accuracy frontier. Method for testing fairness-related hypotheses in algorithmic decision-making seems relevant. \n\n“Fair Representation: Guaranteeing Approximate Multiple Group Fairness for Unknown Tasks,” Xudong Shen and Yongkang Wong and Mohan S. Kankanhalli, IEEE Transactions on Pattern Analysis and Machine Intelligence (2021). Explores approximate fairness using fair representation across multiple fairness notions. \n\n“Multigroup Robustness,” Lunjia Hu and Charlotte Peale and Judy Hanwen Shen (2024). This work establishes a connection between multigroup fairness and robustness and discusses robustness algorithms tailored for subpopulations under data corruption.\n\n“Inherent Trade-Offs in the Fair Determination of Risk Scores,” J. Kleinberg and S. Mullainathan and Manish Raghavan (2016). Shows inherent trade-offs in algorithmic fairness in risk scores and formalizes three fairness conditions and proves their incompatibility in most cases.\n\n“Multi-group Agnostic PAC Learnability,” G. Rothblum and G. Yona, International Conference on Machine Learning (2021). Provides a framework for multi-group agnostic PAC learning.\n\nAlso important is work by Michael P. Kim et al. on Multiaccuracy: Black-Box Post-Processing for Fairness in Classification (2018): It introduces a foundational framework for ensuring fairness in classification, with a focus on multi-group fairness using multiaccuracy post-processing techniques. This paper sets a critical baseline for future research on fairness in predictions." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024reconciling,\ntitle={Reconciling Model Multiplicity for Downstream Decision Making},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=uy4EavBEwl},\nnote={under review}\n}" }, "abstract": { "value": "We consider the problem of \\emph{model multiplicity} in downstream decision-making, a setting where two predictive models of equivalent accuracy cannot agree on what action to take for a downstream decision-making problem. Prior work attempts to address model multiplicity by resolving prediction disagreement between models. However, we show that even when the two predictive models approximately agree on their individual predictions almost everywhere, these models can lead the downstream decision-maker to take actions with substantially higher losses. We address this issue by proposing a framework that \\emph{calibrates} the predictive models with respect to both a finite set of downstream decision-making problems and the individual probability prediction. Specifically, leveraging tools from multi-calibration, we provide an algorithm that, at each time-step, first reconciles the differences in individual probability prediction, then calibrates the updated models such that they are indistinguishable from the true probability distribution to the decision-makers. We extend our results to the setting where one does not have direct access to the true probability distribution and instead relies on a set of i.i.d data to be the empirical distribution. Furthermore, we generalize our results to the settings where one has more than two predictive models and an infinitely large downstream action set. Finally, we provide a set of experiments to evaluate our methods empirically. 
Compared to existing work, our proposed algorithm creates a pair of predictive models with improved downstream decision-making losses and agrees on their best-response actions almost everywhere." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "model multiplicity", "multi-calibration", "decision-making", "uncertainty quantification" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/aac0d0609216b2203552241e99ed33485a586a95.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Reconciling Model Multiplicity for Downstream Decision Making" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
uy9oR0nYCW
Toward Robust Real-World Audio Deepfake Detection: Closing the Explainability Gap
main
Active
self-supervised learning;explainability;deepfake audio;generalizability
interpretability and explainable AI
1;1;3;5
5;4;4;4
1;2;2;3
1;1;1;2
3;1;2;3
2.5
4.25
2
1.25
2.25
-0.522233
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please see the weakness listed above." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Authors provide an analysis of explainable and interpretable methods for audio deepfake detection. The authors compare these methods on two different benchmark datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper analyses the existing explainable methods, such as Occlusion and Attention visualization, for deepfake audio detection tasks. The authors show results using three baseline models: AST, GBDT, and Wav2Vec. The authors evaluate these methods on two existing datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper is merely an analysis paper of existing explainable methods. There are no significant novel contributions to this paper. The authors mention in line 052 that the contributions are \"Empirical evaluations of novel explainability methods for audio transformers\". However, I cannot see what is novel about attention visualization for transformers. It has appeared many times in the literature. 
\n- When authors compare explainable methods for audio deepfake detection, they must show the results on existing approaches such as Shapley [1]. \n- The authors should include more explainable mechanisms as baselines, such as [2], [3].\n- The authors should show results on multilingual deepfake audio datasets such as MLAAD, DECRO, and WaveFake since AST and Wav2vec are pre-trained on the AudioSet dataset, primarily English. Evaluating a multilingual dataset would help strengthen the analysis.\n- Authors show results on the ASVspoof5 dataset. The authors mention that the dataset was released in June 2024 (line 750). However, the dataset is still not available for public use and review. Authors should show the results on the publicly available datasets.\n- The FakeAVCeleb dataset is primarily an audio-video deepfake dataset, not only for audio deepfake detection. The FakeAVCeleb dataset contains 500 real videos, which means there should be 500 real audio samples. However, authors in line 7716 mention 9712 real audio samples. Why is there a difference in the numbers?\n- Even the baseline models such as GBDT, Wav2Vec and AST are general audio architectures, not specifically designed for audio deepfake detection. 
Authors must show results on models such as AASIST, RawGAT-ST and some state space models such as RawBMamba.\n- More than 50% of the paper merely explains trivial material that is not a contribution of the paper.\n\n\n\n[1] Explaining deep learning models for spoofing and deepfake detection with SHapley Additive exPlanations\n[2] Listen to Interpret: Post-hoc Interpretability for Audio Networks with NMF\n[3] Focal Modulation Networks for Interpretable Sound Classification" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. It is recommended to incorporate a wider variety of audio forgery datasets to validate the model's generalization capability across different forgery techniques, aligning more closely with the diversity encountered in real-world scenarios.\n\n2. While interpretability is essential, it would be beneficial to analyze how interpretability can contribute to designing better models or evaluating existing ones. Adding such analysis could enhance the paper's contribution by illustrating the practical impact of interpretability on model improvement and assessment." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. 
The paper introduces an interpretability framework for the audio domain, incorporating interpretability techniques from visual and natural language processing into audio Deepfake detection, providing a clearer interpretative path for the model's black-box decision-making process.\n\n2. Two interpretability methods (attention visualization and occlusion techniques) are systematically explored to assess the interpretability of Transformer-based audio detection models, with a comparison of each method's strengths and weaknesses.\n\n3. Cross-validation using the ASVspoof and FakeAVCeleb datasets demonstrates the model's generalization capability across varying data distributions, simulating data shift scenarios common in real-world applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the limitations of existing audio Deepfake detection models in terms of generalization ability and interpretability in real-world scenarios. It proposes a novel interpretability approach and establishes a new benchmark to assess model generalization performance. The study trains models on the ASVspoof dataset and evaluates them on the FakeAVCeleb dataset, demonstrating the superior performance of Transformer-based audio detection models on unseen data. Additionally, the paper introduces attention visualization and occlusion techniques to enhance model interpretability, aiming to bridge the gap between model performance and interpretability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The author mentions three limitations in the Limitation section, of which the latter two could serve as directions for future work. 
However, the first limitation needs to be addressed in the current phase of the task: relying solely on the ASVspoof and FakeAVCeleb datasets may not cover the full range of audio deepfake techniques encountered in real-world scenarios, presenting a dataset limitation. A wider variety of datasets and scenarios is needed to demonstrate the robustness and completeness of the current approach.\n\n2. The introduced attention visualization and occlusion techniques are computationally intensive, potentially affecting the efficiency of practical deployment. Further analysis and comparison of computational costs would be beneficial.\n\n3. The paper’s contribution is somewhat limited, as occlusion and attention visualization are commonly used techniques in computer vision and natural language processing. While adapting these methods for audio Deepfake detection is interesting, the lack of specific modifications tailored to the characteristics of audio forgery detection tasks reduces the overall impact of the proposed approach. Simple method transfer limits the originality of the contribution." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. 
How are these attention roll-out and image occlusion-based analyses aiding explainability specific to audio deepfake analysis, and how does this contribution differ from the contributions of attention roll-out and image occlusion-based analysis methods in image feature explainability?\n\n2. Are the normalized token attention plots taken as an average of multiple audio samples or the entire dataset? How do these token attention plots vary with different samples? Further information is needed on the normalized token attention plots. \n\n3. It is stated that “..we can pinpoint specific frames that were instrumental in the classification and inspect them more closely…. and we observe that influential tokens typically appear in groups.” This is one of the primary focuses of the paper, and further analysis should be done to elaborate on these findings.\n\n4. What is the significance of training on ASVspoof5 and inferring on FakeAVCeleb over existing benchmarks, beyond the test of generalizability? What specific challenges might inferring on FakeAVCeleb bring that training on ASVspoof5 would not cover?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The authors have analyzed the information available from Attention Rollouts and Image Occlusion methods. They also noted that some very short frames from the audio signal representation are influential to transformers in classification and that these frames typically appear in groups, which, if further explored, may potentially lead to better interpretability of audio deepfake classifications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a novel explainability framework for audio transformers in audio deepfake classification. 
It proposes utilizing image occlusion to detect feature importance and attention roll-out to understand features better. It also open-sources a novel benchmark for detecting audio deepfakes in real-world cases, which consists of training on ASVspoof5 dataset and testing on FakeAVCeleb dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "As the authors themselves have pointed out, attention roll-out and image occlusion-based analysis have been in existence for quite some time, but the novelty of the proposed work lies in applying them in spectrograms to aid in the explainability of audio deepfake analysis. However, how these attention roll-out and image occlusion-based analyses are aiding explainability specific to the audio deepfake analysis is not adequately explained, and how their contribution differs from already existing contributions of attention roll-out and image occlusion-based analysis methods in image feature explainability remains unclear. \n\nThey have utilized the occlusion method in an attempt to explain how the model is reaching these decisions, but as they themselves pointed out, it was not helpful in explaining the model’s decision-making. They have also used an attention visualization method and stated that they can attribute specific frames that were instrumental to classification. However, using attention visualization to attribute where a transformer model is putting importance is not novel, and their analysis does not show enough contribution specific to explaining the decision process in audio deepfake classification in transformers. \n\nThey have proposed a new benchmark, which consisted of training on one existing dataset and testing on another. They have not provided adequate explanations as to how their novel benchmark would be more helpful in audio deepfake classification, and the idea of testing on a new dataset itself is not particularly novel." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What is the technical novelty contribution of the paper's usage of existing explainability methods? Were any changes needed to adapt these existing methods for audio classification? Why not introduce modifications that can take advantage of features specific to audio modality?\n2. The result of the explainability methods didn't seem helpful for explaining model behavior. Can you provide more insights regarding this? For example, a concrete deepfake audio sample and its corresponding outputs for explainability and your analysis that will help reason about model behavior? Also, it would be helpful if a scenario is provided where the techniques are useful in practice and have a big impact on our understanding of a model's capabilities.\n3. Why is the benchmark considered a novel contribution? This kind of evaluation is very common in deep learning models." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. The paper is very well written, the concepts are easily understood and the limitations of the work are discussed as well.\n2. 
The subject of explainability in audio deepfake detection is a significant and timely problem, especially as deepfake generation technology is readily accessible to the public and has a high potential for misuse. Research into this subject is very important due to the impact on society this technology can have.\n3. The paper makes a distinction between interpretability and explainability and proposes that explainable methods should provide interpretable explanations that are sample-specific, time-specific, and feature-specific. This robust definition of explainability would ensure greater applicability of these methods to interpret black-box models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper focuses on the limitations of explainability in current audio deepfake detection methods and provides three-fold contributions to the subject. Firstly, the paper proposes a conceptual explainability framework emphasizing sample-specific, time-specific, and feature-specific explanations interpretable by humans. Secondly, the paper provides empirical evaluations of novel explainability methods for transformer-based detection models, using occlusion and attention visualization techniques. The occlusion method masks out sections of a mel-spectrogram to reveal which parts are most important for final classification. The attention visualization technique and roll-out method are utilized to get a distribution of attention across all layers and can be used to point out parts of the input sequence most relevant for the final classification. Finally, the paper also introduces a novel framework to evaluate the generalizability of deepfake classifiers, where models are trained on one dataset but evaluated on another dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
Both the proposed methods for audio explainability, occlusion and attention visualization, are existing methods that are very common in the literature, especially for vision and language tasks. This is also recognized by the paper, which mentioned that their contribution is porting the methods to the audio task. But to do that, the paper didn't make any modality-specific changes to the methods or edit the methods in any way. For the occlusion method, it's well-known that mel-spectrograms can be treated as images, and the paper applied the traditional method to them directly. The attention-visualization method is modality-agnostic and is based on the transformer architecture, which takes tokens as input. The roll-out attention method was introduced for natural language tokens and the paper claims to adapt the method for audio tokens, but it's unclear what modifications they made beyond replacing the tokens. So the novelty introduced by the paper in these methods is questionable.\n2. The paper provided the results of their methods in Figure 4 and Figure 5. First of all, the paper understands that the result in Figure 4 doesn't help explain the model's decision-making (line 420) and instead is suggestive of transformer model behavior. Second of all, the result in Figure 5 is also not very helpful in human interpretation. When these methods are applied to a visual or textual domain, a human can more easily interpret the segment and decision-making rationale. This suggests that more work needs to be done in the audio domain to make the results more human-interpretable. These observations are also recognized by the paper in their limitation section. So the usability of these methods is questionable.\n3. The paper claims to introduce a novel benchmark to evaluate the generalization capabilities of deepfake audio classifiers, where they train the model on the ASVspoof5 dataset and evaluate it on the FakeAVCeleb dataset. 
Here the only contribution of the benchmark is training on one dataset and evaluating on another dataset. This mechanism is already well-known and well-practiced to show the generalization capability of a model to out-of-domain datasets. Here, there is no contribution to the dataset, and no new evaluation metric is introduced, nor are any other changes proposed. Thus, it's questionable to consider this benchmark as a contribution. The abstract also claims to \"open-source a novel benchmark for real-world generalizability\". The question is what is there to \"open-source\" here, as the datasets are already available." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Novel explainability methods and a generalizability benchmark for deepfake audio detection." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024toward,\ntitle={Toward Robust Real-World Audio Deepfake Detection: Closing the Explainability Gap},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=uy9oR0nYCW},\nnote={under review}\n}" }, "abstract": { "value": "The rapid proliferation of AI-manipulated or generated audio deepfakes poses serious challenges to media integrity and election security. Current AI-driven detection solutions lack explainability and underperform in real-world settings. In this paper, we introduce novel explainability methods for state-of-the-art transformer-based audio deepfake detectors and open-source a novel benchmark for real-world generalizability. By narrowing the explainability gap between transformer-based audio deepfake detectors and traditional methods, our results not only build trust with human experts, but also pave the way for unlocking the potential of citizen intelligence to overcome the scalability issue in audio deepfake detection." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "self-supervised learning", "explainability", "deepfake audio", "generalizability" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e7538d6cb10fefbb335617742b271ff0af53c441.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/6f3b568cf9e13f170898a344e43cdcfff98b28b1.zip" }, "title": { "value": "Toward Robust Real-World Audio Deepfake Detection: Closing the Explainability Gap" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
uyzkKPvVyS
Geometry-aware Score Distillation via 3D Consistent Noising and Gradients
main
Active
Diffusion Models;Score Distillation Sampling;Text-to-3D Generation
applications to computer vision, audio, language, and other modalities
3;5;5;6
4;4;4;3
2;2;2;3
2;2;2;3
1;3;2;3
4.75
3.75
2.25
2.25
2.25
-0.662266
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Adding an algorithm highlighting the differences from the original SDS would help readers better understand the optimization process.\n\nSince the paper claims to introduce minimal additional computational cost compared to SDS, more details regarding this cost should be included.\n\nIn addition to SDS-like optimization methods, there is another line of text-to-3D generation approaches that directly train on large-scale 3D datasets to enable feedforward generation, such as [1, 2]. Although these approaches are not directly comparable, clarifying this distinction in the related work section would be helpful.\n\nConsidering similar efforts to restrict the noise sampling space, a noise recalibration scheme was introduced in [3]. A discussion on the similarities and differences is recommended.\n\nref:\n\n [1] LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation.\n [2] 3DTopia: Large Text-to-3D Generation Model with Hybrid Diffusion Priors.\n [3] Diverse and Stable 2D Diffusion Guided Text to 3D Generation with Noise Recalibration." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper addresses an important issue - multi-view inconsistency in SDS. 
\nThe method is well-motivated, and the design is intuitively sound.\nThe proposed method is shown to work well with various SDS variants, including 3DFuse, GaussianDreamer and ProlificDreamer.\n\nThe presentation is clear.\nGood visualizations in Fig. 3, Fig. 4 and Fig. 5." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on optimization-based text-to-3D generation. The authors propose a geometry-aware score distillation method to address the multi-view consistency issue in SDS. \nSpecifically, the authors propose to use 3D consistent noising by warping noises of nearby views with the current 3D modelling parameters. Based on this, a correspondence-aware gradient consistency loss is introduced to encourage the multiview consistency of SDS gradients between nearby viewpoints." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "No video demos were shown to validate the generated 3D results.\nSome of the generated results still appear oversmoothed and unrealistic (Fig. 6, Fig. 11)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see the weaknesses section."
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Efficient Incorporation of 3D Consistency: The paper directly adjusts the gradients of Score Distillation Sampling (SDS) without requiring 3D data to fine-tune a 2D diffusion model. This allows for the incorporation of 3D consistency with minimal computational overhead.\n\n2. Significant Improvement over Baselines: The proposed method shows noticeable enhancements in both 3D consistency and appearance details when compared to baseline models such as GaussianDreamer, ProlificDreamer, and MVDream." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a method called Geometry-aware Score Distillation. The experimental results demonstrate that the proposed approach can improve performance and address geometric inconsistencies in SDS-based text-to-3D generation tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Dependency on Initial Point Cloud: The method requires establishing correspondences between different views, necessitating an initial point cloud. The paper primarily compares against Gaussian splatting-based methods like GaussianDreamer. According to previous literature [1], when shape initialization is used as a constraint, the Janus problem is substantially mitigated. Therefore, the effectiveness of the proposed method requires more rigorous justification.\n\n2. Comparative Evaluation Concerns: In comparing their method with ProlificDreamer, the authors use 3D-Fuse—originally designed to address the Janus problem—as a baseline. This choice potentially diminishes the perceived effectiveness of the proposed method. 
Additionally, as observed in Figure 7, the method does not significantly enhance 3D consistency compared to ProlificDreamer; it primarily reduces floating artifacts in the background. Similar issues have been discussed in prior work [2] and can be addressed by adjusting the Classifier-Free Guidance (CFG) in diffusion models. Thus, further experimental evidence is needed to substantiate the authors' claim of enhanced 3D consistency.\n\n[1] Chen, Cheng, et al. \"Sculpt3d: Multi-view consistent text-to-3d generation with sparse 3d prior.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n\n[2] Yang, Xiaofeng, et al. \"Learn to Optimize Denoising Scores: A Unified and Improved Diffusion Prior for 3D Generation.\" European Conference on Computer Vision. Springer, Cham, 2025." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N.A." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "My major concern is the applicability of the proposed method in latent diffusion models. Please refer to the weaknesses outlined above." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is well-organized, easy to follow, and clearly presented.\n2. 
The proposed method offers a fresh perspective on the multi-face Janus problem of the SDS loss, particularly from the standpoint of random noise sampling.\n3. Although the solution is an extension of existing work, it is still novel in the 3D generation field." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to address the well-known Janus problem of the SDS loss. Specifically, it proposes modifying the noise sampling process of the SDS loss by replacing random noise with 3D-consistent noise. The 3D-consistent noise is generated by extending the integral noise method of Chang et al. (2024) using an intermediate point cloud representation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. My primary concern lies in the use of latent diffusion models, as all experiments in the paper are conducted on latent diffusion models. While I do not doubt that the proposed method could be effective on pixel-space diffusion models, the entire analysis may not be applicable to latent diffusion models.\n\n- **First,** 3D-consistent noise in latent space does NOT equate to consistent gradients in pixel space. Imagine a simple case where the latent map is shifted to the right or left by a few pixels—the VAE would not decode the same image with the same shift in pixel space. In the case of SDS combined with latent diffusion, the gradient at the latent space needs to pass through a VAE encoder composed of multiple convolutional layers and activation functions. As a result, simply keeping the gradient of a portion of the latent map unchanged does not ensure that the corresponding image regions will also receive an unchanged gradient. \n\n- **Second,** this limitation has already been highlighted in the original work of Chang et al. (2024) (see their Appendix E). The authors of the integral noise method in Chang et al. 
(2024) noted that their method does NOT perform well in latent diffusion models and provided three reasons for this. Could the authors clarify why integral noise fails in the temporal domain but succeeds in the 3D case using LDM?\n\n2. The paper does not address the limitations of the proposed method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. I recommend including a pseudo-algorithm to enhance comprehension. Is the noise regenerated after each batch of gradient updates? Does the computation of correlation occur solely within the same batch?\n\n2. Are noise particles that are not visible considered in the computation? If so, why not focus on visible particles, aligning more closely with the computation in $\int$-noise [1]?\n\n3. There are missing experimental details as noted in weakness 3. How many prompts did you experiment with?\n\n[1] How I Warped Your Noise: a Temporally-Correlated Noise Prior for Diffusion Models (https://warpyournoise.github.io/)" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The proposed noising method establishes a correlation between camera views. The noising method can generate i.i.d. Gaussian noise correctly while maintaining correlation properties.\n\n2. 
The introduction of the *correspondence-aware gradient consistency loss* serves as a regularization technique to mitigate geometry artifacts." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a method to enhance SDS-based 3D generation through 3D consistent noising and geometry-aware gradient warping. The authors present an algorithm that can warp Gaussian noise across different camera views while preserving the Gaussian property for each camera view. The 3D consistent noising aims to improve gradient consistency, while the geometry-aware gradient warping seeks to reduce geometric artifacts." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The motivation behind *3D consistent noising* lacks clarity. Although the authors reference its similarity to $\\int$-noise [1] for improving texture consistency in video generation, the discussion on its application in 3D generation is insufficient. This omission raises questions about how 3D consistent noising can enhance 3D generation specifically. The introduction of 3D consistent noising appears unrelated to the 3D score interpretation of SJC, raising concerns about its relevance and application within this context. Thus, I suggest the authors provide a more detailed explanation of how 3D consistent noising specifically addresses challenges in 3D generation.\n\n2. The noising method is confined to the point cloud (3D GS) representation, limiting its applicability to other forms such as NeRF or Mesh. Additionally, the algorithm appears to overlook occlusion; for instance, a Gaussian particle not visible from the current camera view may still influence noise computation, resulting in questionable correlations compared to surface warp (optical flow) as in $\\int$-noise [1].\n\n3. The experimental evaluation is weak, with qualitative results from both the baselines and proposed methods showing low quality (Fig.
6, 7, 11). The results strongly underperform SoTA implementations like ProlificDreamer (VSD) and LucidDreamer (ISM), undermining the validity of the user study. Furthermore, this paper lacks quantitative metrics. For the CLIP score in Fig. 13, details on the number of prompts and seeds used in the experiments are missing, and more thorough experimentation (with at least 25 prompts) is needed.\n\n4. The *correspondence-aware gradient consistency loss* remains difficult to grasp despite the explanations in Sec. 4.4. The rationale behind why sharp changes are likely artifacts needs clarification. I suggest the authors provide a more intuitive explanation of why sharp changes are likely to be artifacts. Additionally, the ablation study in Fig. 8 fails to effectively illustrate its impact; more results with varied seeds or prompts would strengthen the argument." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024geometryaware,\ntitle={Geometry-aware Score Distillation via 3D Consistent Noising and Gradients},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=uyzkKPvVyS},\nnote={under review}\n}" }, "abstract": { "value": "Score distillation sampling (SDS), the methodology in which the score from pretrained 2D diffusion models is distilled into 3D representation, has recently brought significant advancements in text-to-3D generation. However, this approach is still confronted with critical geometric inconsistency problems such as the ``Janus problem''. We provide a novel insight into this problem, hypothesizing that the incorporation of 3D awareness into the 3D noising process and gradient distillation process may bring about enhanced consistency between gradients, leading to improved fidelity and geometric consistency.
To achieve this, we propose a simple yet effective approach for achieving a 3D-consistent, geometry-aware noising process, leveraging the advantages that 3D Gaussian Splatting possesses as an explicit 3D representation. Combined with our geometry-based gradient warping and our novel gradient dissimilarity loss, we demonstrate that our method significantly improves performance by addressing geometric inconsistency problems in text-to-3D generation with minimal computational cost, while remaining compatible with existing score distillation-based models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Diffusion Models", "Score Distillation Sampling", "Text-to-3D Generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d2e3237d70da157f5de95b5e86a8e41cc390ad5a.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Geometry-aware Score Distillation via 3D Consistent Noising and Gradients" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
uz4QiNHB16
FLAIR: A Foundation Model for Grapheme Recognition in Ancient Scripts with Few-Shot Learning
main
Active
Foundation Model;Few-Shot Learning;Prototypical Networks;Encoder Network;Indus Valley Civilization Script;Omniglot Dataset
foundation or frontier models, including LLMs
3;3;3;5
2;5;5;4
1;2;1;3
1;1;1;2
1;2;1;2
3.5
4
1.75
1.25
1.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "Thank you for your detailed review and for pointing out critical areas of improvement. Your feedback is highly valuable in refining our manuscript. Below, we respond to your comments and outline the revisions we will implement:\n\n1. Details on the IVC Dataset: We acknowledge that the current manuscript lacks sufficient detail about the IVC dataset. In response, we will provide a comprehensive description of the dataset's source, content, and annotation process. This will include details about how the dataset was curated from Parpola’s CISI volumes and Mahadevan’s seminal work, and how manual annotation was performed to ensure data quality.\n\n2. Functionality and Illustration of the ProtoSegment Model: We understand that the description of the ProtoSegment model may not have been sufficiently detailed. To address this, we will include a more thorough explanation of the model’s functionality, emphasizing how the integration of a segmentation encoder contributes to enhanced feature extraction. \n\n3. Sloppy Text and Language Quality: We appreciate your feedback on the language and writing quality. We will conduct a comprehensive revision of the manuscript to improve the clarity, coherence, and academic quality of the text. Specific instances, such as the noted section on page 3 (lines 147-148), will be rephrased for better readability.\n\n4. Relevance of References: We will carefully review all references to ensure they contribute meaningfully to the content.\n\n5. Ethical Concerns Regarding Acknowledgment Section: Thank you for highlighting the potential ethical issue regarding the acknowledgment section.
We will modify this section to comply with ICLR’s double-blind submission policy by removing or anonymizing any grant details to avoid revealing author identity.\n\nResponses to Specific Questions:\n\nReal Contribution of the Model: The key contribution of ProtoSegment lies in its unique integration of a segmentation encoder within the Prototypical Networks framework. This allows for more refined feature extraction and segmentation of input images, particularly benefiting tasks with limited data. We will expand the discussion on this novelty to clearly differentiate it from existing methods.\n\nProcurement of the IVC Dataset: The IVC dataset was curated from sources such as Parpola’s CISI volumes and Mahadevan’s \"The Indus Script: Texts, Concordance and Tables.\"" }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "Thank you for your detailed review and constructive feedback on our paper. We greatly appreciate your thoughtful comments, which provide valuable insights for enhancing the quality and impact of our work. Below, we address your points and outline our planned revisions:\n\n1.
Scope and Relevance of the Topic: We understand that grapheme recognition in ancient scripts may appear niche, potentially limiting its perceived influence in broader OCR fields. We aim to show how techniques designed for complex, data-limited environments can contribute to advancements in general OCR and machine learning methods.\n\n2. Contribution and Novelty of the Model: The ProtoSegment model does build upon existing frameworks, but its key contribution lies in integrating a segmentation encoder that focuses on enhancing feature extraction for highly detailed and complex input images, such as those from ancient scripts. This innovation is tailored to handle the challenges posed by limited data, making it more than a simple application of existing methods. We will revise the methodology section to clarify and highlight this unique aspect.\n\n3. Justification for CNN Architecture: The use of a basic CNN architecture was a deliberate choice to balance computational efficiency with performance, particularly in the context of few-shot learning where simplicity can lead to better generalization on small datasets. We will provide further justification for this choice and discuss its comparative advantages in low-data scenarios.\n\n4. Clarification of Figure 1: We acknowledge that Figure 1 may be unclear. We will enhance this figure by improving its visual quality and adding a more detailed caption that explains each component and its role in the model architecture.\n\n5. Public Release of the Dataset: We understand the importance of dataset accessibility for reproducibility and wider academic impact. We plan to release the annotated dataset upon acceptance of the paper to ensure transparency and facilitate further research in this field. \n\n6. Experimental Validation and Ablation Studies: Your point regarding the lack of ablation studies is well-taken.
We will include an ablation study that dissects the contributions of individual components, such as the segmentation encoder and the CNN backbone, to provide empirical validation of their impact on performance." }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "Thank you for your detailed review and for providing valuable feedback on our paper. We appreciate your time and insights, which will help us enhance the quality of our manuscript. Below, we address your comments and outline how we will revise the paper:\n\n1. Clarifying Dataset Details: We acknowledge your concern about the lack of clarity regarding the dataset. To address this, we will add more comprehensive information about the creation, annotation, and characteristics of our custom Indus Valley Civilization (IVC) dataset. This will include details on the sources used, annotation tools and processes, and class distribution to provide transparency and better understanding.\n\n2. Novelty of Network Architecture: We understand that the novelty of the proposed ProtoSegment model may not have been sufficiently highlighted. 
While ProtoSegment builds on the existing Prototypical Networks framework, its key innovation lies in the integration of a segmentation encoder to enhance feature extraction in cases of limited data. This segmentation step allows the model to isolate and focus on meaningful visual features within each region, facilitating better discrimination between graphemes with subtle differences. Unlike traditional Prototypical Networks, which process entire images or use simpler feature extraction methods, ProtoSegment leverages this tailored segmentation to enhance its feature maps before prototype computation. We will revise the method section to more clearly articulate this novelty and its impact on the grapheme recognition task.\n\n3. Paper Blindness Compliance: We appreciate your observation regarding the paper's anonymity. We will ensure that any potentially identifying information is removed or anonymized to comply with double-blind review standards.\n\n4. Writing Edits and Flow Improvements: To improve readability, we will:\n\n- Edit the paper to ensure a smoother flow of content, with better transitions between sections.\n- Enhance the explanation linking Figure 4 to the relevant text, making the connection clearer and easier to follow.\n- Revisit the entire paper to standardize abbreviation definitions, ensuring terms like \"CNN\" are defined only once and referenced consistently thereafter.\n\n5. Reference Formatting and Citation Issues: We will review and correct the formatting of references throughout the manuscript to ensure they are distinct from the main text and properly cited. \n\n6. Response to the Question on Dataset Annotation: To answer your question on dataset annotation: the dataset was annotated manually. The authors examined each image and, with feedback from experts, marked individual graphemes according to predefined classification criteria. The annotations followed Mahadevan's taxonomy of labels for the IVC dataset."
}, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "Thank you for your thorough review and valuable feedback on our submission. We appreciate your time and constructive comments, which will guide us in improving our manuscript. Below, we address each of your points in detail:\n\n1. Writing and Presentation Issues: We acknowledge that the manuscript contains issues related to incorrect symbols and formulas, confusing citation formats, and the use of lower-resolution figures. To address this:\n\n- We will revise all mathematical notations to ensure they are correct and properly defined within the text.\n- Citation formats will be standardized and checked to prevent blending with the main content, enhancing readability.\n- We will replace all blurry figures with higher-resolution images saved in vector format (PDF), ensuring clarity.\n\n2. Clarification of the Segmentation Encoder's Role: You raised an important question regarding the role of the segmentation encoder. The encoder is indeed a convolution-based encoder-decoder designed to isolate individual graphemes within complex input images. 
This isolation helps the model focus on relevant features and contributes to improved recognition performance. We will expand the explanation in Section 3.2 to highlight this functionality and its significance in ProtoSegment.\n\n3. Explanation of MobileNet's Role: We appreciate your observation regarding the lack of clarity about MobileNet’s role in Figure 1. MobileNet is integrated as a component of the initial digitization step to pre-process and refine input character images before they are processed by ProtoSegment. This connection will be elaborated on to provide better context.\n\n4. Explanation of K-way and N-shot Terms: We will add an explanation in the tables and main text to define K-way (the number of classes) and N-shot (the number of examples per class) for readers who may not be familiar with few-shot learning terminology.\n\n5. Model Training and Experimental Clarification: To address the confusion regarding Figure 4, where the backward pass is shown pointing to the support and query samples:\n\n- We will revise the figure and description to accurately reflect the training flow and clarify that the support and query samples serve as inputs during the training episodes but are not updated as part of the backward pass.\n\n6. Analysis of Experimental Results: We understand your request for an explanation of why the 20-way results were worse than the 5-way. This discrepancy is primarily due to the increased difficulty of classifying among more classes, which impacts accuracy. We will provide a more detailed analysis in the experimental results section to discuss this phenomenon and its implications."
}, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "The authors have provided details on the grant which funded their research in an acknowledgment section; this could reveal their identity. Hence I believe this violates ICLR submission policy." }, "flag_for_ethics_review": { "value": [ "Yes, Other reasons (please specify below)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "It seems the method described in this paper is just an off-the-shelf algorithm - could you specify what the real contribution is?\n\nWhere was this IVC dataset procured from? \n\nWhat was the reason for putting the details of the grant in the acknowledgement section? This is completely against ICLR submission policy, as it might reveal the authors' identity."
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The only positive aspect of this article is the topic." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents \"FLAIR\", a foundational model designed for the grapheme recognition of the Indus Valley script, an ancient undeciphered writing system. Recognizing the limited availability of labeled data, the authors leverage few-shot learning (FSL) through prototypical networks enhanced with a custom segmentation encoder called ProtoSegment." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(i) No details on the IVC dataset. \n\n(ii) The functionality of the ProtoSegment model is not properly illustrated, and hence the key contribution (if any at all) cannot be perceived. This part should have been aided with more illustrative diagrams. \n\n(iii) Sloppy text - for example, on page 3, lines 147-148. \n\n(iv) Extremely poor language and sentence formation. \n\n(v) Irrelevant references, included just to fill up the paper - for example, line 44 on page 1." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "As shown in Weaknesses."
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.\tFLAIR fills a critical gap in ancient script recognition, providing a versatile model not previously available in OCR or grapheme recognition.\n2.\tProtoSegment outperforms existing few-shot and deep learning methods, achieving higher accuracy in grapheme classification tasks across both datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces FLAIR, a few-shot learning model for recognizing graphemes from the undeciphered script of the Indus Valley Civilization (IVC). Utilizing prototypical networks and a specialized encoder, FLAIR excels at digitizing IVC seal graphemes, outperforming traditional methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThe paper focuses on grapheme recognition in ancient scripts, which is a niche topic and represents a small subfield of OCR. This has limited influence on the broader scholarly field, which makes this paper unsuitable for a top-tier conference like ICLR. Additionally, general OCR methods might also perform well on this dataset.\n2.\tThe proposed method largely relies on existing approaches (CNN backbone + Classifier head), merely applying the framework to your dataset. This raises concerns about the contribution and innovation of this work.\n3.\tThe method employs a very basic CNN architecture for the classification task, which seems outdated in the current era of large models. Moreover, referring to it as a \"foundational model\" appears somewhat exaggerated.\n4.\tFigure 1 is also quite unclear.\n5.\tFurthermore, will this paper release the dataset publicly?
If not, the lack of innovation in your method significantly diminishes the paper's contribution to the academic community.\n6.\tThe experimental section lacks ablation studies to validate the components of your proposed method.\n7.\tAs shown in Table 1, the accuracy of your method and other state-of-the-art approaches has reached over 98%, even approaching 99%. In such cases of minimal improvement, it is difficult to determine whether the results stem from experimental variability or the enhancements offered by your method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "* How was the dataset annotated?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "* Applying deep learning techniques to scripts could be very beneficial to archaeologists and linguists to help study many undeciphered scripts. This could also be applied to other ancient scripts beyond those used in the Indus Valley Civilization." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper describes a method, FLAIR, to classify graphemes in ancient Indus Valley Civilization scripts.
FLAIR adopts a few-shot learning approach to circumvent small datasets by introducing the ProtoSegment model designed for images of graphemes. The authors evaluated their technique on an existing dataset, Omniglot, as well as their custom Indus Valley Civilization scripts dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* Details about the dataset are not clear.\n* There is limited novelty in the proposed network architecture\n\n* The paper is not blinded\n* The writing of the paper needs edits. e.g.,\n * The flow of the paper is hard to follow. \n * It wasn’t easy to link Figure 4 to the text describing it\n * References are not correctly added throughout the paper\n * Abbreviations are not defined carefully (e.g., convolutional neural network -> CNN was defined 3 times)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "This paper does not involve any ethics issues." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. In the paper, all figures use JPG or PNG format. The drawn images should be saved in PDF format before being inserted into the paper.\n2. In section 3.2 PROTOSEGMENT MODEL, there are a lot of errors in mathematical symbols and formulas. Some symbols appear out of thin air without explanation, which is not conducive to readers' understanding.\n3. What role does the Deep Learning: MobileNet component mentioned in Figure 1 play in the entire task?
The article does not explain it clearly.\n4. In Tables 1 and 2, it should be explained clearly what K-way and N-shot refer to, as this may be confusing to readers who are not familiar with Prototypical Learning.\n5. There are errors in the reference citation format in the paper, and the content of the references is mixed with the main text, which is not conducive to the reader's reading experience.\n6. Why does the Backward Pass in Figure 4 point to the Support Sample and Query Sample from the network? Aren't these samples extracted from the dataset, and thus not updatable?\n7. In Table 1, why is the result of 20-way worse than that of 5-way? I hope to see an explanation of this experimental phenomenon." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper uses meta-learning and few-shot learning to perform classification and recognition tasks on a small sample of Indus Valley Civilization script. The article improves Prototypical Networks and proposes ProtoSegment, which achieves state-of-the-art performance on the IVC Dataset. The findings of this paper may contribute to new discoveries and interpretations of ancient texts of the Indus Valley Civilization." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The article proposes FLAIR, which uses a few-shot learning method to recognize individual characters from a limited set of Indus scripts. FLAIR uses prototypical networks and ProtoSegment to extract complex features in grapheme images to achieve recognition of Indus script. FLAIR was pre-trained on the Omniglot dataset and then transferred to the recognition and classification tasks of the IVC dataset, achieving state-of-the-art results.
FLAIR's ability to perform efficient feature extraction from small samples and its potential adaptability to unseen symbols make it a powerful tool not only to digitize and analyze ancient scripts but also to potentially aid in their decipherment." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper has major writing problems. For example, there are a large number of incorrect symbols and formulas in the text, the citation format of references in the text is incorrect and confusing, the pictures in the text are blurry, and the model training process is not clearly explained, which will cause great confusion for readers who are not familiar with Prototypical Networks. In terms of innovation, although the paper proposes a relatively novel task, there are few improvements to the methods used. The paper only adds a segmentation encoder to the original Prototypical Networks. If I understand correctly, the segmentation encoder should be a convolution-based encoder-decoder, but I don’t understand what specific role this network can play and why it can segment images into individual graphemes. In the experimental part, the article lacks qualitative analysis of the experimental results. Why is there such a result? What caused the difference in the experimental results? What conclusions can we draw from the experimental results? I think these should be added to the paper."
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "Updating prototypical networks and applying few-shot learning for Indus Valley script recognition with Omniglot dataset testing and validation" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024flair,\ntitle={{FLAIR}: A Foundation Model for Grapheme Recognition in Ancient Scripts with Few-Shot Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=uz4QiNHB16},\nnote={under review}\n}" }, "abstract": { "value": "The Indus Valley Civilization (IVC) left behind an undeciphered script, posing a significant challenge to archaeologists and linguists. This paper introduces FLAIR, a few-shot learning approach that aims to establish a foundational model for recognizing and identifying individual graphemes from the limited available Indus script. As a foundational model, FLAIR is designed to be versatile, supporting multiple potential applications in script recognition and beyond. It leverages prototypical networks combined with a modified proposed encoder network for segmentation, ProtoSegment to extract intricate features from the grapheme images. We evaluate FLAIR’s ability to generalize from minimal data using IVC grapheme classification tasks and further experiment with pre-trained Omniglot models for fine-tuning. Additionally, we simulate real-world data scarcity by intentionally restricting training data on the Omniglot dataset. Our experiments demonstrate FLAIR’s accuracy in digitizing and recognizing Indus Valley seal graphemes, outperforming traditional machine learning classification approaches. These results underscore FLAIR's potential not only for the digitization of ancient scripts with limited labeled datasets but also for broader applications where data is scarce. 
FLAIR’s success in grapheme recognition highlights its promise as a foundational model capable of extending to other undeciphered writing systems, thereby contributing to the integration of classic scientific tools and data-driven approaches." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Foundation Model", "Few-Shot Learning", "Prototypical Networks", "Encoder Network", "Indus Valley Civilization Script", "Omniglot Dataset" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/5c9a2ba9199b47060126dbfc66e13b3aa3d9d84a.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "FLAIR: A Foundation Model for Grapheme Recognition in Ancient Scripts with Few-Shot Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
uzKG83YJ3t
BroadWay: Boost Your Text-to-Video Generation Model in a Training-free Way
main
Withdraw
generative models;text-to-video generation;video quality enhancement
generative models
Jiazi Bu;Pengyang Ling;Pan Zhang;Tong Wu;Xiaoyi Dong;Yuhang Zang;Yuhang Cao;Dahua Lin;Jiaqi Wang
~Jiazi_Bu1;~Pengyang_Ling1;~Pan_Zhang1;~Tong_Wu2;~Xiaoyi_Dong1;~Yuhang_Zang1;~Yuhang_Cao3;~Dahua_Lin1;~Jiaqi_Wang1
3;3;5;6;6
5;5;5;4;5
2;1;3;3;3
2;2;3;3;3
3;3;3;3;4
4.6
4.8
2.4
2.6
3.2
-0.516047
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "Thanks for your comments, we decide to make a thorough revision and resubmit this manuscript." }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses part." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The proposed method offers a practical advantage by boosting T2V model performance without retraining, making it suitable for integrating with existing models.\n\n2. The TSG and FME components are well-designed to tackle specific weaknesses in T2V generation, providing both structural coherence and dynamic motion in synthesized videos.\n\n3. The authors validate their methods on multiple backbones like AnimateDiff and VideoCrafter2 illustrating its generalizability across various models, including potential applications in image-to-video tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a training-free, plug-and-play approach called BroadWay to enhance text-to-video (T2V) generation models. The proposed method is designed to address common challenges including structural artifacts, temporal inconsistencies, and limited motion dynamics in T2V models, by manipulating temporal attention maps. The two main components are Temporal Self-Guidance (TSG), which improves structural plausibility and consistency by reducing disparities between attention maps across decoder blocks, and Fourier-based Motion Enhancement (FME), which amplifies motion by increasing the energy in the attention maps through frequency manipulation. Experimental results demonstrate BroadWay's effectiveness across multiple T2V backbones, showing substantial improvements in video quality, structural consistency, and motion richness without additional training or memory cost." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
The hyper-parameters, especially \\alpha (for TSG) and \\beta (for FME), may need to be manually tuned for different backbones, reducing the plug-and-play convenience for some users. For example, the \\beta parameter varies greatly for different T2V base models (1.5 for AnimateDiff and 10.0 for VideoCrafter2). Is there any guidance for this? The author should provide a sensitivity analysis or guidelines for selecting these parameters across different backbones. This would help users more easily apply the method to various models.\n\n2. All the T2V backbones are based on the U-Net structure with an interleaved spatial-temporal attention module. What about models with DiT, as well as full 3D attention? It is encouraged to discuss the potential of the proposed method when applied to models with different architectures like DiT or full 3D attention.\n\n3. It is highly encouraged to use VBench to evaluate the proposed method since this metric could evaluate the jitter and motion magnitude of the videos.\n\n4. UNet should be U-Net." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "How is the hyper-parameter \\tau decided? How does it affect the performance of the method?"
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The analysis of the temporal attention map is interesting and provides a lot of insights.\n- The paper reads well and is easy to follow.\n- The ideas of both components are interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposed a training-free method for improving both temporal consistency and motion intensity of video diffusion models. The first component is temporal self-guidance. The analysis shows that disparities between temporal attention maps across different blocks are related to the structure coherence of generated videos. To create videos with better structure coherence and temporal consistency, the temporal attention map of the first upsampling block is added to the attention maps of subsequent blocks. The second component is frequency spectrum re-weighting. The attention maps are decomposed into a high-frequency part and a low-frequency part. To increase the motion intensity, the high-frequency part will be multiplied by a scalar greater than 1." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- There are only ~10 example video results for each model in the supplemental material.\n- For different models (AnimateDiff, VideoCrafter2), the default hyper-parameters are quite different. Users might need heavy manual tuning for these hyper-parameters.\n- Ablation experiments of different values of alpha and beta are missing.\n- VBench is a standard video generation benchmark, but the paper doesn't use the metrics from VBench." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Will the method be extended to DiT-based video generation.\n- Several relevant training-free enhancement methods, such as Freeu and VideoElevator, also adopted Fourier-based Enhancement. More discussion and comparison is suggested to clarify their difference.\n- Albeit training-free is convenient for use, training usually is beneficial to performance. Some discussion on this issue is preferred." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "+ A training-free method BroadWay to improve the quality of text-to-video generation. \n+ Temporal Self-Guidance and Fourier-based Motion Enhancement. \n+ Effectiveness on several existing video generation models such as AnimateDiff and VideoCrafter2." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presented a training-free method BroadWay to improve the quality of text-to-video generation. BroadWay involves two major components, Temporal Self-Guidance and Fourier-based Motion Enhancement. Experiments show its effectiveness on several existing video generation models such as AnimateDiff and VideoCrafter2." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Will the method be extended to DiT-based video generation.\n- Several relevant training-free enhancement methods, such as Freeu and VideoElevator, also adopted Fourier-based Enhancement. More discussion and comparison is suggested to clarify their difference.\n- Albeit training-free is convenient for use, training usually is beneficial to performance. Some discussion on this issue is preferred." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In conclusion, considering the weaknesses mentioned above, this paper cannot be considered a well-prepared version for ICLR. Therefore, I lean towards rejecting this manuscript." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The authors demonstrate the effectiveness of BroadWay on various T2V backbones and I2V tasks, showing its versatility and potential for widespread adoption.\n2. This paper is well-organized, and the authors provide a clear explanation of the proposed method, making it easy to follow." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a training-free method called BroadWay to improve the quality of text-to-video (T2V) generation models without introducing additional parameters, memory, or sampling time. BroadWay consists of two components: Temporal Self-Guidance and Fourier-based Motion Enhancement. The former improves structural plausibility and temporal consistency by reducing the disparity between temporal attention maps across decoder blocks. The latter enhances motion magnitude and richness by scaling the high-frequency components of the temporal attention maps. The authors demonstrate the effectiveness of BroadWay on various T2V backbones, including AnimateDiff and VideoCrafter2, and show that it can also be applied to image-to-video (I2V) tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tFirstly, this paper focuses on methods that do not require training to improve the performance of video diffusion models, which has been explored by existing works, including FreeInit [1], UniCtrl [2], and I4VGen [3]. However, these works are not adequately discussed in this paper. In fact, [1] also explores the motion degree of generated videos from the perspective of high and low frequency decoupling, and [2] studies the attention layer of video diffusion models. However, the omission of these works means that this paper cannot be considered a well-prepared version for ICLR. This also affects the evaluation of the novelty of this paper.\n2.\tExperiments. In fact, there are many benchmarks focused on video generation, such as T2V-CompBench [4] and VBench [5], which provide reliable evaluation metrics. However, this paper hardly provides quantitative evaluation results. For the evaluation of the motion degree of generated videos, the \"Dynamic Degree\" in VBench provides a reference.\n3.\tAblation experiments. 
Providing only visualization results in Figure 10 is not convincing, and more results, including quantitative results, are necessary. Moreover, in line 215, the authors should provide more discussion on the selection of up_blocks.1.\n4. In the limitations section, the authors also mention that the proposed method is parameter-sensitive, and it would be better to provide more experimental results on this issue.\n5. I have doubts about the results in Table 2(b). For AnimateDiff, when LoRA is not introduced, the generated videos are temporally chaotic, corresponding to a higher motion degree. Therefore, FreeInit and others introduce Realistic Vision V5.1 LoRA as the video baseline, which is not mentioned in this paper. Therefore, the authors need to provide more explanations for Table 2.\n\n[1] FreeInit: Bridging Initialization Gap in Video Diffusion Models, Wu et al., ECCV 2024\n[2] UniCtrl: Improving the Spatiotemporal Consistency of Text-to-Video Diffusion Models via Training-Free Unified Attention Control, Chen et al., arXiv 2024\n[3] I4VGen: Image as Free Stepping Stone for Text-to-Video Generation, Guo et al., arXiv 2024\n[4] T2V-CompBench: A Comprehensive Benchmark for Compositional Text-to-video Generation, Sun et al., arXiv 2024\n[5] VBench: Comprehensive Benchmark Suite for Video Generative Models, Huang et al., CVPR 2024" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No ethics review needed." }, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. I am concerned about the reasoning and justification for the proposed method.\n a. In L83, the authors claim temporal inconsistency and motion artifacts are related to temporal attention map disparity. However, why such disparity lead to artifacts? In Sec 4.2, the authors also modify and amplify several frequency components, why such operation not lead to disparity and degrades results?\n b. In L87~89, the meaning of `energy` is not defined and unclear. In the second row of Fig. 2, the background region slightly translates, but shows a large response similar to foreground cat, why is that? Is this `rich motion`?\n c. In L210, why to choose up blocks.1 as anchor? Does temporal attention maps in different blocks share similar meaning and can be computed as in Equ (3)? \n d. In Sec 4.2, why the authors choose to modify in frequency space? The artifacts or motion inconsitency do not necessary happen in frequency domain.\n e. In Equ (5), A is a 3D tensor, it is not clear how to do the fourier transform. Does that mean a 1D FFT along the last axis?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper tries to solve an important question of motion quality in video generation. The authors propose a new method to modify temporal attention maps using self-guidance and frequency spectrum re-weighting. The method itself seems reasonable and interesting. \n2. The results shown in the paper seem promising and superior to baseline model.\n3. The paper writing is clear and very easy to follow." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces BroadWay, a training-free method to improve T2V video quality, especially structural plausibility, temporal and consistency. BroadWay consists of two components: Temporal Self-Guidance, which enhances structural plausibility and temporal consistency by reducing disparities in temporal attention maps, and Fourier-based Motion Enhancement, which amplifies motion by increasing the energy in the maps. Experiments and results seem promising." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. My primary concern lies with the reasoning and justification for the proposed method. Many of the descriptions and claims in the paper appear to be rather ad hoc, lacking detailed analysis or theoretical proof to substantiate their validity.\n\n2. The authors clearly described their method. However, it is not very clear for the reason and hyper-parameter choice for each step.\n I think there must be some explanation under these steps, and it would be much better if more discussion and explanations are provided.\n\n3. The results in the main paper seem promising. However, the videos in the supplementary material seem to be slightly better, especially in consitency and motion magnitude, but still contain artifacts. I wonder maybe the proposed temporal attention map modifications are not essential to this problem?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "BroadWay provides a training-free and plug-and-play option to enhance the overall quality of current T2V backbones." 
}, "_bibtex": { "value": "@misc{\nbu2024broadway,\ntitle={BroadWay: Boost Your Text-to-Video Generation Model in a Training-free Way},\nauthor={Jiazi Bu and Pengyang Ling and Pan Zhang and Tong Wu and Xiaoyi Dong and Yuhang Zang and Yuhang Cao and Dahua Lin and Jiaqi Wang},\nyear={2024},\nurl={https://openreview.net/forum?id=uzKG83YJ3t}\n}" }, "abstract": { "value": "The text-to-video (T2V) generation models, offering convenient visual creation, have recently garnered increasing attention. Despite their substantial potential, the generated videos may present artifacts, including structural implausibility, temporal inconsistency, and a lack of motion, often resulting in near-static video. In this work, we have identified a correlation between the disparity of temporal attention maps across different blocks and the occurrence of temporal inconsistencies. Additionally, we have observed that the energy contained within the temporal attention maps is directly related to the magnitude of motion amplitude in the generated videos. Based on these observations, we present BroadWay, a training-free method to improve the quality of text-to-video generation without introducing additional parameters, augmenting memory or sampling time. Specifically, BroadWay is composed of two principal components: 1) Temporal Self-Guidance improves the structural plausibility and temporal consistency of generated videos by reducing the disparity between the temporal attention maps across various decoder blocks. 2) Fourier-based Motion Enhancement enhances the magnitude and richness of motion by amplifying the energy of the map. Extensive experiments demonstrate that BroadWay significantly improves the quality of text-to-video generation with negligible additional cost." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": { "value": [ "~Jiazi_Bu1", "~Pengyang_Ling1", "~Pan_Zhang1", "~Tong_Wu2", "~Xiaoyi_Dong1", "~Yuhang_Zang1", "~Yuhang_Cao3", "~Dahua_Lin1", "~Jiaqi_Wang1" ] }, "authors": { "value": [ "Jiazi Bu", "Pengyang Ling", "Pan Zhang", "Tong Wu", "Xiaoyi Dong", "Yuhang Zang", "Yuhang Cao", "Dahua Lin", "Jiaqi Wang" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "generative models", "text-to-video generation", "video quality enhancement" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": { "value": "bu|broadway_boost_your_texttovideo_generation_model_in_a_trainingfree_way" }, "pdf": { "value": "/pdf/5c556dd169d39408a8439dd58e08a5983703161d.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/c66068bfdbd36567c6250eae8c42f33c35fccb3b.zip" }, "title": { "value": "BroadWay: Boost Your Text-to-Video Generation Model in a Training-free Way" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
uzz3qAYy0D
VideoShield: Regulating Diffusion-based Video Generation Models via Watermarking
main
Active
video;watermarking;tamper localization
alignment, fairness, safety, privacy, and societal considerations
3;3;3;6;8;8
5;4;5;4;4;5
2;2;2;3;3;3
2;2;2;3;4;3
3;2;1;3;3;3
5.166667
4.5
2.5
2.666667
2.5
-0.220564
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1) Results from Table 1 shows that the proposed method is only marginally better than RivaGAN w/o real images conditioning. Did you check RivaGAN with the same setup as the last row? \n2) Why do you compare only to SVD in Table 2? \n3) The authors appeal to decreased visual quality of videos generated with the existing watermarking methods, but did not organise subjective study to show that their method outperforms the others. Video quality metrics are known to have limited capabilities to estimate AIGC. Subjective comparison is required in this work, can you provide one?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1) The proposed method does not require additional training, which could simplify integration with existing diffusion models.\n2) The framework’s ability to detect tampering both within frames and across the sequence" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the need for the control of integrity and misuse of AI Generated Content. The usual approach is to use watermarks for video domain, but they are underdeveloped and have a post-processing manner, which results in video quality degradation. 
The authors propose a novel way of watermarking videos during generation (VideoShield). Their method is training-free and works with diffusion-based video models. The authors extract the watermark from the videos using DDIM inversion." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) Notation T_temp for a threshold in tampering is a bit confusing, considering T_p is a tensor.\n2) No introduction to the ChaCha20 cipher is given\n3) No formula for calculating video quality is given \n4) Figure 2 provides only abbreviations in the legend\n5) Throughout the paper, formulas and tables seem to have a decreased font size. Moreover, some tables overlap the text (e.g., Table 1). This may be a reason for a possible desk rejection." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Can the author provide more comparative details on how VideoShield compares to other watermarking frameworks or post-processing methods in terms of tamper localization accuracy and robustness?\n2. Can the author share the computational performance of VideoShield, especially regarding the running time for different video resolutions or models?\n3. Will the author consider testing the robustness of VideoShield under other types of distortion, such as extreme video compression, frame rate variations, or color adjustments?
These additional distortions are common in real-world applications and can enhance confidence in VideoShield's resilience.\n4. The results indicate that VideoShield has a certain dependence on video quality. Does the performance of VideoShield still vary due to resolution or model complexity? If so, can the author provide more insights or data on these factors?\n5. Can VideoShield adapt to image tampering localization?\n6. Is the watermark capacity of 6 512 bits suitable for all video resolutions, or is the number of bits that can be embedded flexible based on the characteristics of the video?\n7. Placing visual content in the main text seems better." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper introduces an innovative approach to watermarking in diffusion-based video generation models by embedding watermarks during the generation process, which diverges from traditional post-processing methods. Compared to image watermarking, this is a less explored field. The originality of embedding watermarks during the generation process, as well as the novel dual tampering localization, is a meaningful supplement to this field.\n2. The technical methods are rigorous and fully described. This paper uses DDIM inversion to provide a solid foundation for watermark embedding and extraction without affecting video quality. The extensive experimental evaluation of multiple T2V and I2V models strongly demonstrates this method's robustness, flexibility, and effectiveness. The author's detailed analysis of different watermark extraction and localization scenarios further strengthens the contribution of this article to this field.\n3. This paper is well-organized and provides a clear understanding of motivation, methods, and experimental setup.\n4. 
VideoShield's training-free and high-fidelity watermarking method provides a reliable and efficient solution for watermarking generated videos. This paper addresses a highly relevant issue: ensuring the integrity of content in videos generated by artificial intelligence." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper \"VideoShield: Regulating Diffusion-Based Video Generation Models via Watermarking\" presents VideoShield, a novel framework for embedding watermarks in diffusion-based video generation models, including both text-to-video (T2V) and image-to-video (I2V) models. The paper addresses the need for content control and authenticity verification in AI-generated video content, focusing on integrating watermarks during the video generation process to prevent quality degradation. Key contributions include In-Generation Watermark Embedding, Tamper Localization for Videos, and Watermark Extraction and Robustness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Although this article introduces relevant watermarking and tampering localization methods, there is no comparative analysis with other video generation or post-processing watermarking methods. For example, the evaluation could include baselines for temporal tampering localization or alternative methods for assessing robustness to specific types of distortion.\n2. VideoShield is positioned as a training-free and efficient framework, but this paper lacks specific comparisons with baselines regarding time/computational complexity, such as runtime.\n3. Although these experiments covered various distortions, they did not explore broader real-world attack scenarios, including testing with denser video compression techniques, color distortion, or frame rate changes, which could further validate the robustness of VideoShield in various real-world environments. 
In addition, analyzing the performance of VideoShield under adversarial conditions where attackers actively attempt to bypass watermarks would be an interesting extension.\n4. The author can further discuss the limitations by reporting the performance of VideoShield under different video quality and generation settings. This method seems to perform better on higher-quality video output, but further discussion of factors that may affect watermark integrity, such as video resolution or content complexity, would further delineate the effectiveness boundary of VideoShield.\n5. This article briefly introduces the adaptability of image watermarking, but can VideoShield adapt to image tampering localization scenarios?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "NA" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This paper is well-written.\n- This paper proposes a novel scheme for proactive video forensics." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces VIDEOSHIELD, a novel watermarking framework for diffusion-based video generation models. 
By embedding the watermark directly during the generation process, VIDEOSHIELD eliminates the need for additional training, providing a cost-effective solution for safeguarding generated videos. Additionally, the model includes a proactive tamper detection mechanism. Using template bits derived from the watermark, the authors enable both temporal and spatial localization of tampering within the video." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThe proposed model may not entirely align with watermarking application scenarios. In its context, verification requires the recipient to compare the watermarked video against the original video's bit template. This requires the verifier to either have access to the original video to generate the bit template using the same encryption method, or to obtain the bit template directly from the video creator. Such requirements differ from standard watermark extraction practices, where verifiers typically can extract watermark information without needing the original video.\n\n2.\tThe model uses the CHACHA20 stream cipher to generate template bits, requiring the algorithm to use the same key with different nonces for each encryption. Does this mean that a unique random number needs to be set for each video? If so, how does the method handle the storage of a large number of video-to-key mappings? Additionally, it’s unclear whether the primary goal of this approach is to protect the model itself or the generated videos.\n\n3.\tThe proposed method uses watermarking for proactive tamper detection; however, the baseline experiments in the paper compare it primarily with passive detection methods (e.g., MVSS-Net, HiFi-Net), which may not be entirely appropriate. It would be more suitable to compare the approach with other active tamper detection methods. Below are references to such methods for consideration:\n[1] Zhang X, Li R, Yu J, et al. 
Editguard: Versatile image watermarking for tamper localization and copyright protection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 11964-11974.\n[2] Zhou Y, Ying Q, Wang Y, et al. Robust watermarking for video forgery detection with improved imperceptibility and robustness[C]//2022 IEEE 24th International Workshop on Multimedia Signal Processing (MMSP). IEEE, 2022: 1-6.\n\n4.\tThere are formatting issues: the bottom line of Table 1 overlaps with the text below, and the font size in Table 3 is too small, making it difficult for readers to follow." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The framework looks heavy, how about the computational overload? \nThe relationship with \"Gaussian Shading\" needs to be further clarified." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The topic is interesting. Using DDIM Inversion, the video can be reversed to its original watermarked noise. The paper is written and organized well. Experiments are plenty and convincing. The ablation study verified the contribution of each design towards the whole framework." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a VideoShield, which embeds watermarks directly during video generation. It maps watermark bits to template bits, which are then used to generate watermarked noise during the denoising process. Template bits allow precise detection for potential spatial and temporal modification. Extensive experiments demonstrate its effectiveness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "No obvious drawbacks are found. However, typesetting needs to be improved, such as keeping the same or similar text size of the table as the text, no overlapping between table and text." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Address the weakness, especially the novelty issue." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper introduces a Gaussian Shading watermark embedding method that does not require training, thereby avoiding additional computational overhead.\n2. 
Unlike previous methods that focus solely on spatial tampering localization, VIDEOSHIELD takes into account both temporal and spatial tampering localization during the tampering detection process. The paper introduces Hierarchical Spatial-Temporal Refinement, which enables the generation of more detailed masks for tampering localization, significantly improving the accuracy of the detection." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The watermark can actively identify generated content and is currently the primary method used for detection and attribution in diffusion models. However, there has been no watermarking method specifically addressing video generation models based on diffusion models. This paper proposes VIDEOSHIELD to fill this research gap. VIDEOSHIELD introduces a Gaussian Shading watermark embedding method that does not require additional training. In addition to its basic watermarking functionality, VIDEOSHIELD also acts as a fragile watermark, enabling both temporal (across frames) and spatial (within individual frames) tampering localization. Furthermore, it introduces Hierarchical Spatial-Temporal Refinement to enhance accuracy. Experimental results demonstrate that VIDEOSHIELD performs effectively on both T2V and I2V diffusion models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The innovation presented in this paper is moderate. It extends Gaussian Shading from images to videos and further explores its potential as a fragile watermark. \n2. The presentation of the text has significant issues, particularly in Section 3.4, \"TAMPER LOCALIZATION.\" In this section, the extensive use of symbols due to the inclusion of formulas creates obstacles to understanding the paper's content. 
The frequent reuse of subscripts p and q makes reading quite difficult; the matrix subscript notation is inconsistent, such as in C_(p(q=M(p))), which we recommend changing to C_(p,M(p)). Additionally, the comparison bits matrix Cmp could be written as CMP to standardize with other matrix symbols and to avoid confusion with the subscript p. In line 334, the letter m is used both for the variable and for the watermark information, which may lead to confusion. After reviewing the main text, I found that in Supplementary Material A3.1, the authors provide a simple example of the process. I strongly recommend including Figure 7 in the main text to aid reader comprehension. Furthermore, there is some overlap between Table 1 and the Datasets section.\n3. In my view, the authors have not addressed an important issue regarding the sequence of watermark attribution and tampering localization. The paper does not demonstrate the accuracy of the watermark after spatial tampering. If we assume that a video undergoes spatial tampering and the tampered areas are detected through localization, but the watermark attribution fails, we cannot conclude that a third party maliciously altered the generated video. This raises the question of whether the watermark has become ineffective in such a scenario." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- In Table 1 and other tables, the metric \"Video quality\" is mentioned. How is it specifically calculated? The authors mentioned it is the average of a series of metrics; if possible, could you provide the specific values for each metric?\n- Currently, the VideoShield method can hide 512 bits. What is the upper limit for watermark capacity?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- VideoShield leverages the unique properties of active watermarking, innovatively combining the model watermarking task with the tampering localization task, demonstrating significant performance advantages.\n- This method requires no training and follows a zero-shot paradigm, making it easy to reproduce and validate its performance.\n- This method is based on the properties of diffusion and Gaussian distribution, allowing it to be generalized to other diffusion-based models, such as text-to-image models, exhibiting good generalization capabilities." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This article proposes a zero-shot video generation model with active watermarking for video copyright protection and tampering localization. Based on the DDIM inversion process, the watermark information hidden in the initial Gaussian noise can be detected again from the generation results, and this active watermark is robust against video tampering, image degradation, and other attacks. 
Furthermore, for the task of video tampering localization, the authors have designed a detection method that localizes tampering in both temporal (frame position) and spatial (image information) aspects. Experiments have proven the performance advantages of VideoShield." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The threshold and hyperparameter settings in the method section are mostly the result of manual searches. The authors may consider exploring how these thresholds could be adaptively searched in the future." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024videoshield,\ntitle={VideoShield: Regulating Diffusion-based Video Generation Models via Watermarking},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=uzz3qAYy0D},\nnote={under review}\n}" }, "abstract": { "value": "Artificial Intelligence Generated Content (AIGC) has advanced significantly, particularly with the development of video generation models such as text-to-video (T2V) models and image-to-video (I2V) models. However, like other AIGC types, video generation requires robust content control. A common approach is to embed watermarks, but most research has focused on images, with limited attention given to videos. Traditional methods, which embed watermarks frame-by-frame in a post-processing manner, often degrade video quality. In this paper, we propose VideoShield, a novel watermarking framework specifically designed for popular diffusion-based video generation models. Unlike post-processing methods, VideoShield embeds watermarks directly during video generation, eliminating the need for additional training. 
To ensure video integrity, we introduce a tamper localization feature that can detect changes both temporally (across frames) and spatially (within individual frames). Our method maps watermark bits to template bits, which are then used to generate watermarked noise during the denoising process. Using DDIM Inversion, we can reverse the video to its original watermarked noise, enabling straightforward watermark extraction. Additionally, template bits allow precise detection for potential spatial and temporal modification. Extensive experiments across various video models (both T2V and I2V models) demonstrate that our method effectively extracts watermarks and detects tamper without compromising video quality. Furthermore, we show that this approach is applicable to image generation models, enabling tamper detection in generated images as well." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "video", "watermarking", "tamper localization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/7a72970c782630a628e0cd92de170187e69e2e99.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "VideoShield: Regulating Diffusion-based Video Generation Models via Watermarking" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v0FzmPCd1e
Selective Attention Improves Transformer
main
Active
selective attention;attention;transformer;llm;language model
foundation or frontier models, including LLMs
3;3;5;8
4;3;3;4
2;2;3;3
3;3;3;4
3;1;3;3
4.75
3.5
2.5
3.25
2.5
0.366508
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Have you considered evaluating Selective Attention on other architectures, such as encoder or encoder-decoder transformers? If so, could you share any preliminary results or insights?\n\n2. How sensitive is the performance of Selective Attention to the memory budget per layer? Could you offer guidelines on tuning this budget for tasks beyond language modeling?\n\n3. Have you tested Selective Attention on tasks other than language modeling, such as question answering or text summarization? How might the approach handle tasks with varied context and attention needs?\n\n4. Could you provide further comparisons between Selective Attention and other efficient attention methods, such as sparse or linear attention, to clarify the unique advantages of Selective Attention?\n\n5. How does context pruning with Selective Attention affect tasks requiring long-range dependencies? Are certain types of tokens or information more prone to being pruned?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. **Efficient Memory Management**: The Selective Attention mechanism effectively prunes unneeded tokens, significantly reducing memory usage during inference without degrading model performance. 
This efficiency gain is particularly valuable for scaling transformers in resource-constrained environments.\n2. **No Additional Parameters**: Selective Attention operates without introducing new parameters or significantly increasing computational overhead, which preserves the simplicity of the transformer architecture.\n3. **Theoretical Motivation and Experimental Rigor**: The paper provides a thorough theoretical motivation for selective attention, supporting it with extensive experimental results on benchmarks like HellaSwag and C4, which demonstrate the effectiveness of the approach." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a parameter-free modification to the transformer attention mechanism called Selective Attention, which reduces the attention to irrelevant or unneeded tokens in a sequence. This approach enhances performance in language modeling tasks by reducing memory usage and computational costs without compromising model quality. Experiments show that transformers with Selective Attention achieve comparable performance to larger models with traditional attention mechanisms, providing improved efficiency and effectiveness across various model sizes and tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Limited Scope of Model Architectures**: The experiments are primarily conducted on decoder-only transformer models. Further analysis is needed to verify if Selective Attention can similarly benefit encoder models or encoder-decoder models used in other tasks, such as translation or summarization.\n2. **Potential Over-Reliance on Hyperparameter Tuning**: Selective Attention’s performance may depend on optimal memory budget settings per layer, which could complicate deployment in different tasks or models. Although a memory reduction process is described, further tuning could be required in practical applications.\n3. 
**Dataset and Task Diversity**: While Selective Attention shows improvements in language modeling tasks, testing it on a wider range of tasks (e.g., text generation or long document understanding) would strengthen the case for its generalizability and adaptability to diverse applications.\n4. **Comparison with Similar Methods**: The paper could benefit from a more detailed comparison with other recent efficient attention mechanisms, such as sparse attention or adaptive attention approaches, to highlight any relative advantages Selective Attention may have." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- In the selective attention computation, why use a single head’s logits to mask all heads? Did you try keeping each head separate (i.e., using the logits of head i for the selection mask of head i)?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Given the simplicity of the proposed approach and the reported performance gains, it seems that selective attention could be a significant addition to the transformer architecture if it is further validated.\n- Even without the performance gains, the efficiency gains (particularly without having to modify the pretraining loss) are quite relevant and make Selective Attention look like a viable alternative to other efficient attention mechanisms." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces *Selective Attention*, a parameter-free modification to the standard attention mechanism in transformers which helps reduce the attention given to irrelevant elements in the context. To do so, they introduce a *selection mask* in the attention computation to mask “irrelevant” tokens, and propose using the previous tokens’ attention on other tokens (on a specific head) as masking for future attention computations. On language modelling on C4, results show that transformers equipped with Selective Attention can achieve comparable performance to standard transformers with twice the attention heads and parameters, and slightly better performance on the downstream task of HellaSwag.\n\nThe authors then propose a method that uses the selection mask to prune elements from the attention's context buffer, reducing memory and computational demands during inference, and show that it can lead to significant memory savings (up to 47x for large context sizes) while preserving the performance of non-selective attention transformers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The main problem of this paper is weak experimental validation. 
The authors only show gains in the language modelling task (using the relatively noisy and deprecated C4 dataset) and on a single downstream task (where they show smaller gains). They don't compare to existing pretrained models or other efficient attention mechanisms. While the (limited) experimental results are promising, they are not enough to validate the approach. I suggest replicating the recipe of an existing (state-of-the-art) pretrained LLM, replacing the attention mechanism with Selective Attention. Additionally, in the current LLM era, more downstream tasks need to be tested to validate the approach. For the efficiency section, the authors should compare with other efficient attention mechanisms.\n- The paper's presentation could also be better: it uses a whole page for a single figure and a further page for intuition, both of which could be much shorter. It then moves fairly important plots to the appendix. I suggest restructuring the paper to give more space to the experimental results and the comparison to other methods." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "None" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The motivation is clear, and the execution of the idea to not attend to tokens that are already attended to is clever and clean\n- The proposed method is easy to implement, adds no parameters, and saves memory overhead\n- Interesting results and analysis" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper argues that, for transformer-based language models, selectively attending to a subset of the tokens in the context can improve the model's quality. Motivated by this intuition, the paper proposes a new attention technique, where if a token has already received high attention weights from previous tokens, it means that its content has already been \"absorbed\" into the model's representations, and therefore it should receive lower attention weights onward. The paper proposes an implementation of this idea that reuses the attention logits of one of the attention heads, thereby adding no additional parameters and only a small amount of computational overhead. Further, the proposed method also allows for pruning some tokens from the context when they will never be attended to again, which leads to memory savings.\n\nExperiments with a 100M transformer-based LM trained on the C4 dataset suggest that the proposed approach achieves promising results on language modeling perplexity and one downstream task (HellaSwag)."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- My main concern is about the limited empirical evaluation where only one downstream task is considered. As the paper argues, \"different tasks have different requirements,\" it is crucial to explore whether selective attention is broadly applicable by evaluating it on a diverse set of tasks with different requirements. This can be complemented with synthetic evaluations that might require the model to store the entire sequence, e.g., counting the number of a certain token in a sequence. \n- Besides a more comprehensive evaluation, I also encourage the authors to strengthen their findings by trying selective attention on larger-scale models and datasets" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The authors mention that they compute the S matrix using \"one of the existing heads\" (#167). Their code indicates that they simply take the first one. Is this ablated in any way? Does taking a different head / an average of all heads make much difference?" 
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The method is simple and intuitive\n- Results are very strong and consistent, showing both accuracy improvements and very impressive memory reduction\n- Experiments are quite extensive\n- Given these results, overall I tend to agree with the authors' statement in Section 9: \"Given that it adds no new parameters, only a negligible amount of compute, and provides consistent improvements, *selective attention might be a good default for transformer decoders*.\"" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents selective attention, a method for adjusting the attention weight of tokens based on the amount of attention they received from previous tokens. The authors also propose to use this method to drop some of the history tokens with the lowest (adjusted) weight, also proposing a new loss term for transformer-based LLM training. Extensive experiments show that this method both consistently improves performance and reduces memory requirements substantially." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While I like this paper a lot, I have several reservations.\n\nFirst, the level of novelty is not huge. The idea of pruning the least important token in each iteration is similar in spirit to cache-pruning methods such as H2O (Zhang et al., 2023), TOVA (Oren et al., 2024) and others. The authors basically apply a similar approach at training time.
While this method works great (as noted above), which is a sufficient contribution on its own, and also presents some novel concepts (e.g., the new loss term), it would be better to tone down the novelty claims a bit.\n\nSecond, I am not sure I am fully convinced by some of the major claims in the paper.\n \n(a) the examples in Section 2 are very appealing, but they are missing a critical baseline: how much attention does the vanilla transformer assign the pruned tokens in such cases? Perhaps it already knows it needs to ignore them? Again, the method works great so obviously something good is happening, but it is not clear the intuition is capturing the essence of it.\n\n(b) similarly, the authors say (#113) \"if token b has determined that token a is irrelevant or even misleading to future tokens such as c, there is nothing it can do in the given layer to correct for this.\" But I am not sure how their method allows the transfer of such a negative signal. If I understand correctly, the authors penalize tokens that received high attention in previous steps (i.e., their value in F is high). I am not sure how information about irrelevant or misleading tokens can be propagated in such cases.\n\nThird, while the writing is generally clear, I found Section 3 a bit confusing, requiring several reads to understand a fairly simple method. For instance, it is never explicitly mentioned that each token is penalized by the amount of attention it got from all subsequent tokens. I would also advise the authors to formally define masking, as it is not entirely trivial in this context."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024selective,\ntitle={Selective Attention Improves Transformer},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v0FzmPCd1e},\nnote={under review}\n}" }, "abstract": { "value": "Unneeded elements in the attention's context degrade performance. We introduce Selective Attention, a simple parameter-free change to the standard attention mechanism which reduces attention to unneeded elements. Selective attention consistently improves language modeling performance across model sizes and context lengths. For example, a range of transformers trained with the language modeling objective on C4 with selective attention perform equivalently to transformers with standard attention modules with ~2X more parameters and heads. In addition, selective attention allows reducing the size of the attention's context buffer, leading to substantial reductions in the memory and compute requirements during inference. For example, transformers with 100M parameters and context sizes of 512, 1,024, and 2,048 need 16X, 25X, and 47X less memory for their attention module, respectively, when equipped with selective attention, as those without selective attention, with the same validation perplexity." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "selective attention", "attention", "transformer", "llm", "language model" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e7294ff543e60337e66031c2be9d624a96b4b63b.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Selective Attention Improves Transformer" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v0O9FrVTt1
Adaptive Source Localization on Complex Networks via Conditional Diffusion Model
main
Active
Diffusion Model;Knowledge Informed Machine Learning;Source Localization;Complex Network
learning on graphs and other geometries & topologies
5;5;5;5
3;5;4;4
3;2;2;2
2;2;3;2
3;3;4;3
5
4
2.25
2.25
3.25
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How does the computational complexity of ASLDiff compare to other state-of-the-art methods, especially for large-scale networks?\n\n2. Can the authors provide more insights into the model's performance on large real-world networks (e.g., millions of nodes)?\n\n3. Why the performance of ASLDiff under the SIS diffusion pattern is inconsistent?\n\n4. Can the proposed model handle dynamic networks where the topology changes over time?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper combines diffusion models with principles of information propagation, offering a unique solution to the source localization problem.\n\n2. ASLDiff leverages pre-calculated source estimations as informative priors, potentially improving efficiency and effectiveness.\n\n3. The authors test their model on both synthetic and real-world datasets, comparing it against several state-of-the-art methods.\n\n4. The proposed model shows promise for some real-world scenarios, such as epidemiology, and cybersecurity." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces ASLDiff, a new method for finding information sources in complex networks. 
ASLDiff combines diffusion models with information propagation principles to accurately locate sources across different network types and patterns. It uses pre-estimated source locations as guides, a diffusion process led by these estimates, and a GCN (Graph Convolutional Network) to capture key propagation details. ASLDiff outperforms certain existing methods, achieving up to 7.5%-12.1% higher accuracy on some real-world datasets and adapting effectively to some networks and scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the authors use some real-world datasets, more extensive testing on diverse real-world datasets could further validate the model's effectiveness.\n\n2. The paper doesn't discuss the computational complexity or resource requirements of ASLDiff compared to other methods.\n\n3. The paper does not sufficiently address the scalability of the proposed method for large networks (millions of nodes), which is essential for real-world applications. The largest tested graph contains fewer than 15K nodes.\n\n4. ASLDiff’s performance under the SIS diffusion pattern is inconsistent. Only three datasets are used, with performance varying across datasets and metrics. For instance, there is no improvement in AC on the Net and Jazz networks, and only a marginal increase from 0.984 (GCNSI) to 0.985 on the Power network." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "The authors’ research scope is so broad that it strains credibility. They claim that they can solve problems in various fields, such as disease outbreaks, network security, etc. This is an ambitious plan; however, it may be an exaggeration. The spread of infectious diseases and the spread of misinformation differ greatly. Is it too idealistic to solve these problems with only one model? This is my main concern with this paper, as complex networks are a vast and intricate field.\n\nThe authors' literature review is thorough, but relatively few of the compared methods were published in 2024. Compared with the newest 2024 algorithms, can ASLDiff still maintain its advantage?\n\nAfter setting up the required environment for the authors' code, I encountered an error stating \"models.guide: No such file or directory.\" I can't verify the results of the model, even though they are impressive." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "ASLDiff can be trained via few-shot learning.\n\nThe appendix contains a sufficient review of related work, indicating adequate preparation for the proposed ASLDiff.\n\nASLDiff using a single snapshot is superior to multiple-snapshot-based works. I think it is awesome!" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In the context of social networks, the authors employ a diffusion model to perform source detection tasks across different localization scenarios. The proposed ASLDiff can be trained in a few-shot manner."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "One of the contributions of this paper is the development of few-shot learning due to the limited data available in real-world scenarios. This should be a key focus for the authors; however, it seems that the main body of the paper provides only a very limited description of few-shot learning. Moreover, despite the thorough survey and preparation behind the work, there are many details that need attention. For example, 'Diffusion' instead of 'Diffsion'; Equation (1), Equation (7) instead of Equation 1, equation 7. Also, providing code is intended to enhance confidence in your work, but the code is not runnable." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "see Weaknesses" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper provides comprehensive experimental validation of ASLDiff.\n2. The code of ASLDiff is given.\n3. ASLDiff shows significant improvement on some datasets."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an Adaptive Source Localization Diffusion Model (ASLDiff) to address the challenge of data scarcity without assuming a specific propagation model. Evaluations on various propagation patterns and real network datasets demonstrate ASLDiff’s effectiveness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. As the authors mention, the diffusion model is highly complex. However, the extent of this complexity is not quantified. Additionally, considering the high complexity, the motivation for using the diffusion model should be fully explained.\n2. The authors discuss the challenge that \"real-world networks typically exhibit unknown propagation patterns\", but they do not explain or demonstrate how ASLDiff understands different propagation patterns in different scenarios.\n3. Sections 2.1 and 3.2 are confusing, as they both seem to explain the IC and LT models redundantly.\n4. The authors have demonstrated that the closeness centrality of infected nodes and sources is very consistent, but it appears that the advisor used in ASLDiff is not based on closeness centrality." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "If LPSI is used to generate enough training samples to train the DL-based method, is it possible to achieve an approximation with adaptability, and what are the advantages of using a diffusion model in comparison?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "S1. The problem of source localization is important.\n\nS2. The problem statement and the rationale behind the method are clear.\n\nS3. The paper is a reasonable attempt to apply diffusion modelling to traditional propagation problems.\n\nS4. The evaluation is very comprehensive. The authors show strong quantitative results. Tables 1-2 and Figures 3-6 give strong qualitative results and ablations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the source localization problem: given a graph with the infection states of all nodes, find the original propagation source. The paper proposes ASLDiff, which utilizes a diffusion model for source localization in complex networks and applies GNNs to enhance the model’s adaptability to diverse network topologies. Besides, it incorporates soft labels and a restructured label propagation process to capture essential propagation characteristics across various network topologies. However, the technical novelty is limited, and the method's performance is capped by the label propagation scheme it relies on." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1. The forward process (i.e., the noising process) should be detailed.
What is the state of the initial $X$, and what is $X$ after noise is added? Besides, it seems that you used a conditional diffusion method, but your condition is an approximate solution obtained from LPSI. This usually does not conform to our intuition, and I hope you can elaborate on the rationale for using the approximate solution of LPSI as a condition. Will using approximate solutions as conditions greatly limit the performance of your proposed model? If this impact does not occur, you should provide some theoretical proof. This limitation is evident in the accuracy results, where ASLDiff underperforms LPSI on some metrics.\n \nW2. The description of the method in the paper is unclear, and the innovative aspects of the technical details are not sufficiently articulated. For example, the part of the LPSI advisor that employs the conditional diffusion model lacks an explanation of how the diffusion model introduces innovation for the specific task.\n\nW3. The framework diagrams in the paper are not sufficiently clear. For instance, in Figure 1, the function $f(\\theta)$, which corresponds to the denoising network, is not intuitive. It is suggested to use clearer module diagrams to explain this. Additionally, the input-output relationships in Figure 2 are unclear. For example, the concept of $Y_{label}$ is not explicitly defined in the text, even though the generation method is explained. Further clarification is recommended." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a diffusion-based source localization method that can be directly applied to real-world data in a zero-shot manner after pretraining on simulation data with known propagation patterns and simple network topology."
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024adaptive,\ntitle={Adaptive Source Localization on Complex Networks via Conditional Diffusion Model},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v0O9FrVTt1},\nnote={under review}\n}" }, "abstract": { "value": "Network propagation issues like the spread of misinformation, cyber threats, or infrastructure breakdowns are prevalent and have significant societal impacts. Identifying the source of such propagation by analyzing snapshots of affected networks is crucial for managing crises like disease outbreaks and enhancing network security. Traditional methods rely on metrics derived from network topology and are limited to specific propagation models, while deep learning models face the challenge of data scarcity. We propose \\textbf{ASLDiff}~(\\textbf{A}daptive \\textbf{S}ource \\textbf{L}ocalization \\textbf{Diff}sion Model), a novel adaptive source localization diffusion model to achieve accurate and robust source localization across different network topologies and propagation modes by fusing the principles of information propagation and restructuring the label propagation process within the conditioning module. Our approach not only adapts to real-world patterns easily without abundant fine-tuning data but can also generalize to different network topologies easily. Evaluations of various datasets demonstrate ASLDiff's superior effectiveness, accuracy, and adaptability in real-world applications, showcasing its robust performance across different localization scenarios. The code can be found at https://anonymous.4open.science/r/ASLDiff-4FE0." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Diffusion Model", "Knowledge Informed Machine Learning", "Source Localization", "Complex Network" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9980d331fd11893cc30889890efe7447dd775a86.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Adaptive Source Localization on Complex Networks via Conditional Diffusion Model" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v1B4aet9ct
Schur's Positive-Definite Network: Deep Learning in the SPD cone with structure
main
Active
sparsity;graphical lasso;lasso;deep learning;neural networks
unsupervised, self-supervised, semi-supervised, and supervised representation learning
6;6;8
4;4;4
3;2;3
3;2;3
3;3;4
6.666667
4
2.666667
2.666667
3.333333
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "•\tAlthough the proof is straightforward, it would be useful for the reader and for completeness to explain how Eq. (3) is derived in the proof.\n•\tFor clarity, the role/design rationale behind each term in Eq. (5) can be explained briefly, although it is an existing method.\n•\tIn line 364, it is mentioned that the MSE reconstruction loss is used. How is this implemented together with the GLasso loss in Eq. (5)?\n•\tHow does the proposed method perform in terms of training time/cost?\n•\tWhat do different colours mean in Figure 1?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "•\tTo my knowledge, the proposed method is original. I like the proposed SPODNET, building on classical theory, simple but elegant.\n•\tThe paper is well written and well structured.\n•\tThe experiment design is appropriate for demonstrating the effectiveness of the proposed method. I particularly like the result of UBG in Figure 6, highlighting more distinct structure that is interesting for this real-world data."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new method, SPODNET, for learning SPD matrices element by element, supported by the classical Schur condition, where the matrix elements (u and v) to update are learned using neural networks. The work demonstrates the use of SPODNET for the sparse precision matrix learning task, and proposes three new model architectures to perform the learning, including UBG, PNP and E2E. Two sets of experiments were conducted for evaluation, using a synthetic dataset and a real-world dataset. The results show the effectiveness of the proposed methods, and their advantages over the compared ones." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "•\tThe paper would be stronger if it included another real-world learning problem over the SPD manifold. But I don’t see this as a major issue.\n•\tLack of discussion on the limitations of the proposed work.\n•\tThere are a couple of things that could be explained better, see my questions.\n•\tFigures 4 and 5 are too small, hard to read." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Training details: What are the exact steps of the unrolling algorithm? How many unrollings are needed? From line 363, does the training impose an MSE loss on the intermediate $\\Theta$s or only the last one?\n2.
GLasso: In Fig. 5, the F-1 performance of GLasso decreases when $n$ gets larger. This is strange, because I would expect it to recover the graph perfectly when $n\\gg p$. What is the authors' explanation for this?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is well-written and easy to follow. The idea of unrolling the column-row BCD algorithm to ensure SPD outputs seems novel." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a deep learning-based approach to solving SPD problems such as covariance selection. The authors start with the block coordinate descent (BCD) algorithm and then unroll the optimization using neural networks. This is followed by three different unrollings, where each preserves different levels of the problem structure (or inductive bias, in deep learning terms). The authors evaluate the proposed SpodNet on synthetic data and the animal dataset against GLAD, GLasso, and other traditional approaches." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I think this is a borderline paper in its current form. I value the novelty of the paper, but its numerical performance is not the most convincing. I think addressing the following points will make the paper stand more firmly at my rating.\n\n1. GLAD-Z: SpodNet's NMSE performance on synthetic data seems to be consistently worse than GLAD-Z's. I understand the authors' argument that GLAD-Z is not SPD, but what if Z is projected onto the SPD cone? Will the projected Zs retain the lower NMSE scores? Because GLAD uses an ADMM-like algorithm, the learned $\\Theta$ matrices are not projections, if I understand correctly.\n2.
Large sample regime: The NMSE performance of SpodNet is no better or only marginally better than the baselines when $n>p$.\n3. Real-world datasets: The results of GLAD on the animal dataset are missing. Also, the paper will benefit from adding at least another real-world dataset.\n4. Figures: Some figures can be hard to read, especially Fig. 4-5. I suggest the authors use thinner lines and/or redesign their layout to make line plots larger." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See above" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. SpodNet enables increased expressivity for estimating SPD matrices with a neural network based parametrization, at the same time maintaining constraints like sparsity unlike other model based approaches, as verified by experiments on synthetic data.\n2. In order to make the method tractable the authors leverage the block structure of SPD matrices to restrict the complexity of each update step to $O(p^2)$.\n3. Experiments on synthetic data and graph topology estimation using SpodNet are presented, highlighting the effectiveness of the method. 
On the synthetic data, the proposed method is comparable to model-based approaches while achieving both the SPD property and sparsity at the same time." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a method for jointly learning Symmetric Positive Definite (SPD) and sparse matrices using a learning-based approach. The proposed method, SpodNet, is an iterative method that learns column-row pairs stepwise, where each step is parametrized by a neural network. The SPD constraint is then ensured via Schur’s condition." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proposed SpodNet provides a novel approach to leverage neural networks and increase expressivity on constrained manifolds. However, the design of the neural networks itself is not discussed in detail. It is not clear to me why the authors choose the input features for $g$ as $\\theta_{22}$, $s_{22}$ and $\\theta_{12} \\theta_{11}^{-1} \\theta_{12}$. Similarly, for each of the three described approaches, the explanation for choosing the input features is missing. I assume the features are chosen to best suit the model-based approach and to perform well with gradient descent, but the paper would benefit from a detailed explanation of this choice.\n2. The method still seems relatively expensive in spite of the improved update rule. The overall cost, as the authors mention, is of the order of $O(p^3)$; how does this compare with the other model-based approaches?\n3. Can the SpodNet framework maintain other structural constraints, for example structural sparsity?
In general what conditions would the constraints need to satisfy in order to be optimized with a SpodNet layer.\n\nSince the general literature of SPD matrix estimation points towards applications in computer vision, it would be informative to see an experiment for a vision task with SpodNet to verify the comparison with baselines and its scalability given its computational requirements." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a novel and generic learning module with guaranteed SPD outputs that can jointly handle additional structural constraints such as sparsity." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024schurs,\ntitle={Schur's Positive-Definite Network: Deep Learning in the {SPD} cone with structure},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v1B4aet9ct},\nnote={under review}\n}" }, "abstract": { "value": "Estimating matrices in the symmetric positive-definite (SPD) cone is of interest for many applications ranging from computer vision to graph learning. While there exist various convex optimization-based estimators, they remain limited in expressivity due to their model-based approach. The success of deep learning motivates the use of learning-based approaches to estimate SPD matrices with neural networks in a data-driven fashion. However, designing effective neural architectures for SPD learning is challenging, particularly when the task requires\nadditional structural constraints, such as element-wise sparsity. Current approaches either do not ensure that the output meets all desired properties or lack expressivity. In this paper, we introduce SpodNet, a novel and generic learning module that guarantees SPD outputs and supports additional structural constraints. Notably, it solves the challenging task of learning jointly SPD and\nsparse matrices. 
Our experiments illustrate the versatility and relevance of SpodNet layers for such applications." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "sparsity", "graphical lasso", "lasso", "deep learning", "neural networks" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2dbc07f8f1e6619f2154206d2ab38196f6213603.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/197819d45d9049396b52ffbccdf0b5897c6ac90e.zip" }, "title": { "value": "Schur's Positive-Definite Network: Deep Learning in the SPD cone with structure" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v1OQ0kNq0w
MotionRL: Align Text-to-Motion Generation to Human Preferences with Multi-Reward Reinforcement Learning
main
Active
Motion Generation; Reinforcement Learning;
applications to computer vision, audio, language, and other modalities
5;5;5;6;6
4;4;4;3;3
2;3;2;2;3
2;2;2;2;2
1;3;3;2;3
5.4
3.6
2.4
2
2.4
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. “Aligning Motion Generation with Human Perceptions” by Wang et al., 2024, utilizes their proposed motion perception model as an effective supervision signal for training a motion generator. This paper similarly uses the motion perception model from Wang et al. as a supervision signal to aid motion generation. What are the distinction and novelty of this paper's method compared to the approach of Wang et al.?\n2. The use of **skeleton** rendering in the user study section for evaluation could be more reasonable, as there is a gap between **skeleton** and **SMPL** rendering. Some unnatural details are more noticeable in SMPL, so using SMPL rendering for the user study might be more convincing." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper introduces human preference into the text-to-motion generation task by applying a motion perception model from the motion generation field. Additionally, two other rewards (text adherence and motion quality) are used to prevent potential degradation caused by the human preference reward. Experimental results demonstrate the superiority of the proposed approach."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes incorporating human preference priors into the text-to-motion task to enhance the quality of generated motions. Through the use of reinforcement learning and a motion perception model, the paper constructs a human preference reward, enabling the generation model to learn human perception. To prevent degradation in other performance metrics caused by the human preference reward, the paper introduces a motion quality reward and a text adherence reward, forming a proposed multi-reward system. To mitigate potential training instability caused by multiple rewards, Pareto optimality is employed to balance the different rewards. The experimental results demonstrate superior performance in FID, R-Precision, human perceptual model scores, and user studies." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The most concerning limitation of this paper lies in its novelty. Introducing human perception into the field of motion generation is not novel, as also mentioned by this paper in line#86-87 (Voas et al., 2023; Wang et al., 2024). Wang et al. utilizes their proposed motion perception model as an effective supervision signal to finetune the motion generator. This paper similarly uses the motion perception model from Wang et al. as a supervision signal to train a model for motion generation. Given these, the technical novelty is limited.\n\nThere are also insufficiencies in the experimental section: **1)** The main insufficiency is that the ablation study does not adequately demonstrate the effectiveness of the proposed multiple rewards. Table 2 lacks a baseline ablation result with no rewards applied, making it difficult to confirm the method’s effectiveness.
**2)** In text-to-motion generation tasks, methods are typically validated on both the HumanML3D and KIT-ML datasets, but this paper only provides results for HumanML3D. **3)** Additionally, on the HumanML3D dataset, the MModality metric is usually reported; however, this paper neither provides MModality results nor explains why it was omitted. **4)** For the Diversity metric, Diversity closer to real test data is preferable, yet this paper labels higher values as better.\n\nMinor comments:\n1. The paper does not explain the metrics reported in the tables, such as Diversity.\n2. The “Conclusion and Future Work” section does not include future work but instead links to the appendix.\n3. Descriptions for figures should not be placed in the appendix; for example, <mm> and <mt> in Fig. 5 lack explanation in the main text." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "How does MotionRL handle the trade-offs between text adherence, motion quality, and human preferences during the training process?\n\nWhat are the limitations of relying on pre-trained human perception models for aligning generated motions with human preferences, and how can these limitations be addressed in future work?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Human-Centric Optimization: MotionRL uniquely incorporates human preferences into the optimization process, ensuring that the generated motions align better with human perception. This focus on human feedback addresses the limitations of traditional metrics that may not fully capture the nuances of human motion quality.\nMulti-Objective Optimization: The use of a multi-reward reinforcement learning framework allows MotionRL to balance multiple objectives simultaneously, such as text adherence, motion quality, and human preferences. This approach ensures that the generated motions are not only accurate and high-quality but also meet the subjective preferences of users.\nPareto Optimality: MotionRL introduces a novel multi-objective optimization strategy to approximate Pareto optimality. By selecting non-dominated points within each batch, the model learns to balance different rewards effectively, leading to more stable and optimal training outcomes. This method enhances the overall performance across various metrics compared to other algorithms." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a novel approach that leverages Multi-Reward Reinforcement Learning to optimize text-to-motion generation tasks. Unlike previous methods that primarily focus on numerical performance metrics, MotionRL incorporates human preferences to enhance the alignment of generated motions with human perception. The approach uses a multi-objective optimization strategy to balance text adherence, motion quality, and human preferences, aiming for Pareto optimality. 
Extensive experiments and user studies demonstrate that MotionRL significantly outperforms existing methods in generating high-quality, realistic motions that align well with textual descriptions and human feedback. This work represents a significant advancement in the field of text-driven human motion generation, offering a more nuanced and human-centric approach to evaluating and improving motion quality." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Dependence on Pre-trained Perception Models: MotionRL relies on pre-trained human perception models to capture complex human perceptual information. If these models are not of high quality or do not accurately reflect human preferences, the performance of MotionRL could be significantly impacted. This dependency limits the flexibility and robustness of the approach.\n\nLimited Dataset Size: The text-to-motion domain typically has smaller datasets compared to other fields like image or text generation. This limitation makes the model more sensitive to small changes in input text and can lead to overfitting. The smaller dataset size also poses challenges in effectively training the model to generalize well across diverse motion scenarios.\n\nComplexity of Multi-Objective Optimization: While the multi-reward optimization strategy aims to balance text adherence, motion quality, and human preferences, it introduces significant complexity into the training process. Managing and fine-tuning multiple rewards can lead to unstable training and requires careful calibration to ensure that the model does not prioritize one objective at the expense of others. This complexity can make the approach less accessible and harder to implement effectively." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Could the authors elaborate on the potential for dynamically adjusting the weighting of rewards based on real-time feedback?\n\nHow does the model handle scenarios where text descriptions are ambiguous or open to interpretation? Is there a mechanism to weigh certain rewards more heavily in such cases?\n\nFor practical applications, are there plans to develop more user-friendly interfaces for non-technical users to fine-tune MotionRL’s generated motions?\n\nPlease also provide some details about the dataset you created" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper introduces a novel approach by combining reinforcement learning with human preference alignment for text-to-motion tasks, which is an underexplored area in motion generation. The application of multi-reward RL to text-to-motion represents a good contribution, especially as it integrates human perceptual feedback." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper \"MotionRL: Align Text-to-Motion Generation to Human Preferences with Multi-Reward Reinforcement Learning\" introduces MotionRL, a framework that leverages Multi-Reward Reinforcement Learning to enhance text-to-motion generation by aligning outputs with human preferences. Unlike prior models, MotionRL incorporates human perceptual data in its reward structure to address limitations in previous approaches focused on numerical metrics alone. MotionRL employs a multi-objective optimization strategy to achieve a balance between text adherence, motion quality, and human preferences, approximating Pareto optimality for optimal trade-offs. The authors demonstrate the model's effectiveness through extensive experiments and user studies, showing MotionRL's superior performance in terms of perceptual quality and alignment with human expectations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The model’s reliance on pre-trained perception models, as mentioned in the limitations, could restrict generalizability. Fine-tuning without additional human annotations might limit the model’s adaptability to new or unique datasets, where the pre-trained perception model may not fully capture the nuances of human preferences.\n\nWhile the multi-reward RL framework is effective, there is limited discussion on dynamically adjusting the weight of each reward in real-time based on specific user feedback. A more adaptive reward weighting mechanism could further enhance user-centered customization.\n\nAlthough the human preference model provides valuable perceptual data, additional insights from continuous human interaction could help refine the model further. 
The paper would benefit from exploring how human feedback could be iteratively incorporated to improve long-term model performance, potentially through ongoing human-in-the-loop adjustments." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* How did you come up with the balance between the three different rewards?\n\n* How is the pareto-based policy gradient optimization different from simple PPO?\n\n* What is the demographic data for the user study, did you do any formal survey? \n\nMinor things:\n- You introduce the abbreviation \"RL\" multiple times in the text\n- Line 200: the Appdix -> the Appendix" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* Introducing RL with the set of reward functions to optimize for multiple objectives in text-to-motion generation seems to be a novel approach to that specific domain.\n\n* The idea is simple and easy to understand which is also further improved by a good presentation and writing. The images help me understand what is the problem you're trying to solve.\n\n* I find the ablation in table 2 important as it shows that human perception does not always align with the metrics typically used."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper offers MotionRL, an approach for optimizing text-to-motion generation tasks using Multi-Reward RL. This method aims to align the generated motions with human preferences more effectively than traditional models by incorporating human perceptual models into the RL training process. This is done by employing a multi-objective optimization strategy to balance text adherence, motion quality, and human preferences, approximating Pareto optimality. The paper provides experiments, demonstrating that the proposed model outperforms baselines on traditional metrics and in user studies assessing human perception." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The overall novelty of the method is modest at best, but the domain application seems to be novel.\n\n* There is a great lack of details about the user study and \"volunteers for evaluation\" could mean anyone. Given that the participants were called \"volunteers\" (and not paid?) it seems they were not randomly recruited, which makes it hard to evaluate the soundness of the user study. \n\n* The paper uses up all 10 pages but given that it's a simple idea with limited technical contribution I think it's excessive, especially since the limitation was pushed to the appendix." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. In line 95, it seems that the purpose of introducing RL is merely to supervise human preference, and the rationale for using RL isn’t well explained.\n\n2. The concept of Pareto optimality lacks a citation—what specific issue does this address?\n\n3. There are factual inaccuracies in the stated contributions, as ReinDiffuse appears to have introduced RL into motion before this paper.\n\n4. The supplementary materials don’t clearly explain their purpose; they only provide some generated samples. \n\n5. The paper contains several grammatical errors. The title should change \"Align\" to \"Aligning\". There is a layout issue in line 292." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The experimental metrics show good results, and a motion generation framework based on RL has been designed." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces MotionRL, a reinforcement learning (RL)-based framework for multi-objective optimization in human motion generation. Recognizing that prior methods often overlook human perceptual nuances, MotionRL integrates human perceptual priors into the generation process through a reward structure. The framework balances three objectives: human perception, motion quality, and text adherence. To address the challenges of optimizing multiple objectives, MotionRL incorporates a multi-reward optimization strategy that approximates Pareto optimality by selecting non-dominated points in each batch, ensuring better trade-offs across objectives."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The reason for introducing RL is not well explained." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024motionrl,\ntitle={Motion{RL}: Align Text-to-Motion Generation to Human Preferences with Multi-Reward Reinforcement Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v1OQ0kNq0w},\nnote={under review}\n}" }, "abstract": { "value": "We introduce \\textbf{MotionRL}, the first approach to utilize Multi-Reward Reinforcement Learning (RL) for optimizing text-to-motion generation tasks and aligning them with human preferences. Previous works focused on improving numerical performance metrics on the given datasets, often neglecting the variability and subjectivity of human feedback. In contrast, our novel approach uses reinforcement learning to fine-tune the motion generator based on human preferences prior knowledge of the human perception model, allowing it to generate motions that better align human preferences. In addition, MotionRL introduces a novel multi-objective optimization strategy to approximate Pareto optimality between text adherence, motion quality, and human preferences. Extensive experiments and user studies demonstrate that MotionRL not only allows control over the generated results across different objectives but also significantly enhances performance across these metrics compared to other algorithms." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Motion Generation; Reinforcement Learning;" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/aabb496632c83d612ab54e5bebaa9a510cb0f077.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/1deeb9f4eb3998be4f76115a8debdb13a56484d8.zip" }, "title": { "value": "MotionRL: Align Text-to-Motion Generation to Human Preferences with Multi-Reward Reinforcement Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v1f6c7wVBm
AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction
main
Active
Surface Reconstruction;Neural Radiance Field
applications to computer vision, audio, language, and other modalities
5;5;6;6;8
4;4;5;4;4
2;2;3;3;3
2;3;2;2;3
2;3;3;3;3
6
4.2
2.6
2.4
2.8
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Add real-world experiments and more benchmark datasets (with larger scene scale).\n2. Add visualizations to the decomposed appearance throughout the figures in the paper.\n3. Add missing details in the method section.\n4. Add some comparison on the train/test efficiency and memory footprint.\n5. Revise the discussions and ablation studies as suggested in the weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is well written and easy to follow. The idea is clean and the pipeline does not introduce additional hyper-parameter tuning and selection compared to some other recent methods for neural surface reconstruction / rendering.\n2. The idea to fuse multi-resolution grids for detailed surface reconstruction is novel. AniSDF’s fused-granularity structure balances high- and low-resolution information to improve convergence and accuracy. This approach allows for a more adaptive reconstruction that captures both overall structure and fine details, which is validated by their good geometry quality (chamfer) in the experiments.\n3. The use of ASG encoding in appearance modeling seems to be effective and handles specular reflections very well."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a high-quality 3D surface reconstruction method, AniSDF, which learns fused-granularity neural surfaces with physics-based encoding. The authors propose fused multi-resolution grids for geometry modeling, and adopt Anisotropic Gaussians for appearance modeling. With these designs, AniSDF can reconstruct objects with complex structures and produce high-quality renderings on benchmarked datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Experiments for reflective surfaces mainly come from synthetic data; it would be helpful to understand the model’s ability if we could see results on more real-world reflective surface data, such as trucks from Tanks and Temples, sedans from RefNeRF, and the Glossy-Real dataset from NeRO.\n2. The appearance modeling involves blending view-based and reflection-based radiance fields; however, the method's ability to decompose base color and reflection color is unknown. It would be better if the author could add a visualization of view-based color, reflection-based color, and the blending weight.\n3. Some details of the methods are missing. The derivation of normals and the method for mesh extraction are not discussed.\n4. The reason behind the choice of grid levels `m` and `l` is not clear. It would be clearer if an ablation study about grid resolution were added." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. For Eq. 10, why would both c_view and c_ref depend on view directions? Would it encourage better diffuse-specular separation if one makes c_view view-independent?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper is well-written and easy to follow. \n2. Qualitative and quantitative results of the proposed method seems very strong, beating prior baselines like NeuS, RefNeRF, RefNeuS. Reconstructed meshes look clean with high-quality surface normals. \n3. The authors validate the proposed method on both synthetic datasets (Nerf-synthetic and Shiny-blender), and real datasets (DTU), and it's nice to see improvements." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper provides a fused-granularity neural surfaces with physics-based anisotropic\nspherical Gaussian encoding for high-fidelity 3D reconstruction. The authors show state-of-the-art novel-view rendering and geometry reconstruction results on several datasets, including NeRF-Synthetic, Shiny Blender, and DTU datasets. The proposed method shows very convincing reconstruction of challenging specular and furry objects." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. To further convince me about the method's performance on reconstructing specularity, I would need to see the view synthesis videos, as opposed to the static frames shown in the paper and project page. Unfortunately, I could not find such videos (except the relighting ones). \n\n2. For the proposed blended radiance field (Eq. 
12), I think it would be great to provide some visualizations of the individual components: w, c_view, c_ref, to better understand what each learnt component looks like. \n\n3. I'm unsure why the fused-granularity hash grids actually work better than plain multi-resolution hash grids. It seems to me that the major difference from a plain grid is the additional handcrafted equation (6), which says the final SDF is an addition of the coarse and fine SDFs. In the plain multi-resolution hash grid, the final SDF is predicted by an MLP from concatenated multi-resolution features. This could benefit from some justification." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "The reviewer would like to raise awareness of a possible breach of double-blind review rules.\n\nThe reviewer found a twitter page:\n\nhttps://x.com/zhenjun_zhao/status/1842119223646302292\n\nThis page introduces their paper with the authors' names explicitly shown." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Modify the typo in Line 093 (‘Oue’) to ‘Our.’\n\n2. The physics-based rendering method via ASG is similar to $\\cite{yang2024spec}$. Is this work also inspired by similar works using ASG to learn the specular representation in 3DGS?\n\n3. The blended radiance fields with ASG encoding are composed of $c_{view}$ and $c_{ref}$ through a learnable weight. According to Eq.
4, the light field is modeled by $c_d$ and $c_s$, diffuse color and specular color. So, can $c_{view}$ be regarded as purely diffuse and $c_{ref}$ as the pure composition of specular? If so, can the radiance fields be considered purely diffuse when $\\omega$ is 1? If not, can this work disentangle the light field to only diffuse or specular field?\n\n4. Refer to $\\cite{han2023multiscale}$, they control the final color by adding the scale to the color calculated from ASG when retaining the diffuse color term calculated from the first three orders of SH. So, what motivates adding weight to diffuse and specular in this work? What are the differences between your light field calculation in the blended radiance fields with $\\cite{han2023multiscale}$?\n\n@article{han2023multiscale,\n title = {Multiscale Tensor Decomposition and Rendering Equation Encoding for View Synthesis},\n author = {Kang Han and Weikang Xiang},\n journal = {Computer Vision and Pattern Recognition},\n year = {2023},\n doi = {10.1109/CVPR52729.2023.00412},\n bibSource = {Semantic Scholar https://www.semanticscholar.org/paper/aa41843888fffada6335b6c5cdbcd2d4bb5cf9da}\n}\n\n@article{yang2024spec,\n title={Spec-gaussian: Anisotropic view-dependent appearance for 3d gaussian splatting},\n author={Yang, Ziyi and Gao, Xinyu and Sun, Yangtian and Huang, Yihua and Lyu, Xiaoyang and Zhou, Wen and Jiao, Shaohui and Qi, Xiaojuan and Jin, Xiaogang},\n journal={arXiv preprint arXiv:2402.15870},\n year={2024}\n}" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Originality:\nThe paper explores the potential of parallel using coarse and fine hash-grid to replace the general sequential coarse-to-fine structure, demonstrating the effects of experiments. 
Besides, this paper combines SDF learning with blended radiance field learning with anisotropic spherical Gaussian encoding to distinguish material information.\n\nQuality:\nThe quality of the paper is good, evidenced by detailed experiments and comprehensive comparisons with state-of-the-art methods. \n\nClarity:\nThe paper is well-structured and organized. \n\nSignificance:\nThe good geometry that disambiguates the reflective appearance is helpful in 3D reconstruction. The possible relighting application makes this research meaningful." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces AniSDF, an approach to enhance the quality of SDF-based methods in geometry reconstruction and novel-view synthesis tasks, enabling physically-based rendering. Firstly, AniSDF uses a parallel branch structure of coarse hash-grids and fine hash-grids, replacing the former sequential coarse-to-fine training strategy, to learn a fused-granularity neural surface that improves the quality of the SDF. Secondly, AniSDF uses Anisotropic Spherical Gaussian Encoding to learn blended radiance fields with physics-based rendering, disambiguating the reflective appearance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. References are insufficient: From Lines 130 to 135, Sec. 2.1 is related to the attempts to improve the reconstructed geometry of Gaussians. Since 2DGS is used for comparison, there is no reason not to cite papers focused on doing similar jobs, i.e., improving the surface reconstruction of 3DGS, like $\\cite{guedon2023sugar, lyu20243dgsr, chen2023neusg}$.\n\n2. The ablation study of the fused-granularity neural surface is not enough.
The ablation study shows the comparison with the sequential coarse-to-fine method, but the variants with only the coarse hash-grid and only the fine hash-grid should also be demonstrated to support the observations stated at the beginning of Sec. 3.2. It would be better if a training-time comparison were also shown in this part.\n\n@article{guedon2023sugar,\n title={SuGaR: Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering},\n author={Gu{\\'e}don, Antoine and Lepetit, Vincent},\n journal={CVPR},\n year={2024}\n}\n@article{lyu20243dgsr,\n title = {3DGSR: Implicit Surface Reconstruction with 3D Gaussian Splatting},\n author = {Xiaoyang Lyu and Yang-Tian Sun and Yi-Hua Huang and Xiuzhe Wu and Ziyi Yang and Yilun Chen and Jiangmiao Pang and Xiaojuan Qi},\n year = {2024},\n journal = {arXiv preprint arXiv: 2404.00409}\n}\n@article{chen2023neusg,\n title = {NeuSG: Neural Implicit Surface Reconstruction with 3D Gaussian Splatting Guidance},\n author = {Hanlin Chen and Chen Li and Gim Hee Lee},\n year = {2023},\n journal = {arXiv preprint arXiv: 2312.00846}\n}" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Questions:\n1. I compared the chamfer distance metric of the Neuralangelo method reproduced on the DTU dataset in the paper, and there is a significant gap.
The original paper reported an average of 0.61 (which surpasses your method), while the reproduced result in the paper is 1.07. Could the authors clarify the reasons for this discrepancy? Specifically, did you maintain the same hyperparameter settings as Neuralangelo? Please provide detailed information on your experimental setup.\n2. Section 3.2 of the paper lists some issues related to coarse-grid and fine-grid training, but there is no corresponding experimental support for these claims. Regarding the use of the coarse-to-fine method, you pointed out that thin structures may be discarded in the early training stages. Could you provide visualizations of surface reconstruction at different training stages, along with corresponding quantitative metrics, particularly for Neuralangelo and NeuS2? This would help us assess the advantages and disadvantages in the reconstruction of detailed structures. I noticed your experiments in the ablation study, but they do not specify the experimental settings and only show the final results." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tInnovative Approach: The use of fused-granularity neural surfaces combined with ASG encoding for 3D reconstruction is novel and effective in balancing both coarse and fine details, leading to improved geometry and appearance quality.\n2.\tHigh Performance: Experimental results show significant improvements in rendering quality and geometry reconstruction, with better handling of reflective, luminous, and fuzzy objects compared to existing methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents AniSDF, a novel SDF-based method for high-fidelity 3D reconstruction that incorporates fused-granularity neural surfaces and anisotropic spherical Gaussian (ASG) encoding.
AniSDF aims to achieve accurate geometry reconstruction and photo-realistic rendering by addressing challenges in neural radiance fields, such as geometry-appearance trade-offs. The approach uses parallel fused-granularity neural surfaces to balance coarse and fine details, and blended radiance fields with ASG encoding for modeling both diffuse and specular appearances. Extensive experiments demonstrate AniSDF's superiority in both geometry reconstruction and novel-view synthesis over prior methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tLimited Real-Time Capability: AniSDF cannot perform real-time rendering, which limits its applicability in time-sensitive applications such as interactive graphics or augmented reality.\n2.\tComputation Cost: The use of multiple neural networks and high-resolution hash grids could be computationally expensive, which may hinder scalability." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "It’s unclear whether the larger network or the fused-granularity neural surface structure is responsible. What would happen if we set both the fine and coarse grids to the same resolution, either that of the coarse grid or that of the fine grid?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Pros:\nThe paper is well-written and generally easy to follow.\nExperimental results demonstrate incremental improvements in PSNR, which support the proposed approach." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The AniSDF paper introduces an innovative approach to high-fidelity surface reconstruction and photo-realistic rendering from multi-view images. This is achieved through a synergistic geometry network and appearance network, which together enable high-quality 3D reconstruction. Additionally, the authors propose a fused-granularity neural surface that aims to balance overall structural integrity with fine detail preservation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Cons:\nThe fused-granularity neural surface structure may lack novelty, as it essentially uses two parallel structures with different resolutions. It seems likely that resolution choices could impact the final reconstruction quality. Including experiments that vary resolution settings would clarify their effect. \n\n\nDespite claims of high-quality mesh reconstruction, Chamfer Distance results reveal performance gaps on certain objects (e.g., \"Chair\" and \"Mic\" categories) compared to methods like Neus and NeRO. Explaining these discrepancies would help elucidate the limitations." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024anisdf,\ntitle={Ani{SDF}: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v1f6c7wVBm},\nnote={under review}\n}" }, "abstract": { "value": "Neural radiance fields have recently revolutionized novel-view synthesis and achieved high-fidelity renderings. \nHowever, these methods sacrifice the geometry for the rendering quality, limiting their further applications including relighting and deformation. \nHow to synthesize photo-realistic rendering while reconstructing accurate geometry remains an unsolved problem. In this work, we present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction. Different from previous neural surfaces, our fused-granularity geometry structure balances the overall structures and fine geometric details, producing accurate geometry reconstruction. \nTo disambiguate geometry from reflective appearance, we introduce blended radiance fields to model diffuse and specularity following the anisotropic spherical Gaussian encoding, a physics-based rendering pipeline. With these designs, AniSDF can reconstruct objects with complex structures and produce high-quality renderings. \nFurthermore, our method is a unified model that does not require complex hyperparameter tuning for specific objects. \nExtensive experiments demonstrate that our method boosts the quality of SDF-based methods by a great scale in both geometry reconstruction and novel-view synthesis." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Surface Reconstruction", "Neural Radiance Field" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/612a8b13a1a4f68011420360d14bb62dac043d0c.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/abeb06a1d21aeb62e7b7d8264da9df458206efe7.zip" }, "title": { "value": "AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v1qNr99R5n
CURVALID: A Geometrically-guided Adversarial Prompt Detection
main
Active
Large language models;Adversarial attacks;Local Intrinsic Dimension;Curvature
generative models
3;3;5;6
4;5;4;2
3;1;2;3
2;2;2;3
2;2;3;2
4.25
3.75
2.25
2.25
2.25
-0.83887
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Several parts of the CurveLID metric requires fitting on training data of adversarial v.s. benign prompts. I’d expect that the performance will depend heavily on the coverage of training data distribution, but judging from the experimental results it seems the generalization is pretty descent, have the authors had any insights on why this is the case?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Novelty: The author revitalized LID score in detecting adversarial prompts by estimating it on the learned aggregated output of word level embeddings. It further proposes TextCurve to capture the extra curvature information around the prompt. The final decision takes account for both metrics via a learned linear combination.\n2. Extensive experimental evaluations. The paper includes a detailed amount of empirical evaluations and analysis to support the effectiveness of the proposed method, as well as providing insights into how different components affect its performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies adversarial prompt detection. 
It revitalizes the LID metric from anomaly detection for this task by estimating the score on a learned prompt-level representation instead of a suboptimal word-level one. The paper further proposes the TextCurv score, and merges these two metrics into a single score for the final decision. The final binary classifier needs to be trained on a collected dataset. Extensive empirical results and ablation studies demonstrate the generalization and robustness of the proposed metrics despite the need for training." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. How does CurvaLID defend against jailbreaking algorithms rather than static harmful prompts? White-box methods are not applicable in this setting, but the authors could potentially try some black-box methods, such as DrAttack or MultiJail. It seems that the authors include several defense methods in the Appendix, but the experiment seems not directly related to the proposed metrics (I could be wrong).\n2. Minor: the appendix is very rich in useful information, so it cannot be ignored. But currently the organization is quite flattened, so it is a bit difficult to track everything. Organizing it into sections/subsections (e.g. all ablations grouped together, all analyses grouped together) might improve its readability." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I recommend the following:\n1. A complete theoretical overhaul establishing proper connections between differential geometry and their implementation\n2. Justification for architectural choices and comparison with alternatives\n3. Analysis of and solutions for the sequence dependency problem\n4. Comprehensive evaluation across a wider range of linguistic variations and attack types\n5. Complete technical details enabling reproducibility\n\nWhile the empirical results appear promising, they cannot overcome the fundamental theoretical and methodological issues present in the paper." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. The paper introduces an innovative geometric framework (CurvaLID) that combines two complementary measures - Local Intrinsic Dimensionality (LID) and curvature - to detect adversarial prompts. Unlike existing approaches that rely on token-level analysis, the paper presents PromptLID which analyzes geometric differences at the prompt level, and TextCurv which captures semantic shifts at the word level.\n\n2. The paper achieves exceptional performance metrics like (i) Over 0.99 detection accuracy for English prompts, \n(ii) 0.994 accuracy for non-English prompts and (iii) Successfully reduces attack success rates to near zero\nIt also Includes detailed ablation studies showing the importance of both PromptLID and TextCurv components.\n\n3. The solution is highly efficient, requiring only 15 minutes of training on a single Nvidia H100 GPU, compared to competing methods that need up to 16 hours on more extensive hardware setups. 
The approach is model-agnostic, meaning it doesn't require access to or modification of the underlying LLM. The method also scales well to different languages without requiring retraining." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Summary:\nThis paper presents CurvaLID, a framework that claims to detect adversarial prompts in LLMs using geometric measures combining Local Intrinsic Dimensionality (LID) and curvature. While achieving reported empirical results of 99% detection accuracy, the paper suffers from fundamental theoretical flaws, questionable design choices, and significant limitations that are either unexplored or unacknowledged.\n\nMajor issues identified:\n\n1. The paper presents three classical curvature definitions (osculating circle, Whewell equation, differential geometric) but implements a formula (dθ/(1/||u|| + 1/||v||)) that has no clear mathematical connection to any of them. The reference to Whewell's equation (dφ/ds) lacks proper theoretical grounding. The authors don't define arc length, don't justify how embedding angles relate to tangential angles, and provide no proof of geometric equivalence to Whewell’s equation. The denominator term using inverse vector norms appears ad hoc and unfortunately comes with no geometric justification. Claims about semantic shifts relating to curvature are made without mathematical rigor or proof. The relationship between their geometric measures and adversarial behavior is assumed (lines 265-266) rather than proven. Therefore, there is a massive disconnect between the theory shown in Section 3 and what is finally implemented. \n\n2. The choice of architecture to use a CNN for text classification is unjustified by the authors. Why choose a CNN when there are more appropriate architectures (like Transformers, RNNs, etc.) available for sequential data?\nThe missing aspects are that: (i) there is no justification as to why CNNs would better capture textual properties.
\n(ii) how are variable length sequences handled by a CNN? Is it via padding? \n(iii) How would a CNN capture long-range dependencies in text?\n(iv) CNNs typically rely on “spatial order”, what is the equivalent in this setting?\n\n3. In the multi-class classification definition, there is no proper specification given of the label space Q and its relationship to their task. It looks like it’s just a single label per training example. \n\n4. This paper provides no theoretical time and storage complexity analysis of their defense, especially for their key algorithms. While TextCurv computation is O(nd) for sequence length n and embedding dimension d, they don't discuss this in detail. The fact is that if “true differential geometry” metrics were used then the computational complexity is high and mostly polynomial time. \n\nThe complexity analysis for critical components should be presented, such as CNN feature extraction, k-NN computation for PromptLID, and overall pipeline complexity including the MLP classifier. How does complexity scale with longer sequences, larger embedding dimensions, or increased batch sizes?\n\nThis lack of complexity analysis makes it difficult to assess: i) practical deployability of their method, ii) scalability to longer prompts, iii) real-time defense capabilities, and iv) resource requirements for implementation.\n\n\n5. The method relies critically on exact word order, making it vulnerable to simple paraphrasing. If there are basic linguistic variations like active/passive voice conversion, which retains the semantic meaning, this method would end up with two completely different representations (geometric measures). This limitation isn’t mentioned or addressed. A simple text restructuring can circumvent this defense. \n\n6. This work presents a limited evaluation in terms of the variety of attack types, even when focused on just single-step attacks.
There is no analysis of the dataset diversity or potential overlaps between the 8 datasets proposed. There are no evaluations on social engineering attacks (as claimed in the introduction), where they stop persuasion attacks for example. There is no testing of non-English attacks despite claims of language-agnostic performance. \n\n7. The CNN feature extraction process is not explained properly and leaves a lot of questions. The hyperparameter choices are arbitrary and do not come with a good justification. It would be nice to see training stability, convergence, and sensitivity-to-initialization studies in the experiments. There must be comparisons made to more naive baseline approaches. \n\n8. There are some reproducibility concerns that must be highlighted too. Critical hyperparameters are left unspecified. Preprocessing details are left incomplete. There is no outline of the training environment and no discussion of potential implementation challenges that are faced.\n\n\nThe paper suffers from multiple fundamental flaws that significantly undermine its contribution. Currently, the paper presents a major disconnect between the presented theoretical framework (which is sophisticated and borrowed from differential geometry) and the actual implementation (which is a heuristic). The sequence dependency problem reveals a fundamental limitation that makes the method vulnerable to simple linguistic variations, while the choice of CNN architecture for text processing appears arbitrary and poorly justified.\n\nAdditionally, the lack of thorough analysis, missing implementation details, and limited validation make it difficult to assess the method's true effectiveness and reproducibility. The combination of theoretical flaws, practical limitations, and methodological gaps warrants a rejection at this point.\n \nI recommend the following:\n1. A complete theoretical overhaul establishing proper connections between differential geometry and their implementation\n2.
Justification for architectural choices and comparison with alternatives\n3. Analysis of and solutions for the sequence dependency problem\n4. Comprehensive evaluation across a wider range of linguistic variations and attack types\n5. Complete technical details enabling reproducibility\n\nWhile the empirical results appear promising, they cannot overcome the fundamental theoretical and methodological issues present in the paper." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper presents three classical curvature definitions (osculating circle, Whewell equation, differential geometric) but implements a formula (dθ/(1/||u|| + 1/||v||)) that has no clear mathematical connection to any of them. The reference to Whewell's equation (dφ/ds) lacks proper theoretical grounding. The authors don't define arc length, don't justify how embedding angles relate to tangential angles, and provide no proof of geometric equivalence to Whewell’s equation. The denominator term using inverse vector norms appears ad hoc and unfortunately comes with no geometric justification. Claims about semantic shifts relating to curvature are made without mathematical rigor or proof. The relationship between their geometric measures and adversarial behavior is assumed (lines 265-266) rather than proven. Therefore, there is a massive disconnect between the theory shown in Section 3 and what is finally implemented. \n\n2. The choice of architecture to use a CNN for text classification is unjustified by the authors. Why choose a CNN when there are more appropriate architectures (like Transformers, RNNs, etc.) available for sequential data?\nThe missing aspects are that: (i) there is no justification as to why CNNs would better capture textual properties. \n(ii) how are variable length sequences handled by a CNN? Is it via padding?
\n(iii) How would a CNN capture long-range dependencies in text?\n(iv) CNNs typically rely on “spatial order”, what is the equivalent in this setting?\n\n3. In the multi-class classification definition, there is no proper specification given of the label space Q and its relationship to their task. It looks like it’s just a single label per training example. \n\n4. This paper provides no theoretical time and storage complexity analysis of their defense, especially for their key algorithms. While TextCurv computation is O(nd) for sequence length n and embedding dimension d, they don't discuss this in detail. The fact is that if “true differential geometry” metrics were used then the computational complexity is high and mostly polynomial time. \n\nThe complexity analysis for critical components should be presented, such as CNN feature extraction, k-NN computation for PromptLID, and overall pipeline complexity including the MLP classifier. How does complexity scale with longer sequences, larger embedding dimensions, or increased batch sizes?\n\nThis lack of complexity analysis makes it difficult to assess: i) practical deployability of their method, ii) scalability to longer prompts, iii) real-time defense capabilities, and iv) resource requirements for implementation.\n\n\n5. The method relies critically on exact word order, making it vulnerable to simple paraphrasing. If there are basic linguistic variations like active/passive voice conversion, which retains the semantic meaning, this method would end up with two completely different representations (geometric measures). This limitation isn’t mentioned or addressed. A simple text restructuring can circumvent this defense. \n\n6. This work presents a limited evaluation in terms of the variety of attack types, even when focused on just single-step attacks. There is no analysis of the dataset diversity or potential overlaps between the 8 datasets proposed.
There are no evaluations on social engineering attacks (as claimed in the introduction), such as persuasion attacks. There is no testing of non-English attacks despite claims of language-agnostic performance. \n\n7. The CNN feature extraction process is not explained properly and leaves a lot of questions. The hyperparameter choices are arbitrary and do not come with a good justification. It would be nice to see studies of training stability, convergence, and sensitivity to initialization in the experiments. There must also be comparisons made to more naive baseline approaches. \n\n8. There are some reproducibility concerns that must be highlighted too. Critical hyperparameters are left unspecified. Preprocessing details are left incomplete. There is no outline of the training environment and no discussion of potential implementation challenges.\n\n\nThe paper suffers from multiple fundamental flaws that significantly undermine its contribution. Currently, the paper presents a major disconnect between the presented theoretical framework (which is sophisticated and borrowed from differential geometry) and the actual implementation (which is a heuristic). The sequence dependency problem reveals a fundamental limitation that makes the method vulnerable to simple linguistic variations, while the choice of CNN architecture for text processing appears arbitrary and poorly justified.\n\nAdditionally, the lack of thorough analysis, missing implementation details, and limited validation make it difficult to assess the method's true effectiveness and reproducibility. The combination of theoretical flaws, practical limitations, and methodological gaps warrants a rejection at this point." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Could the authors address the three weaknesses above?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is well written and the proposed method is described clearly." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes an adversarial prompt detection method using the geometric information within the latent representation of an input prompt. Specifically, prompt-level local intrinsic dimensionality using a CNN is proposed as a feature to distinguish between adversarial and benign prompts. TextCurv is also proposed to account for semantic change at the word level, which is another feature to classify adversarial prompts. The proposed method is validated across several adversarial attacks in different LLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The size of the data (2500 prompts) is quite limited. \n\n2. Persuasive adversarial prompts [1] will also lead to harmful content but are not considered or tested in this work. \n\n3. The proposed detection looks similar to a perplexity-based filter. 
It would be more convincing to include attacks that can bypass perplexity-based filters, such as AutoDAN [2].\n\n[1] Yi Zeng, Hongpeng Lin, Jingwen Zhang, Diyi Yang, Ruoxi Jia, and Weiyan Shi. 2024. How Johnny Can Persuade LLMs to Jailbreak Them: Rethinking Persuasion to Challenge AI Safety by Humanizing LLMs. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics.\n\n[2] Zhu, S., Zhang, R., An, B., Wu, G., Barrow, J., Wang, Z., Huang, F., Nenkova, A. and Sun, T., 2024. AutoDAN: interpretable gradient-based adversarial attacks on large language models. In First Conference on Language Modeling." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "n/a" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Have the authors tried different CNN configurations to assess their impact on the performance of PromptLID and TextCurv?\n\n2. Why did the authors choose to use a CNN? If the goal is to capture relationships between textual features, wouldn’t Transformers perform better? Have you considered trying Transformers for this purpose?\n\n3. The method relies only on structural and semantic changes to identify adversarial samples. Have the authors tested against adversarial examples with structural and semantic characteristics similar to benign samples? 
Specifically, have the authors tried generating adversarial examples with density and curvature that closely resemble benign samples to evaluate CurvaLID’s performance?\n\n4. Have the authors analyzed the impact of text length on PromptLID? Given that longer texts have higher complexity and density, could they lead to benign texts being misclassified as adversarial?\n\n5. Have the authors tested whether non-standard benign samples, such as those containing spelling errors or slang, might be misclassified as adversarial examples?\n\n6. Can the method detect adversarial prompts that rely on contextual association across multiple prompts?\n\n7. The authors trained the model on the Orca, MMLU, AlpacaEval, and TQA datasets and then used the same datasets to evaluate whether the samples were benign or adversarial. Given this setup, does the model’s high accuracy have practical significance?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The method has clear advantages. Currently, most adversarial sample detection methods rely on large models, which makes the training process quite time-consuming. CurvaLID bypasses large language models entirely, directly analyzing text embeddings to identify adversarial inputs, thereby saving computational resources.\n\nAdditionally, the method is zero-shot, meaning it doesn't require fine-tuning for each LLM, making it even more versatile and efficient." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces CurvaLID, a model that detects adversarial prompts targeting LLMs. 
CurvaLID captures the structural characteristics of entire prompts through \"Prompt-Level Local Intrinsic Dimensionality\" (PromptLID) and detects semantic shifts within the text using \"Text Curvature\" (TextCurv) to identify whether a prompt is adversarial. Since this method relies entirely on analyzing textual structure rather than LLM-specific architectures or parameters, it requires no fine-tuning for each LLM. This design significantly accelerates training, taking only 15 minutes on a single Nvidia H100 GPU." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1 I think that the innovation and contributions of your method are somewhat limited. The proposed approach resembles a CNN-based text filter, primarily detecting anomalies in embeddings, which isn't a novel concept.\n\n2 The connection between this work and LLMs is relatively weak. As mentioned, CurvaLID functions more like an independent text filter, focusing only on text structure and density features. It's more of a generic text filter than a targeted tool for detecting adversarial samples specific to LLMs. I don't mean to suggest that the detection method has no value, but most LLM-focused adversarial attacks today generate adversarial samples that closely resemble benign samples in terms of structure, density, and even embedding similarity. This would likely limit CurvaLID's effectiveness against subtle adversarial prompts.\n\n3 There are also issues in the experimental design. CurvaLID was trained and evaluated on the same datasets—Orca, MMLU, AlpacaEval, and TQA—which could lead to overly optimistic results.\n\n4 I noticed that the manuscript mixes American and British spellings in some places (such as \"defence\" and \"defense\"). 
To maintain a professional and consistent tone throughout, I recommend choosing one spelling style—American or British—and using it uniformly across the article.\n\n5 The formatting of tables is inconsistent. Some tables use a three-line format, while others do not. A unified table style would improve clarity and presentation quality." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "CurvaLID is a unified algorithm that detects adversarial prompts in LLMs using Local Intrinsic Dimensionality and curvature, achieving consistent and over 0.99 accuracy across various models." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024curvalid,\ntitle={{CURVALID}: A Geometrically-guided Adversarial Prompt Detection},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v1qNr99R5n},\nnote={under review}\n}" }, "abstract": { "value": "Adversarial prompts that can jailbreak large language models (LLMs) and lead to undesirable behaviours pose a significant challenge to the safe deployment of LLMs. Existing defenses, such as input perturbation and adversarial training, depend on activating LLMs' defense mechanisms or fine-tuning LLMs individually, resulting in inconsistent performance across different prompts and LLMs. To address this, we propose CurvaLID, an algorithm that classifies benign and adversarial prompts by leveraging two complementary geometric measures: Local Intrinsic Dimensionality (LID) and curvature. LID provides an analysis of geometric differences at the prompt level, while curvature captures the degree of curvature in the manifolds and the semantic shifts at the word level. Together, these tools capture both prompt-level and word-level geometric properties, enhancing adversarial prompt detection. 
We demonstrate the limitations of using token-level LID, as applied in previous work, for capturing the geometric properties of text prompts. To address this, we propose PromptLID to calculate LID in prompt-level representations to explore the adversarial local subspace for detection. Additionally, we propose TextCurv to further analyze the local geometric structure of prompt manifolds by calculating the curvature in text prompts. CurvaLID achieves over 0.99 detection accuracy, effectively reducing the attack success rate of advanced adversarial prompts to zero or nearly zero. Importantly, CurvaLID provides a unified detection framework across different adversarial prompts and LLMs, as it achieves consistent performance regardless of the specific LLM targeted." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large language models", "Adversarial attacks", "Local Intrinsic Dimension", "Curvature" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/05a9568ddf21e7f58c2253821467b1bdd2bb24cb.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "CURVALID: A Geometrically-guided Adversarial Prompt Detection" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v1rFkElnIn
Decoupled Subgraph Federated Learning
main
Active
Federated Learning;Subgraph Federated Learning;Inter-Connected Graphs;GNN;Decoupled GCN
learning on graphs and other geometries & topologies
5;6;6
4;4;4
2;3;3
2;3;3
2;4;2
5.666667
4
2.666667
2.666667
2.666667
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "As in weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Subgraph FL with inter-connections is an important topic.\n2. Completing the missing L-hop features by learning an L-hop node structure embedding is an interesting idea.\n3. Experiments show the performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper works on subgraph FL for node classification, where inter-connections between different clients are important. \n\nIt first computes a global L-hop neighborhood matrix before training. During training, it uses a GNN for node feature embeddings and multiplies the L-hop matrix by a trainable matrix to calculate node structure embeddings. Both embeddings are concatenated to get the final prediction result. Experiments show the performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Privacy leakage. Before training, clients communicate to calculate the L-hop neighborhood matrix $\\hat{A}$. In the 2-hop case, since the client knows its 1-hop neighbors and the information exchanged during the communication, it is still able to reconstruct the 2-hop graph. 
Pruning cannot guarantee privacy.\n2. FedSage+ and FedGCN can outperform FedStruct.\n3. In FedGCN, the server does not require a global adjacency matrix for homomorphic encryption. It only needs to know the node IDs for encrypted aggregation and to identify which nodes belong to each client for sending the aggregation result back." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See the weakness." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1)\tThis paper studies a significant and interesting problem, and the method can be used in a wide range of real-world applications. \n2)\tThe paper is overall well motivated. The proposed model is reasonable and sound. Theoretical analysis is performed." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a novel framework, FEDSTRUCT, to tackle the challenge of federated learning on graph-structured data distributed across multiple clients, particularly in scenarios involving interconnected subgraphs. It utilizes explicit global graph structure information to capture inter-node dependencies. 
The effectiveness of FEDSTRUCT is validated through extensive experiments on six datasets for semi-supervised node classification, demonstrating performance that approaches that of centralized methods across various scenarios, including different data partitioning strategies, levels of label availability, and numbers of clients." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) The abstract lacks a description of the background. I recommend briefly outlining the context of the issues addressed in this paper before elaborating on the key problems that are solved.\n\n2) Figure 1 has not been cited and its placement is too early in the text; please adjust this detail. Additionally, Figure 2 is unclear; I recommend adjusting the proportions or border thickness of each subfigure.\n\n3) In the Related Work section, you mention that FED-STAR shares structural knowledge, yet in the conclusion, you state, \"No work has leveraged explicit structural information in SFL.\" Are \"structural knowledge\" and \"structural information\" the same concept? Please provide more clarification in the conclusion.\n\n4) The formula following (1) is missing a comma; please check for similar issues throughout the paper.\n\n5) Privacy is one of the directions addressed in this paper, yet most references are to other works. I suggest including some original proofs or experiments related to privacy to enhance the completeness of the article." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "see weaknesses.\n\nIf the first questions can be well explained, the rating should be higher." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The proposed method is novel, utilizing augmented explicit structure which can be regarded as global knowledge to promote the performance of the SFL model. \n2. Utilizing pruning decreases the computational complexity and communication costs.\n3. Well written, with a well-formulated problem." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a novel SFL method called FEDSTRUCT, which leverages the augmented explicit structure $\\bar{A}$ to promote the SFL model performance. Moreover, they propose HOP2VEC to learn local structure embeddings. FEDSTRUCT precalculates $\\bar{A}$ with privacy protection and prunes the $\\hat{A}$ matrix to decrease the computational complexity and communication costs, thus balancing the communication-privacy-accuracy trilemma." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Focus on Privacy**. How to obtain the local L-hop combined adjacency matrix without sharing the L-hop global adjacency matrix may play the core role in FEDSTRUCT. In Appendix D, in equations (30), (31), and (32), what does the $\\hat{A}^{[K]}_j$ mean? Should it be $\\hat{A}^{[k]}_j$? If so, the next question: for client $i$, how does it know all $\\hat{A}^{[k]}_j$ for $ k \\in [K]$ without sharing the global adjacency matrix among all clients? 
Another question: when computing, is $\\tilde{A}^{[i]}_j \\in \\mathbb{R}^{|\\tilde{V}_i| \\times |{V}_j|}$ the same as $\\hat{A}^{[i]}_j$? If so, $\\hat{A}^{[i]}_k \\times \\hat{A}^{[k]}_j$ should be $\\mathbb{R}^{|\\tilde{V}_k| \\times |{V}_k|} \\times \\mathbb{R}^{|\\tilde{V}_k| \\times |{V}_j|}$, but according to the definition before, $|\\tilde{V}_k| \\neq |{V}_k|$, so how does the computation continue? Maybe I am missing something? I really hope you can explain it for me to understand the feasibility of FEDSTRUCT. That's my main concern about this paper.\n\n2. **About the hyperparameters.** **1)** The analysis of $\\beta$, an essential parameter in FEDSTRUCT for various homophilic and heterophilic graphs, is not sufficient; it directly dominates the performance and affects the judgment of FEDSTRUCT's contributions. **2)** It is strange that the parameters $L_s$ and $L$ are set to 1 for the heterophilic graph Chameleon in Table 5. As the authors state in lines 297-299, a heterophilic graph should require multi-hop nodes and a high-frequency filter to augment the local graph representation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024decoupled,\ntitle={Decoupled Subgraph Federated Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v1rFkElnIn},\nnote={under review}\n}" }, "abstract": { "value": "We address the challenge of federated learning on graph-structured data distributed across multiple clients. Specifically, we focus on the prevalent scenario of interconnected subgraphs, where inter-connections between different clients play a critical role. We present a novel framework for this scenario, named FedStruct, that harnesses deep structural dependencies. 
To uphold privacy, unlike existing methods, FedStruct eliminates the necessity of sharing or generating sensitive node features or embeddings among clients. Instead, it leverages explicit global graph structure information to capture inter-node dependencies. We validate the effectiveness of FedStruct through experimental results conducted on six datasets for semi-supervised node classification, showcasing performance close to the centralized approach across various scenarios, including different data partitioning methods, varying levels of label availability, and number of clients." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Federated Learning", "Subgraph Federated Learning", "Inter-Connected Graphs", "GNN", "Decoupled GCN" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b3b1d0dc226f6dd7ac30eeb7faae000a011fa159.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. 
To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Decoupled Subgraph Federated Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v27yHgKtMv
Calibration of ordinal regression networks
main
Withdraw
Ordinal regression;Calibration;Deep neural networks;Unimodality;Loss function;Soft ordinal encoding;Label smoothing;Order-aware calibration
unsupervised, self-supervised, semi-supervised, and supervised representation learning
Daehwan Kim;Haejun Chung;Ikbeom Jang
~Daehwan_Kim4;~Haejun_Chung1;~Ikbeom_Jang1
3;3;5;5
5;4;4;5
1;2;3;2
2;2;2;2
1;2;2;2
4
4.5
2
2
1.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "We would like to express our sincere gratitude to the reviewers and Area Chairs for their invaluable feedback and insights. After careful consideration, we have decided to withdraw our submission to further develop and refine our paper based on these constructive comments. Thank you once again for your time and thoughtful evaluation." }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. 
Many things in the paper are not defined properly. Please clearly specify the definitions of calibration and unimodality. \n\n2. Why is MAE not used as a metric in the experiments, given that all the loss functions used are surrogates for it? It would be interesting to see comparison results on MAE.\n\n3. Please explain the regularization term in detail, as it is still unclear how it promotes unimodality. A graphical explanation would also help.\n\n4. A theoretical analysis of the calibration and unimodality of the proposed approach is missing. \n\n5. Is the proposed approach rank consistent? If not, then how frequently does it violate the ranking of thresholds? An experimental study on this would be helpful.\n\n6. How does the performance of the model vary with the value of $t$ used in the regularization term?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. A novel regularization term is used to promote unimodality.\n2. The paper is well written and easy to read." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors propose an approach for the calibration of ordinal regression. They propose a loss function that introduces order-aware calibration. They use soft ordinal encoding and label-smoothing-based regularization to enforce both calibration and unimodality. To show the efficiency of the proposed approach, the authors present extensive experimental results on benchmark datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Theoretical proofs of calibration and unimodality are missing." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. In equation 3, we generally adopt a hyperparameter $\\lambda$ to balance the loss and the regularization, like $L = L_1 + \\lambda L_2$. Could you explain why this method does not require such a hyperparameter, and why $t$ can control the strength of this regularization?\n\n2. Could you explain why SCE and ACE are improved a lot, but ECE is not?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The problem of calibration in the context of ordinal regression sounds novel and important. As far as I know, this should be the first work to address this issue.\n\n2. The improvement is significant empirically. From Table 2, we can observe a great improvement in the calibration of ordinal regression models, while the classification accuracy is preserved." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors aim to enhance the confidence calibration of ordinal regression in the training stage. The main challenge of this task is to consider calibration and unimodality together. 
To address this challenge, they propose a new loss function for ordinal regression, which combines order-aware calibration with a unimodal regularization term (based on the SORD encoding). In particular, their method enforces both calibration and unimodality by explicitly modeling the ordinal relationships between classes. The effectiveness of their method is validated on three public datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The L_{REG} defined in Equation 2 is not clearly explained. In particular, the design of I(r) is hard to understand for readers. It would be better if the authors could elaborate on how the regularization is constructed.\n\n2. The writing of the gradient analysis in Subsection 3.4 is not clear. The authors may need to improve the writing in this part, or it might be too challenging for readers to follow.\n\n3. The technical novelty of the proposed method is not presented. While the authors claim that the method considers unimodality (unlike current calibration methods) and calibration (unlike current ordinal losses), it is unclear whether the method is newly designed in each aspect. In other words, the authors may need to show the new insight of the calibration part compared to existing calibration methods.\n\ntypos:\n1. Line 49, Oridnal -> Ordinal.\n\nThe major issue of this work is the writing: readers cannot easily understand why we should design such a regularization and how t works here. I will improve my score if the authors can make it clear in the revised version."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "•\tThe motivation behind the new method is well-articulated, clearly highlighting the limitations of traditional cross-entropy (CE) loss in ordinal tasks and the miscalibration in modern ordinal regression models.\n•\tThe proposed method is straightforward, and easy to implement." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The proposed method introduces ORCU, a novel loss function designed to ensure calibration and unimodality in ordinal regression tasks. ORCU leverages soft ordinal encoding and order-aware regularization to produce calibrated and unimodal probability distributions, which are particularly valuable in high-stakes applications requiring reliable confidence estimates and accurate predictions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tIt is unclear how the regularization component of ORCU promotes calibration. 
Lines 240-252 discuss scenarios where the model is under or overconfident, yet this confidence is based on a soft-encoded distribution not directly related to the data, which raises questions about its reflection of \"real\" confidence. Additionally, I would appreciate a more rigorous explanation of how this regularization approach aligns with the standard mathematical definition of calibration. Could the authors provide a clearer mathematical justification for this relationship?\n\n2.\tThe paper claims that CE loss leads to overconfident predictions, yet the reliability diagrams presented indicate underconfident outcomes in the experiments, seemingly contradicting this claim. Temperature scaling, a prominent calibration technique, relies on CE, further challenging the assertion that CE is fundamentally flawed for calibration. Could the authors address this discrepancy and clarify why their results show underconfidence in CE where overconfidence might be expected? Additionally, a nuanced discussion of CE’s strengths and limitations for calibration, especially in light of techniques like temperature scaling, would be valuable.\n\n3.\tThe evaluation is limited to loss function baselines. Including additional non-loss-based methods for ordinal regression, such as the approach presented in https://arxiv.org/pdf/2303.04547 for unimodality could highlight the unique benefits of ORCU more effectively. I recommend incorporating a discussion on why ORCU and loss function-based methods may offer advantages over such approaches.\n\n4.\tThe experiments were conducted on only three datasets, which limits the scope for evaluating the method’s robustness across a wider range of ordinal regression tasks. Incorporating a more extensive dataset selection would allow for a better assessment of the generalizability of the approach." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- Please address the points I raised in the weaknesses part\n- The methods that are not based on CE loss - like optimal transport loss - how are the limitations applied to them? \n- Missing definition of z_{n.k} and intermediate step to get f’=y - p, also r is not defined before it presented \n- Can you elaborate please why the gradient encourages the model to distribute probability across adjacent classes ( as you claim in the sentence in Line 176) compared to the statement in Lines 147-149" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- This work addresses an important overconfidence issue in ordinal regression tasks\n- The proposed loss function is assumed to address both accuracy and confidence of the cross entropy loss based model during optimization without additional post-training calibration\n- The authors justify the unimodality enforcement of the proposed loss by gradient analysis" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors proposed an addition to the cross entropy loss term that corrects overconfident predictions for incorrect labels. 
The advantage of the proposed loss is that calibration is done jointly with accurate prediction learning during training optimization and doesn’t require additional post-training steps. The method was evaluated on three datasets, compared against CE-based ordinal models and calibration-loss-based methods, and showed an improvement in calibration/accuracy metrics." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Major**:\n- The main focus of the work is CE-loss-based ordinal regression, which is not an optimal loss for this task, and several methods were proposed without CE loss: [1-4]\n- The motivation in Sec. 3.1 is unclear: how is calibration defined, and why is it not implied by the CE loss? The discussion seems to be valid for the ordered nature of classes but not for calibration. It is better to discuss the motivation for each problem separately.\n - $\\mathcal{L}_{SCE}$ - the explanation in L175-177 is unclear: how does the defined loss encourage what the authors claim? Maybe it is explained by Diaz et al., but the manuscript should be self-contained with additional clarification. It is also not clear how it helps to reflect the ordinal relationships.\n- Sec. 3.3 is unclear; the explanation and derivation of the loss formula should come before presenting the loss term
By saying “*by increasing the gradient for such incorrect predictions, the model is able to reduce the predicted probability for the incorrect class more effectively*” you can claim the same for the standard CE loss. \n- while it could be seen from the results that calibration metrics improved, I’m not sure it is clear from the manuscript why it works.\n\n**Minor**:\n- Missing additional deep ordinal regression methods in the related work discussion\n- It is better to put Figure 1 closer to the gradient analysis section to make it easier to follow\n\nReferences:\n[1] Liu, X., et al. (2019a). Unimodal-uniform constrained wasserstein training for medical diagnosis. In Proceedings of the IEEE International Conference on Computer Vision Workshops\n\n[2] Beckham, C. et al. (2017). Unimodal probability distributions for deep ordinal classification.\n\n[3] Wenzhi Cao et. al (2020). Rank Consistent Ordinal Regression for Neural Networks with Application to Age Estimation. Pattern Recognition\n\n[4] Xintong Shi et al. (2021). Deep Neural Networks for Rank-Consistent Ordinal Regression Based On Conditional Probabilities.\n\n[5] Cardoso, J. S. et. al (2023). Unimodal distributions for ordinal regression" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a loss function that introduces order-aware calibration in ordinal regression tasks, combining soft ordinal encoding and label-smoothing-based regularization to enforce both calibration and unimodality." 
}, "_bibtex": { "value": "@misc{\nkim2024calibration,\ntitle={Calibration of ordinal regression networks},\nauthor={Daehwan Kim and Haejun Chung and Ikbeom Jang},\nyear={2024},\nurl={https://openreview.net/forum?id=v27yHgKtMv}\n}" }, "abstract": { "value": "Recent studies have shown that deep neural networks are not well-calibrated and produce over-confident predictions.\nThe miscalibration issue primarily stems from the minimization of cross-entropy, which aims to align predicted softmax probabilities with one-hot labels. In ordinal regression tasks, this problem is compounded by an additional challenge: the expectation that softmax probabilities should exhibit unimodal distribution is not met with cross-entropy. Rather, the ordinal regression literature has focused on unimodality and overlooked calibration. To address these issues, we propose a novel loss function that introduces order-aware calibration, ensuring that prediction confidence adheres to ordinal relationships between classes. It incorporates soft ordinal encoding and label-smoothing-based regularization to enforce both calibration and unimodality. Extensive experiments across three popular ordinal regression benchmarks demonstrate that our approach achieves state-of-the-art calibration without compromising accuracy." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Daehwan_Kim4", "~Haejun_Chung1", "~Ikbeom_Jang1" ] }, "authors": { "value": [ "Daehwan Kim", "Haejun Chung", "Ikbeom Jang" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Ordinal regression", "Calibration", "Deep neural networks", "Unimodality", "Loss function", "Soft ordinal encoding", "Label smoothing", "Order-aware calibration" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": { "value": "kim|calibration_of_ordinal_regression_networks" }, "pdf": { "value": "/pdf/68595e0558e86c2f45ec3d79e27ab7eb06ef8403.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/1442042c4f8b5c9a9fddfdf0db7ec9f12fc8d9d1.zip" }, "title": { "value": "Calibration of ordinal regression networks" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v2D1ASk5MT
Proposer-Agent-Evaluator (PAE): Autonomous Skill Discovery For Foundation Model Internet Agents
main
Active
VLM Agent;Web/GUI Agent;VLM;Reinforcement Learning;Skill Discovery
foundation or frontier models, including LLMs
3;3;5;5;8
4;4;3;4;4
2;2;3;2;3
1;2;2;2;4
3;2;3;2;4
4.8
3.8
2.4
2.2
2.8
-0.054554
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Will the author build a large-scale dataset that can be released to the public?\nThe author mentioned the model only generates sparse rewards, will the batch RL be affected since only parts of the data have rewards?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The experiments are extensive. This paper compares with many SOTA baselines and makes many useful analyses. It shows the effectiveness of the proposed framework. \n2. This paper is easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a framework, Proposer-Agent-Evaluator(PAE), to boost the performance of agents in web tasks. In more detail, it uses a foundation model to automatically generate task instructions and another model to evaluate task completion and generate a reward. Lastly, RL is used to train a policy model." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The main weakness is still the novelty. The method can be divided into two parts. Utilize existing foundation models to process data or automatically generate data. This part is not novel. 
Many other works have tried similar pipelines to collect data. The second part is using RL to fine-tune a model, which is also a standard procedure. \n\n\n[1] Navigating the Digital World as Humans Do: Universal Visual Grounding for GUI Agents" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- In PAE, why sample tasks uniformly rather than prioritizing by difficulty or learning progress?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Demonstrates an increase in performance for open-source model weights compared to previous open-source models.\n- PAE framework leverages self-generated tasks, avoiding static human-annotated instructions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents Proposer-Agent-Evaluator (PAE), a framework enabling autonomous skill discovery for vision-language model (VLM) internet agents. By employing a context-aware task proposer and an autonomous evaluator, PAE allows VLM agents to independently explore and refine skills for web navigation tasks, demonstrating improvements in success rates on benchmarks like WebVoyager and WebArena Easy compared to other open-source models.
However, the model still relies on closed-source VLMs for effective task generation and evaluation, thus partially constraining the proposed framework’s open-source utility." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The term \"model-based task proposer\" is used ambiguously. In reinforcement learning (RL), \"model-based\" generally implies use of a dynamics model for planning, whereas in this paper it refers to LLM prompting, which can lead to confusion in terminology.\n- The novelty and motivation of the contributions are limited. Although the paper claims state-of-the-art (SoTA) for open-source models, it requires closed-source models for generating and evaluating tasks. This reliance on closed-source models highlights that the bottleneck remains the quality of these closed-source models, rather than any specific methodological improvement introduced by PAE.\n- The paper makes an assumption that the ground-truth task distribution and reward functions are inaccessible, even though in simulated environments, these can often be directly obtained (e.g., verifying if a button was clicked). If the goal is to improve policy generalization skills, and use ground-truth distributions and rewards only as a way to evaluate the method, the paper would benefit from a clearer rationale and explanation for this assumption (Section 3.1).\n- The paper lacks references to relevant prior work in autonomous task proposal and RL, and some claims of novelty are inaccurate, given that similar approaches have been explored [1, 2, 3, 4].\n\n[1] Zhang, J., Zhang, J., Pertsch, K., Liu, Z., Ren, X., Chang, M., ... & Lim, J. J. (2023). Bootstrap your own skills: Learning to solve new tasks with large language model guidance. arXiv preprint arXiv:2310.10021.\n[2] Zhang, J., Lehman, J., Stanley, K., & Clune, J. (2023). Omni: Open-endedness via models of human notions of interestingness. 
arXiv preprint arXiv:2306.01711.\n[3] Faldor, M., Zhang, J., Cully, A., & Clune, J. (2024). OMNI-EPIC: Open-endedness via Models of human Notions of Interestingness with Environments Programmed in Code. arXiv preprint arXiv:2405.15568.\n[4] Colas, C., Teodorescu, L., Oudeyer, P. Y., Yuan, X., & Côté, M. A. (2023, November). Augmenting autotelic agents with large language models. In Conference on Lifelong Learning Agents (pp. 205-226). PMLR." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "How does PAE differ from other methods for skill discovery via foundation models such as OMNI?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper demonstrates several notable strengths. The authors conducted comprehensive evaluations of their PAE framework across multiple challenging benchmarks, including WebVoyager and WebArena. They compared their method against a range of baselines, including proprietary models, state-of-the-art open-source vision-language models, and supervised fine-tuning approaches. 
This thorough evaluation provides a clear picture of PAE's performance in relation to existing methods.\n\nThe paper also presents a detailed analysis of the results, including error analysis and generalization studies. The authors break down different types of errors made by the models, providing insights into where improvements are made and what challenges remain. They also examine how well the skills learned through PAE transfer to unseen websites, demonstrating the framework's ability to develop general web browsing capabilities. Additionally, the authors investigate how PAE scales with larger base models and explore the impact of different context information on performance. This level of analysis and discussion significantly strengthens the paper by providing a nuanced understanding of the method's capabilities and limitations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Proposer-Agent-Evaluator (PAE), a novel framework for autonomous skill discovery in foundation model agents, particularly focusing on Internet agents. PAE consists of three key components: a context-aware task proposer, an agent policy, and an autonomous evaluator. The framework enables agents to autonomously discover and practice skills without human supervision, potentially leading to a more diverse and scalable skill repertoire. The authors validate PAE on challenging vision-based web navigation tasks using both real-world and self-hosted websites. Results show that PAE significantly improves the zero-shot generalization capability of VLM Internet agents. Notably, the PAE-trained model outperforms other state-of-the-art open-source VLM agents. The authors claim this work represents the first attempt to apply autonomous task proposal with reinforcement learning for agents, achieving state-of-the-art performance among open-source models." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I am a little unclear about the novelty of the actual algorithm underlying PAE in comparison to something like OMNI (https://arxiv.org/pdf/2306.01711) and subsequent work. I think applying this kind of open-ended task selection for skill discovery to the internet agent case is definitely unique, but how does PAE differ from other open-ended skill discovery algorithms in other RL domains? If the authors could clarify this either in the related work section or after introducing their method in section 3.3 that would be really helpful.\n\nOverall the results of the paper are strong, but framing the algorithmic novelty can be done with more clarity. If the novelty arises from the domain, then this point and the difficulties of internet-based environments should be emphasized more, rather than treated as just a potential application case, as section 4 does." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Response to the questions and concerns mentioned in the Weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is well written and easy to follow.
The problem of foundation model agents and the PAE method is formally defined and clearly illustrated with figures. The idea of the Proposer-Agent-Evaluator framework seems reasonable in that the proposer should approximate the real-world task distribution and the agent learns to maximize the autonomous reward function.\n2. Extensive and systematic experiments are carried out to verify the effectiveness of PAE compared with untrained VLMs and SFT training. Error analysis and the study of context complexity also show that the context-aware method can discover low-level web skills." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new framework called Proposer-Agent-Evaluator (PAE) which allows foundation model agents to autonomously discover and practice skills in the wild. The framework consists of a context-aware task proposer that suggests tasks for the agent to practice with website context information, an agent policy that attempts these tasks in the real world, and an autonomous model-based success evaluator that evaluates the resulting trajectories. The success evaluation serves as the reward signal for the agent to refine its policies. The authors validate PAE on challenging vision-based web navigation using real-world and self-hosted websites and demonstrate significant improvements in the zero-shot generalization capability of VLM Internet agents compared to state-of-the-art open-source VLM agents." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the PAE framework seems reasonable, the method itself is naive and lacks novelty. Basically, the authors train a VLM with data containing instructions and successful trajectories relying on the proprietary VLMs, which is a common approach and does not include inspirational techniques. 
Although human annotation may overlook some long-tail real-world tasks, it's unclear why VLMs provided with web context can approximate the real-world task distribution. Also, behavior cloning the successful trajectories labeled by a VLM should not be described as reinforcement learning (line 117).\n 2. It's not explained why behavior cloning is adopted instead of other training methods including SFT to discover web skills. Is the LLaVa-SFT baseline trained on the same trajectories as the proposed PAE model? What are the performances of other leading-edge VLMs such as GPT-4o? Comparisons with other trained web agents on Web Voyager or Web Arena are also missing.\n3. Several typos in line 91 ('a foundation model agents') and line 191 (task distribution R)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "Do you have the right permissions in place to sample from the user demos on the websites (Section 4.2 for the context aware task proposer)" }, "flag_for_ethics_review": { "value": [ "Yes, Legal compliance (e.g., GDPR, copyright, terms of use)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Clarification Questions:\n\n-\tFor all components, the task proposer, autonomous evaluator and agent policy, could you elaborate more on the choice of VLMs: Claude-3-Sonnet and the LLaVa models? Why those and not others?\n-\tFor evaluation, could you elaborate on your choice of open-source VLMs? 
Why are they chosen as opposed to others?\n-\tIn the main results, under the scaling paragraph, line 400, I’m not quite clear on the following statement “Again, LLaVa-34B PAE beats LLaVa-7B SFT on 12 out of 13..”. Is that the intended comparison, or did you mean to compare across PAE models only?\n-\tIn the main results, under the generalization paragraph, line 405, why is LLaVa-7B PAE highlighted, when actually you present results for the larger LLaVa-34B PAE as well in Table 3? Also in Table 3, why did you drop some of the other models from Tables 1 and 2 (Claude 3.5, InternVL and vanilla LLaVa-7/34B)?\n-\tIn the alignment with human judgements, could you elaborate on how many human annotators were included in the user study and if possible how many hours they allocated to the task?\n\nMinor comments/suggestions:\n\n-\tThere are a few minor typos: on line 91 “for foundation model agents” (remove “a”), on line 191 in section 3.2, task distribution should be C instead of R, on line 485 “LLaVa-7B SFT knows that”\n-\tFor consistency across the entire paper, use Claude-3-Sonnet – on line 269 in section 4.3, it is referred to as Claude-Sonnet-3.\n-\tLine 317 the LLaVa model name needs to be in bold to align with the others." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The author(s) highlight and prove the feasibility of using much smaller VLMs and thus significantly lowering the test-time compute through their method, as opposed to finetuning larger models.\n\n- The author(s) are willing to opensource the code and models to encourage further development in the research community.\n\n- Overall, the paper is easy to follow and clearly motivates the research question addressed and the main contribution in relation to existing literature.
Concepts introduced in the main paper nicely point to corresponding appendix sections when needed, which provide further detail.\n\n- The author(s) are doing a great job explaining how their work differentiates itself from existing research along 3 separate dimensions: foundation model agents, self-generated instructions and unsupervised skill discovery in RL. I particularly appreciate the focus on enabling the community to use much smaller sized models in terms of parameters, and therefore significantly less test-time compute.\n\n- I appreciated the analysis on failure modes in Section 6, highlighting the value added by the PAE method, as opposed to SFT.\n\n - It is very good to see that each action contains a chain-of-thought element, aiding interpretability of the model choices. The author(s) also take preventive measures, as highlighted in their Ethics statements." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a framework for autonomous skill discovery for real-world Internet agents, called PAE (Proposer Agent Evaluator). Compared to existing approaches, this method allows for the agent to collect, explore and refine new skills automatically, without relying on a limited set of predefined human-annotated instructions. The author(s) show how the additional PAE components introduced (the task proposer, the agent policy and the autonomous evaluator) allow for zero-shot SOTA generalization performance, as well as better success rates against opensource VLMs and finetuned opensource VLMs on 2 environments, WebVoyager and WebArena."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- To strengthen the submission/contribution, it would be good to elaborate on why proprietary VLMs outperform all opensource methods.\n\n- It would be great to add some details/discussion on inference times, comparing the opensource VLMs, the SFT variation and PAE.\n\n- It wasn't very clear to me how you define the reward for the agent policy, could you elaborate on that?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024proposeragentevaluator,\ntitle={Proposer-Agent-Evaluator ({PAE}): Autonomous Skill Discovery For Foundation Model Internet Agents},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v2D1ASk5MT},\nnote={under review}\n}" }, "abstract": { "value": "The vision of a broadly capable and goal-directed agent, such as an Internet-browsing agent in the digital world and a household humanoid in the physical world, has rapidly advanced, thanks to the generalization capability of foundation models. Such a generalist agent needs to have a large and diverse skill repertoire, such as finding directions between two travel locations and buying specific items from the Internet. If each skill needs to be specified manually through a fixed set of human-annotated instructions, the agent's skill repertoire will necessarily be limited due to the quantity and diversity of human-annotated instructions. In this work, we address this challenge by introducing Proposer-Agent-Evaluator (PAE), a novel framework that enables foundation model agents to autonomously discover and practice skills in the wild. 
At the heart of PAE is a context-aware task proposer that autonomously proposes tasks for the agent to practice with context information of the websites such as user demos or even just the name of the website itself. Then, the agent policy attempts those tasks in the real world with resulting trajectories evaluated by an autonomous model-based success evaluator. The success evaluation serves as the reward signal for the agent to refine its policies through RL. We validate PAE on challenging vision-based web navigation, using both real-world and self-hosted websites from WebVoyager and WebArena. Our results show that PAE significantly improves the zero-shot generalization capability of VLM Internet agents (more than 30\\% relative improvement) to both unseen tasks and websites. Our model also achieves an absolute advantage of over 10\\% (from 22.6\\% to 33.0\\%) comparing to other state-of-the-art open source VLM agents including Qwen2VL-72B. To the best of our knowledge, this work represents the first attempt to apply autonomous task proposal with RL for agents, achieving SOTA performance among open-source models. We plan to release our models and code to facilitate further research." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "VLM Agent", "Web/GUI Agent", "VLM", "Reinforcement Learning", "Skill Discovery" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/0aabb0e488a607064496d67722adb71ad368919a.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/48560904762adc22d5ec38b5486cf459a69bf043.pdf" }, "title": { "value": "Proposer-Agent-Evaluator (PAE): Autonomous Skill Discovery For Foundation Model Internet Agents" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v2NuTf6Kww
Network-based Active Inference for Adaptive and Cost-efficient Real-World Applications: PV Panel Inspection
main
Active
Active Inference (AIF);Free Energy Principle (FEP);Robotics;Trajectory generation;Random dynamical systems;Random attractor dynamics;Non-Equilibrium Steady State (NESS);Adaptive control;Industrial automation;Computational efficiency;Cost-efficient solutions
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
1;1;3;3;5;5
2;4;3;4;3;5
2;1;2;1;2;2
1;1;1;1;2;2
1;2;1;2;2;2
3
3.5
1.666667
1.333333
1.666667
0.426401
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- In section 1.3, what is the definition of \"surprise\"?\n- It is stated in Section 1.3 that DRL requires \"fixed environments\", which I believe is not true. Could you please clarify this?\n- The equation on page \"v\" is vague" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The NetAIF framework, especially the application of random pullback attractors to it, is interesting" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes NetAIF, a Network-based Active Inference framework, based on random attractor dynamics and the Free Energy Principle (FEP), to improve adaptive control problems in robotic systems, and PV panel inspection. The experimental results show that NetAIF performs well on PV panel inspection and robotic tasks."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The introduction contains several statements and justifications without clear evidence or relevant citation.\n- The number of references used in this study is low (only 21), several of which are not peer-reviewed.\n- There is a rich body of literature that supports using (deep)RL for similar applications. The application of these approaches is not studied in detail. \n- I expected the authors to use SOTA (deep)RL and AIF approaches as baselines for comparison with NetAIF. Therefore, the experiment section is limited\n- The paper lacks adequate background and theory about different components of the method such as AIF, FEP, RDS, and random attractor dynamics.\n- The equation numbers are missing\n- Section 2 lacks effective connections between its subsections, which results in disjointed and potentially misleading descriptions. This section should be thoroughly revised to improve readability and flow. Additionally, the relationship between the equations and Algorithm 1 is unclear." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "The re-use of considerable parts of text between the two papers may be problematic; it may be worth checking it.\n\nThere seems to be strong self-plagiarism." 
}, "flag_for_ethics_review": { "value": [ "Yes, Research integrity issues (e.g., plagiarism, dual submission)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Can the authors please elaborate on the differences between this paper and the connected one?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- If verified thoroughly, the reduced computational requirements and real-time capabilities are promising.\n- The method is designed for practical applications, which is nice." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes Network-based Active Inference (NetAIF), a framework for trajectory generation and control in robotics. NetAIF integrates random attractor dynamics and the Free Energy Principle (FEP) to create a system that can adapt in real-time with low computational requirements. The authors focus on a practical application in PV panel inspections, demonstrating the system's efficiency, adaptability, and robustness. Evaluation is performed in a simulated Mujoco environment, and on a very simple task on a real robot." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The implementation details are not very clear; from what I understand, the method does not seem particularly novel.\n\n- Experimental Validation: The evaluation is limited to simulations and controlled lab conditions, which makes the results not very convincing. 
In particular, no baseline is provided.\n\n- Figures 1 and 2 are not particularly informative, nor are their captions (e.g., for figure 1, \"parameters that determine the network\nstructure such as number of layers, strides were determined through hyper parameter search\").\n- Table 4 seems overkill to report the results; the values would be more appropriate just mentioned in the text (as they are) and in the caption of figure 11.\n\n- Minor: is the template correct? This is the only one among the ICLR papers I reviewed that used roman numerals for the page numbers.\n- Minor: reporting network sizes in bytes is not very useful, and it's the first time I see it in a paper. Reporting the number of parameters is better.\n\n- Possibly major: The authors cite a paper from the same group in concurrent submission to ICLR ( https://openreview.net/pdf?id=Hm7RYDspQP ); the two papers seem to have major overlap (including using many of the same figures and whole sections / parts of the text). It feels like the authors have tried to write two papers on the same method, splitting evaluations and applications." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- What do you mean with \"Unlike DRL, which requires fixed environments,\". 
DRL methods can generalize quite well to unstructured unseen environments, see for instance https://arxiv.org/abs/2109.11978, https://arxiv.org/pdf/2010.11251\n\n- The paper does not address how the model would adapt to obstacles in the environment. How does the current approach account for or avoid collisions?\n\n- The statement “the random attractor dynamically explores within the constraints set by the control law” raises some questions:\n - How are these constraints practically defined?\n - What control law ensure that exploration remains controlled and the robot behaves predictably during this process?\n\n- The pseudo algorithm suggests that prediction errors are minimized by randomly sampling new weights, with updates only occurring if the input surpasses a certain threshold. A few points to clarify:\n - How is this threshold determined?\n - Why is this threshold-based sampling considered an optimal method for weight updates?\n - What does it mean to reset a weight? Is it reset to a default value? If so, which value?\n\n - “The joint pose was directly fed into the system, and the attractor calculated waypoints for a smooth and efficient trajectory to the specified pose,”:\n - What specifically is the simple control law—is it a linear law?\n - How are waypoints computed? Is it a linear interpolation between current and target joint angles?\n - Why is there not a single goal-based attractor, and does the choice affect stability?\n\n- The text states that “noise affecting the system at time t is related to past noise.” Does this formulation assume a colored noise model?\n\n- Based on Table I, the robotic arm with 6 DOF has a position accuracy of 14 mm from a fixed base. While this may suffice for inspecting large panels, it is insufficient for precision tasks, like assembly, which require sub-millimeter accuracy." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The authors propose a novel robot control algorithm, and evaluate both in simulation and on a real robot arm." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents \"Network-based Active Inference\" (NetAIF), a novel framework for trajectory generation and control in robotics. The main novelty is replacing traditional activation functions with a discrete weight-assigning mechanism, especially for a system focused on achieving and maintaining stability within a NESS framework. The authors mainly focus on a photovoltaic panel inspection task, where a robot arm needs to reach a pre-determined distance from a panel, perpendicular to the panel's surface. The experiments evaluate error and planning time of the system both in simulation and on a real robot arm." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper focuses on the particular task of PV panel inspection in title and abstract. However, the actual task considered is controlling a robotic arm to a particular end-effector pose and orientation. I would think there are many other approaches to do PV panel inspection, and wouldn't necessarily require a robot arm (e.g. using a drone for instance). Also the actual \"visual inspection\" is not addressed. \n\n- The introduction discusses deep RL systems, but the paper does no comparison against any of those, but refers to another paper in the conclusion.\n\n- The methodology section lacks detail, making it challenging to reproduce the work. 
Comparative analysis with traditional control methods in particular for visual servoing, which are well-established for inspection tasks, is also absent.\n\n- The paper lacks a formal proof of system stability. To verify stability, the authors might consider using Lyapunov’s stability criteria. Identifying a well-defined Lyapunov function could reveal whether trajectories converge to an attractor, A(ω), suggesting stability." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "see questions in weaknesses" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper proposes an alternative to reinforcement learning by adopting the Active Inference framework from cognitive neuroscience, in which perception, action, and learning are obtained through the minimization of variational free energy." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces NetAIF, a method that integrates random attractor dynamics and the Free Energy Principle (FEP) to improve trajectory generation and control in robotics. The paper shows the performance of NetAIF in applications of PV Inspection." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Overall, the paper is not well-written and hard to follow. It gives a superficial idea without going through technical details. I do not understand the mathematics behind NetAIF from this paper. More explicit mathematical equations/formulas should be written: what are the explicit formulas of (variational) Free Energy and surprise? What is the structure of the networks? Are they multilayer perceptrons? What are the parameters and how are they being updated? How are the explicit feedback loops sent to the previous layers?\n\nAlgorithm 1 is not clearly explained and hard to reproduce. Please elaborate on what each variable represents and how it is obtained/calculated.\n\nAlgorithm 1 uses a “desired state”, which means it requires the ground truth of the optimal trajectory. Would this not be equivalent to having a reward or a set of expert demonstrations in deep reinforcement learning?\n\nExperiments lack benchmarks and comparisons with DRL methods. It is not clear whether the results are good. Are PRM and Hybrid RRT-PRM tested in this experiment?\n\nIt is also not clear how your work is different from “Deep active inference as variational policy gradients” (Millidge, 2020) and “Deep active inference” (K. Ueltzhoffer, 2018)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "no concerns on ethics" }, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Grammar and sentence structure: the paper could use a read-through to improve sentence structure and layout; several typos and omissions could also be rectified\nex. Active Inference (AIF) = Active Inference Framework \nIn my opinion, given the complexity of the proposed solution, the paper could benefit from a more rigorous mathematical treatment. While the work uses a novel approach, the paper lacks mathematical rigor and relies very heavily on published work, forcing the reader to read several papers before extracting value from this work. While citations are needed and not everything needs to be reintroduced, the paper should have enough content to stand on its own. \nMathematical Model Integration: can the authors incorporate mathematical models demonstrating how NetAIF functions in practice? This could involve equations showing how the network computes trajectories, adjusts weights, or minimizes prediction errors dynamically.\nFormal Optimization Framework: Strongly suggest that the authors introduce a formal optimization framework to enhance the credibility of claims regarding efficiency and adaptability. For instance, showing how free energy is minimized using a variational approach or Bayesian inference would connect the theoretical claims with a concrete mathematical foundation.\nSensitivity Analysis and Stability Proofs: This is another weakness of the paper. Suggest the authors include a mathematical analysis of system stability and sensitivity to changes in parameters or inputs.
This would validate the robustness of NetAIF and its applicability to real-world scenarios beyond the initial PV panel inspection.\nStability analysis: it would be very beneficial to see a formal treatment of the intentional use of controlled instability and/or some plots in that regard." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Novelty: The use of FEP with random attractor dynamics for real-time control is innovative. This combination offers an efficient alternative to Deep Reinforcement Learning (DRL) by minimizing computational requirements and avoiding extensive pre-training.\nReal-world Applicability: The application of NetAIF in PV panel inspections is practical and addresses significant challenges in the clean energy sector. The use of a physical 6-DoF robotic arm for experiments shows a commitment to real-world validation.\nEfficiency: The framework’s low computational footprint and rapid adaptability in dynamic environments are highlighted as key benefits, making it suitable for industries needing quick, cost-effective automation solutions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Network-based Active Inference (NetAIF) as a novel framework that leverages the Free Energy Principle (FEP) and random attractor dynamics for efficient and adaptive robotics. \nThe authors demonstrate their ideas through the use case of a photovoltaic (PV) panel inspection. The authors claim that \"NetAIF optimizes the intrinsic dynamics of neural networks and enables robots to quickly adapt to dynamic and complex real-world environments with minimal computational resources and without the need for extensive pre-training.
Unlike traditional learning methods that rely on large datasets and prolonged training periods, NetAIF offers a more efficient alternative.\"\nThe authors also provide supplementary material, which is a slide show of the paper, but the experiments with the 6DOF robotic arm show performance on only a few experiments in a lab setting. \n\nWhile the work uses a novel approach, the paper lacks mathematical rigor and relies very heavily on a few papers, forcing the reader to read several papers before extracting value from this work. While citations are needed and not everything needs to be reintroduced, the paper should have enough content to stand on its own. This makes it challenging to evaluate and reproduce the proposed NetAIF framework. Without any formal mathematical treatment, the claims remain at a high level, reducing the work’s overall scientific and practical value. Integrating a comprehensive mathematical model would significantly enhance the credibility and applicability of the approach. For example, the use of stochastic processes and attractors should be backed by a comprehensive set of equations that demonstrate how these elements interact dynamically within the network. Without this, it is unclear how the system transitions between states, adapts to sensory inputs, and minimizes free energy in a precise manner. Additionally, intentionally introducing instability has risks, and managing that instability should be critical for convergence; the authors mention controlled instabilities and cite a paper but do not provide any formal treatment or plots showing what this looks like." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Clarity: The explanation of the technical mechanisms, such as the use of random attractors and the free energy landscape, can be complex and may not be accessible to readers unfamiliar with advanced robotics or neural dynamics.
Visual aids, while present, could have been better integrated to explain these concepts more intuitively.\nReproducibility: The paper lacks detailed implementation specifics, such as parameter settings and hyperparameters used during the experiments, which limits the reproducibility of results. Clearer code snippets or references to an open-source implementation would enhance its utility for other researchers.\nStrong Assumptions: The paper assumes that the Free Energy Principle (FEP) is applicable to all dynamic robotic systems without extensive empirical validation outside the PV panel inspection case. Additionally, the scalability of the model to other industries or tasks is presumed but not explicitly tested.\nCitations: Citations lack depth and specificity, especially in sections where novel methods are introduced.\nAuthors often rely on generic references rather than recent, more relevant studies directly supporting the claims made in the paper.\nSome citations are self-referential, reducing the overall credibility. The paper would benefit from a more thorough literature review, inclusion of detailed empirical comparisons from other studies, and references to supplementary or reproducible materials that validate the methods described.\nMathematical rigor: While the work uses a novel approach, the paper lacks mathematical rigor. This makes it challenging to evaluate and reproduce the proposed NetAIF framework. Without any formal mathematical treatment, the claims remain at a high level, reducing the work’s overall scientific and practical value.
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. In Algorithm 1, what is the difference between Hidden_signals_prev and Hidden signals in lines 12-13?\n\n2. Are there any drawbacks (costs) to inducing the random attractors?\n\n3. The function of feedback is still a bit confusing. Can its implicit structure be described in Fig 1?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The scheme of integrating active inference and neural networks.\nThe experimental demonstrations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "By integrating active inference and neural network learning schemes, this paper presents a novel control framework, aiming to adapt to an unknown dynamical environment without pre-training and extensive computations. The focused topic is important considering the shortcomings of traditional deep reinforcement learning. Several numerical simulations and experiments are conducted to demonstrate the proposed control strategy. Although some initial valuable results have been provided in this paper, there are still many important improvements that need to be considered, including clearer presentation, solid theory, and extensive comparisons."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Experimental comparisons** are lacking in this paper. The authors claim the proposed method has robustness and lightweight advantages in comparison with DRL, but no quantitative comparison results are provided. Maybe some of the results are mentioned in the companion paper, but this paper needs to be self-contained. Moreover, the proposed strategy can avoid the traditional planning module, and some feedback scheme is integrated into the proposed strategy. It would be interesting to see how the proposed method compares with the traditional approach (planning + feedback control, or MPC).\n\n2. Improve the presentation quality. The symbols used in this paper are unfamiliar and lack explanation. The symbols employed in Fig 3 need to be explained; otherwise, the purpose of intuitive presentation is lost. \n\n3. The key idea of NetAIF is to introduce controlled instabilities via random attractors while the whole system stays within the safety region. More rigorous convergence or stability analyses are needed.\n\n4. Experimental details need to be added. For example, the accurately measured states, noisy states, and unknown states in Section 3.1 should be clearly provided.\n\n5. An ablation study is needed. It seems that the proposed framework consists of several parts; the functionality of each part needs to be quantified. For example, the efficiency of the replaced discrete weight-assigning mechanism is unknown. \n\n6. Finally, the limitations should be analyzed. It is highly recommended to provide the source code." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper introduces Network-based Active Inference (NetAIF), a game-changing framework that merges Active Inference with network dynamics, enabling adaptive real-time robotic control while dramatically reducing computational costs and time."
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024networkbased,\ntitle={Network-based Active Inference for Adaptive and Cost-efficient Real-World Applications: {PV} Panel Inspection},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v2NuTf6Kww},\nnote={under review}\n}" }, "abstract": { "value": "This paper introduces Network-based Active Inference (NetAIF), a novel framework that integrates random attractor dynamics and the Free Energy Principle (FEP) to improve trajectory generation and control in robotics. NetAIF optimizes the intrinsic dynamics of neural networks, enabling robots to quickly adapt to dynamic and complex real-world environments with minimal computational resources and without the need for extensive pre-training. Unlike traditional learning methods that rely on large datasets and prolonged training periods, NetAIF offers a more efficient alternative. \n\nIn real-world scenarios, such as Photovoltaic (PV) panel inspections, NetAIF demonstrates its ability to execute dynamic tasks with both high efficiency and robustness. The system excels in unpredictable environments while maintaining a low computational footprint. These capabilities make NetAIF a promising solution for industrial applications, offering cost-effective, adaptive robotic systems that can reduce operational expenses and enhance performance, particularly in sectors like energy, where adaptability and precision are crucial." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Active Inference (AIF)", "Free Energy Principle (FEP)", "Robotics", "Trajectory generation", "Random dynamical systems", "Random attractor dynamics", "Non-Equilibrium Steady State (NESS)", "Adaptive control", "Industrial automation", "Computational efficiency", "Cost-efficient solutions" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/0cf50653eb955fc5e6acd39b5559912d783b4cd0.pdf" }, "presentation": null, "primary_area": { "value": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/6d2ad4a3f3c4ebb719dfea5cd195d75ec469a1d5.zip" }, "title": { "value": "Network-based Active Inference for Adaptive and Cost-efficient Real-World Applications: PV Panel Inspection" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v2nEL42Pvb
SSGNN: Simple Yet Effective Spectral Graph Neural Network
main
Active
Spectral Graph Neural Networks;Graph Representation Learning
unsupervised, self-supervised, semi-supervised, and supervised representation learning
5;5;5;5
4;4;4;4
3;2;2;3
3;2;2;3
2;2;2;2
5
4
2.5
2.5
2
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- How to compute the specturm? It appears to rely on time-consuming SVD operations.\n- The feasibility of applying this method to large-scale graphs is uncertain.\n- Runing time comparsion need to be added\n- Notable baseline methods are missing from the evaluation, particularly:\n - SGC (Simple Graph Convolution)\n - SSGC (Simple Spectral Graph Convolution)" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper presents an interesting transformation from scalar-to-scalar to set-to-set methodology, building upon the Spectral Former framework. By introducing a learnable parameter W to capture relationships between different frequency domain eigenvalues, the authors aim to enhance model performance through better consideration of inter-frequency domain relationships." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose SSGNN, a simple yet effective spectral-based Graph Neural Network that captures rich spectral information through an adaptive set-to-set filtering approach, offering a more efficient alternative to transformer-based methods. 
The method introduces a parameter-free Relative Gaussian Amplifier (ReGA) module for robust spectral filtering, and demonstrates superior or comparable performance to state-of-the-art models while using significantly fewer parameters (55x reduction) and computational resources (100x reduction in GFLOPs) across 20 real-world datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- However, the methodology lacks clarity regarding the eigenvalue computation process, which appears to rely on time-consuming SVD operations, raising concerns about computational efficiency.\n\n- Given that your proposed method emphasizes simplicity and reduced learnable parameters, it would be particularly valuable to demonstrate its effectiveness on large-scale graphs. While the reduction in parameter count is noteworthy, the real advantage of a simpler model should be its ability to scale effectively to larger, real-world graph applications. Therefore, I strongly recommend including comprehensive experiments on large-scale graph datasets to validate the method's practical utility. This would not only strengthen your contribution but also clearly differentiate your work from existing methods that may struggle with scalability.\n\n- From a comparative standpoint, although the spectral approach shows promise, the evaluation lacks comprehensive comparisons with important baseline methods, particularly SGC and SSGC. These baselines are especially relevant as they also prioritize simplicity and efficiency. To make your contribution more compelling, consider expanding the experimental section to include: (1) comparisons with these relevant baselines, (2) clear documentation of the eigenvalue computation process and its efficiency, and (3) thorough scalability analysis on large-scale graphs that would demonstrate the practical advantages of your simplified approach. 
This would help readers better understand the unique benefits of your method in real-world applications where scalability is crucial." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1、\tThe motivation of this paper is significant and valuable for Spectral GNNs.\n2、\tSSGNN achieves significant efficiency in terms of computation and parameters.\n3、\tExperimental results in most cases seem promising." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the improvements of spectral GNNs, which aims to design an expressive filter based on spectral graph theory for effective graph representations. \nThe paper points out that existing SOTA spectral GNNs bring more computational burden, though they can learn the filters better. Thus, the paper proposes a novel efficient framework, namely SSGNN, which only applies simple linear transformation instead of Transformers on the spectrum. Moreover, SSGNN incorporates a parameter-free Relative Gaussian Amplifier to the decoder to enhance adaptive filter learning and maintain stability. 
The paper conducts extensive experiments on synthetic and real-world datasets to demonstrate the effectiveness of SSGNN." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The novelty of the method is somewhat limited, especially the direct application of the existing eigen-correction, eigenvalue encoding, and convolution framework without any transfer challenge.\n2. The description of the “set” in “set-to-set filtering” is ambiguous. Specformer applies a Transformer to eigenvalue encodings, which enables filters to capture relative dependencies among the eigenvalues. Thus, the “set” of “set-to-set” in Specformer means the set of eigenvalues. However, SSGNN learns the spectral filter through linear transformations, so eigenvalues don’t interact with each other. Thus, what’s the meaning of “set”?\n3. The respective roles that the two linear transformations (namely W_{eig} and W_1W_h) play in the encoder and decoder are not clear. In other words, why do the authors include W_1 and W_h in the decoder instead of the encoder?\n4. The paper omits some essential experiments.\n(1) As mentioned in Line 217, different heads allow the decoder to learn diverse spectral filtering patterns. However, there is no visualization of the diverse spectral filters learned by different heads to verify this conclusion.\n(2) There is no ablation study on the effectiveness of the re-center adjustment in Equation 4 and the effectiveness of the Relative Gaussian Amplifier in Equation 6.\n(3) The authors don’t verify the stability of SSGNN on OOD benchmarks such as DrugOOD [1], on which many model stability studies are validated.\n5. The symbols are ambiguous. For example, \epsilon is used in Line 182, Line 240-241, and Line 307 simultaneously, making the paper more difficult to read. Moreover, it’s not clear on which \epsilon the ablation experiments reported in Figure 4 are conducted.\n\n[1] Ji, Yuanfeng, et al.
\"Drugood: Out-of-distribution dataset curator and benchmark for ai-aided drug discovery–a focus on affinity prediction problems with noise annotations.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 7. 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The design of the ReGA component in SSGNN has some novelty. It not only avoids negative eigenvalues but also improves the robustness of SSGNN.\n\n2. This paper conducts extensive experiments and SSGNN also shows competitive performance. For example, in the ZINC dataset, SSGNN has a RMSE value of 0.0592." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a Simple Yet Effective Spectral Graph Neural Network (SSGNN), which simplifies the set-to-set graph filter, e.g., Specformer, without performance degeneration. The key component is a parameter-free Relative Gaussian Amplifier (ReGA), which not only improves model performance but also maintains the robustness against graph perturbations. 
Extensive experiments on both node-level and graph-level datasets demonstrate the superiority of SSGNN in terms of effectiveness and efficiency over baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. This paper claims that SSGNN uses a **simplified set-to-set approach** to capture key spectral features. However, there is no interaction between different eigenvalues in SSGNN. Specifically, the matrix $Z_{eig} \in \mathbb{R}^{N \times (d+1)}$ indicates the $(d+1)$-dimensional representation of each eigenvalue. The transformation $W_{eig}$ is applied to the channels of a single eigenvalue, i.e., $Z_{eig}W_{eig}$. Therefore, SSGNN is not a set-to-set approach.\n\n2. SSGNN involves a lot of tricks in the training process, such as eigen-correction. However, neither Specformer nor Polynormer uses eigen-correction for data preprocessing. In this case, it is necessary to make a comprehensive ablation study to validate the role of each trick. We need to verify whether the performance improvement mainly comes from ReGA rather than eigen-correction.\n\n3. It would be better if the authors could provide a comparison of the time and space overhead between different methods.\n\n4. How many parameters does SSGNN have in Table 5? Generally, the number of parameters is controlled to around 50K on the ZINC dataset." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "In the abstract, “Our analysis indicates that applying Transformers to these filters provides minimal advantage in the spectral domain.” While such analysis seems to be missing in the content. Could you elaborate it more?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1-\tThe method is simple and effective on multiple graph downstream tasks.\n\n2-\tThe experiments are comprehensive and solid.\n\n3-\tThe theoretical analysis of ReGA is interesting and novel." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes SSGNN, a simple and effective GNN model which can achieve good performance with much reduced parameters compared to transformers. With basic spectral encoder-decoder structure, it further incorporates a REGA module to strengthen the representational capabilities and the robustness against spectral perturbation. Experiments demonstrate its effectiveness and superiority on model parameters." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1-\tThe time and space cost in pre-computation and training stage of SSGNN seems to be a bottleneck for large graphs. Though top-k techniques can be applied, it’ll lose spectral frequencies which is essential for down-streaming tasks. When comparing parameter amounts and GFLOPS, it’ll be fair to show the training and pre-computation cost for baselines, transformers and other GNNs.\n\n2-\tIt needs experiments to demonstrate the contribution of ReGa, which is the most novel part in SSGNN. 
Will removing it greatly influence the performance?\n\n3-\tThe presentation can be improved. “. .” appears in line 373, and line 398 contains an incomplete sentence: “* means” what?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Transformer-free spectral GNNs for graph representation learning achieving SOTA performance with significantly fewer parameters and computations." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024ssgnn,\ntitle={{SSGNN}: Simple Yet Effective Spectral Graph Neural Network},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v2nEL42Pvb},\nnote={under review}\n}" }, "abstract": { "value": "Spectral GNNs leverage graph spectral properties to model graph representations but have been less explored due to their computational challenges, especially compared to the more flexible and scalable spatial GNNs, which have seen broader adoption. However, spatial methods cannot fully exploit the rich information in graph spectra. Current Spectral GNNs, relying on fixed-order polynomials, use scalar-to-scalar filters applied uniformly across eigenvalues, failing to capture key spectral shifts and signal propagation dynamics. Though set-to-set filters can capture spectral complexity, methods that employ them frequently rely on Transformers, which add considerable computational burden. Our analysis indicates that applying Transformers to these filters provides minimal advantage in the spectral domain. We demonstrate that effective spectral filtering can be achieved without the need for transformers, offering a more efficient and spectrum-aware alternative. To this end, we propose a $\textit{Simple Yet Effective Spectral Graph Neural Network}$ (SSGNN), which leverages the graph spectrum to adaptively filter using a simplified set-to-set approach that captures key spectral features.
Moreover, we introduce a novel, parameter-free $\\textit{Relative Gaussian Amplifier}$ (ReGA) module, which adaptively learns spectral filtering while maintaining robustness against structural perturbations, ensuring stability. Extensive experiments on 20 real-world graph datasets, spanning both node-level and graph-level tasks along with a synthetic graph dataset, show that SSGNN matches or surpasses the performance of state-of-the-art (SOTA) spectral-based GNNs and graph transformers while using significantly fewer parameters and GFLOPs. Specifically, SSGNN achieves performance comparable to the current SOTA Graph Transformer model, Polynormer, with an average 55x reduction in parameters and 100x reduction in GFLOPs across all datasets. Our code will be made public upon acceptance." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Spectral Graph Neural Networks", "Graph Representation Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/8f99fd24f552579b17bb2a3b3a6ffa5f79f5849e.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "SSGNN: Simple Yet Effective Spectral Graph Neural Network" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v2uPdQDwSz
Query Efficient Nonsmooth Stochastic Black-Box Bilevel Optimization with Bregman Distance
main
Active
zeroth-order gradient;bilevel optimization
optimization
3;3;5;5
4;4;4;3
1;2;3;3
2;2;3;2
2;2;2;3
4
3.75
2.25
2.25
2.25
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The paper could be divided into two parts: Gaussian smoothing and the Bregman distance method for the nonsmooth outer problem, which currently feel loosely connected. Could the authors further clarify why the nonsmooth problem necessitates the use of zeroth-order gradient descent?\n2. The convergence relies heavily on sufficiently large values of $B$ and $B'$. What happens if $B$ and $B'$ are of $\\mathcal{O}(1)$? This might provide a fairer comparison with existing methods.\n3. In lines 283 and 285, it appears that Gaussian smoothing is applied twice, converting the second-order gradient to first and then to zeroth order. This may introduce significant errors; could the authors elaborate on how they mitigate this error?\n\nI would be open to reconsidering my grade if all of my concerns are addressed." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper presents a query-efficient bilevel optimization method tailored for nonsmooth stochastic black-box problems, achieving competitive convergence with lower query complexity and outperforming existing methods in data hyper-cleaning and hyperrepresentation learning tasks." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a solution for nonsmooth bilevel optimization using Bregman distance, with a focus on zeroth-order gradient approximation via Gaussian smoothing." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The assumption of Lipschitz continuity in Assumption 2 appears inconsistent with strong convexity; further clarification is needed on why this assumption holds in the given setting." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.I notice that the property of in the Bregman distance used in the convergence analysis is only the strong-convexity. How does the selection of the Bregman distance function impact the performance of the algorithm? Is there a systematic approach to choosing the optimal function for a given problem? What kind of is used in the numerical experiment?\n\n2.The paper applies Gaussian smoothing approximation techniques to replace the gradient updates. Does the algorithm in this paper have any special structure that supports the feasibility of this technique? Can this technique be adopted by other gradient-based algorithms?\n\nTypo:\n\n1. 
$\\mathcal{G}_t=\\frac{1}{\\alpha} ( x^t-x^{t+1} )$ if you want to get (64), the coefficient of $\\mathcal{B}_{\\Psi_t} ( x, x_t )$ in (63) should be $\\frac{1}{\\alpha}$." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.The algorithm BreZOSBA proposed in this paper combines zeroth-order gradient estimations and Bregman distances to solve the challenging black-box bilevel problem. This idea is both novel and inspiring.\n\n2.BreZOSBA can converge to the stationary point within $\\mathcal{O}\\left(\\frac{d_1\\left(d_1+d_2\\right)^2}{\\epsilon^2}\\right)$ queries, which is outstanding for the zeroth-order gradient method.\n\n3.The experimental results show that the proposed method outperforms several baseline methods, demonstrating its effectiveness in practical applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a gradient-free algorithm, named BreZOSBA, for solving stochastic black-box bilevel optimization problems with nonsmooth upper-level and smooth lower level. To deal with the nonsmoothness and the black-box nature of the problem, the algorithm updates the upper and lower-level variables iteratively using stochastic zeroth-order gradient estimates and a Bregman distance-based proximal term. The theoretical analysis demonstrates the query efficiency of the algorithm, showing that it can achieve a certain accuracy level with a finite number of queries. \nThe paper also presents experimental results on four datasets (MNIST, FashionMNIST, Cifar10, and SVHN) to evaluate the performance of the proposed method. The results show that BreZOSBA outperforms several baseline methods in terms of accuracy and convergence speed." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.The subproblem related to updates of $x$ may be difficult for some Bregman distance, especially when the structure of $h(x)$ is unknown or complex.\n\n2.The theoretical convergence analysis of this method relies on several strong assumptions like strong convexity of the lower-level, which may not be met in most practical applications. This can consequently affect the actual performance of the algorithm.\n\n3.Although Gaussian smoothing can approximate the gradient, the additional errors introduced at each iteration may accumulate. Although it can converge to a solution in expectation, it may still be far from the solution after multiple iterations due to a large variance." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "n/a" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- Theoretical Convergence Guarantees: The convergence analysis looks comprehensive, providing non-asymptotic convergence results that are theoretically solid and highlight the advantage of BreZOSBA in terms of query complexity. 
\n\n- BreZOSBA’s use of a single-loop ZO framework with Bregman distance introduces computational savings compared to double-loop structures." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a query-efficient algorithm, BreZOSBA, designed to solve nonsmooth stochastic black-box bilevel optimization (BO) problems by leveraging Bregman distance and Gaussian smoothing. By adopting a single-loop, zeroth-order (ZO) framework, the authors claim improved query efficiency, theoretically achieving $ O(d_1(d_1 + d_2)^2 \\epsilon^{-2})$ query complexity to reach an $ \\epsilon$ -stationary point. The paper validates BreZOSBA on two small scale applications—data hyper-cleaning and hyper-representation learning—and compares it to two baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The practical applicability of bilevel optimization with both inner and outer levels as black-boxes was largely not motivated by the authors. While some specific bilevel problems can have partial black-box structure, many practical applications in machine learning (such as hyperparameter optimization, meta-learning, etc) involve inner-level variables that represent the parameters of deep neural networks. In such cases, using zeroth-order methods to solve the inner problem would be extremely slow and may not even converge. \n\n- The choice to employ ZO techniques to solve both the inner and outer levels restricts the scalability of the approach, which may explain why the experimental validation was limited to small-scale toy tasks. Thus, the method’s suitability for real-world BO problems remains questionable, especially in deep learning context. \n\n- The experiments are very limited with only two compared baselines (that are not particularly strong for these experiments). 
In fact, HOZOG [1] was introduced especially for hyperparameter optimization (where $x$ denotes the hyperparameters and $y$ the parameters of, possibly, a deep neural network), where it can be a strong baseline. However, the authors used it here in settings for which HOZOG is not the most adequate baseline, which I believe is unfair and undermines the validity of the results. \n\n- The literature review on bilevel optimization is sparse and does not adequately cover recent, relevant works. For instance, PZOBO [2], another Hessian-free ZO method, is neither discussed nor compared against, despite its direct relevance. \nThe paper also lacks a discussion on extensions or limitations of the proposed approach in the broader context of BO, such as bilevel problems without the lower-level singleton constraint. This would be essential to clarify where the proposed method fits within the current landscape of BO approaches and where it may fall short or require adaptation. \n\nOther minor issues: \n\n- The inner objective $g$ is smooth, and this should be explicitly stated after the problem definition, just like the authors did for the outer objective function $f$. \n\n- In the text, the authors keep mentioning that using ZO gradients for the inner-level problem is not the most efficient choice, but their algorithm does exactly that (Equation 14). \n\nReferences: \n\n[1] Bin Gu, Guodong Liu, Yanfu Zhang, Xiang Geng, and Heng Huang. Optimizing large-scale hyperparameters via automated learning algorithm. arXiv preprint arXiv:2102.09026, 2021. \n\n[2] D. Sow, K. Ji, and Y. Liang. On the convergence theory for Hessian-free bilevel algorithms. Advances in Neural Information Processing Systems (NeurIPS), 2022."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "What is the benefit of using the Bregman distance in the proposed algorithm?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper chooses a good perspective from which to study BLO methods, where there are still some gaps in the complexity theory.\n\n2. The paper improves the theoretical complexity bound to an almost optimal one." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper considers nonsmooth black-box bilevel optimization. The lower-level problem is assumed to be strongly convex with Lipschitz continuous gradients and Hessians, and the upper-level function can be nonconvex. Under this setting, the paper improves the query complexity of the zeroth-order method for finding an approximately stationary point of BLO. Numerical experiments are conducted to show the competitive performance of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The strong convexity assumption has been extensively studied for BLO.
The authors should emphasize how the techniques they use to establish the complexity bound for black-box BLO differ from existing ones.\n\n2. The stationarity measure $\\|G^t\\|$ lacks explanation. It is important for the authors to justify that this measure is equivalent to those used by the compared methods." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a query efficient method for nonsmooth stochastic black-box bilevel optimization." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024query,\ntitle={Query Efficient Nonsmooth Stochastic Black-Box Bilevel Optimization with Bregman Distance},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v2uPdQDwSz},\nnote={under review}\n}" }, "abstract": { "value": "Bilevel optimization (BO) has recently gained significant attention in various machine learning applications due to its ability to model the hierarchical structures inherent in these problems. Several gradient-free methods have been proposed to address stochastic black-box bilevel optimization problems, where the gradients of both the upper and lower-level objective functions are unavailable. However, these methods suffer from high query complexity and do not accommodate more general bilevel problems involving nonsmooth regularization. In this paper, we present a query-efficient method that effectively leverages Bregman distance to solve nonsmooth stochastic black-box bilevel optimization problems. More importantly, we provide a non-asymptotic convergence analysis, showing that our method requires only $\\mathcal{O}({d_1(d_1+d_2)^2}{\\epsilon^{-2}})$ queries to reach the $\\epsilon$-stationary point. Additionally, we conduct experiments on data hyper-cleaning and hyper-representation learning tasks, demonstrating that our algorithms outperform existing bilevel optimization methods."
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "zeroth-order gradient", "bilevel optimization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9b61c2968b66a5459900496a3c077fb9c6b49c9c.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/ea98fd95487c994f55cfdff499f80857424942b0.zip" }, "title": { "value": "Query Efficient Nonsmooth Stochastic Black-Box Bilevel Optimization with Bregman Distance" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v2zcCDYMok
PostCast: Generalizable Postprocessing for Precipitation Nowcasting via Unsupervised Blurriness Modeling
main
Active
AI for Science; Precipitation Nowcasting; Diffusion Model; Zero-shot Blurriness Kernel; Auto-scale Denoise Guidance
applications to physical sciences (physics, chemistry, biology, etc.)
3;3;6
4;3;3
2;2;3
2;2;3
2;2;3
4
3.333333
2.333333
2.333333
2.333333
-0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The questions here are related to the three points described as weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper proposes a novel approach to estimating blurriness by utilizing the gradient of the metric with respect to the kernel parameters, which serves as a guide for the sampling of x_{t-1}.\n2. The paper introduces an auto-scale gradient guidance strategy that automatically calculates the guidance scale corresponding to different forecast time periods, models, and datasets. This strategy enables the model to adaptively denoise the prediction results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a universal post-processing method, PostCast, designed for precipitation nowcasting. This method denoises the predictions of convolution-based models using an unconditional DDPM through two key innovations: a zero-shot blur kernel estimation mechanism and an auto-scale gradient guidance strategy. The model was trained on a combination of five different datasets and prediction results generated by different models, and outperforms other conditional denoising models in terms of CSI metrics."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. CasCast and DiffCast are both designed for **predicting precipitation**, not for **postprocessing tasks**. It is crucial for the authors to compare with other generative models such as GANs. This would provide a more comprehensive evaluation of PostCast's performance against a wider range of methodologies.\n2. The authors have compared PostCast with CasCast and DiffCast only on **out-of-distribution datasets** but did not compare them on HKO7, SEVIR, etc.\n3. Relying on CSI as the only metric may not sufficiently judge the quality of precipitation prediction. The average intensity of the precipitation predictions can significantly impact CSI. For instance, using simple histogram matching on predictions can also significantly improve CSI. The authors should clarify whether the CSI improvement is due to **increased intensity** from the model or other factors to provide a more accurate assessment of the model's performance." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. **Evaluation Metrics**: Could you add results for POD, FAR, and image quality metrics like PSNR and SSIM in the appendix to provide a more comprehensive evaluation of PostCast?\n\n2.
**Related Work**: Could you expand the related work section to discuss GAN and Transformer methods in precipitation nowcasting to better contextualize the background and contributions of PostCast?\n\n3. **Comparison with GAN Methods**: Could you include comparisons with GAN-based methods in precipitation nowcasting to further validate PostCast's effectiveness?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. **Originality**: The primary innovation of PostCast lies in employing an unconditional diffusion model (DDPM) to remove blurriness from precipitation predictions without relying on paired data. By introducing zero-shot blur kernel estimation and an auto-scale denoise guidance strategy, PostCast adapts to various datasets, prediction lead times, and blur modes across different models. This unsupervised deblurring approach appears to be the first of its kind in the domain of precipitation nowcasting, demonstrating creative thinking.\n\n2. **Completeness of Experiments**: The paper provides a well-designed experimental setup, covering multiple datasets and forecast models to showcase the generalizability and robustness of PostCast. The experiments include diverse evaluation metrics and extensive visual results, effectively validating the method’s performance in extreme precipitation event prediction." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a novel postprocessing approach for precipitation nowcasting, named PostCast. The core innovation of PostCast lies in utilizing an unconditional diffusion model (DDPM) to remove blurriness from precipitation predictions without requiring paired data of real observations and blurry predictions. 
The method introduces a zero-shot blur kernel estimation mechanism and an auto-scale denoise guidance strategy, enabling the model to adapt to various datasets, prediction lead times, and blur modes to generate sharper predictions. The authors conducted experiments on multiple precipitation datasets and forecast models, demonstrating PostCast’s effectiveness in improving prediction accuracy for extreme precipitation events. The paper proposes a highly generalizable and adaptive postprocessing framework that shows strong potential for broad applications in precipitation nowcasting tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Incomplete Evaluation Metrics**: The paper relies only on CSI, lacking POD, FAR, and similar precipitation metrics, as well as image quality metrics like PSNR and SSIM. Including these in the appendix would strengthen the evaluation’s comprehensiveness.\n\n2. **Insufficient Related Work Discussion**: The discussion of GAN methods in precipitation nowcasting is limited, and the introduction to Transformer-based approaches is also lacking. Expanding these sections would better contextualize PostCast’s contributions.\n\n3. **Lack of Comparison with GAN Methods**: There is no direct comparison with GAN-based approaches in precipitation nowcasting. Adding such comparisons would provide a more complete assessment of PostCast’s performance." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The current metric/score mainly evaluates whether the small-scale features are recovered and extreme events are captured well after deblurring. However, isn’t it also important to assess how closely the nowcasting outputs, after deblurring with this methodology, match the ground truth, generally? If we don’t threshold the pixel values in this case and compute an error metric such as MSE and PSNR, that could also be informative. The authors can consider adding these scores." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The problem is well-motivated. An unsupervised method alleviates the need for labels and the model is no longer limited to the blur modes seen during training. Given that this framework is independent of the spatiotemporal prediction model and can be added on top of any prediction model, it makes this work very useful and widely applicable. \n- The results strongly demonstrate the ability of their proposed method. The experiments are extensive and prove PostCast’s effectiveness in recovering weather patterns across multiple prediction models, datasets, and lead times." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes an unsupervised postprocessing method for precipitation nowcasting outputs to remove blurriness from these predictions, especially at longer lead times. This can be important to obtain accurate extreme precipitation events. The blurriness is eliminated with an unconditioned denoising diffusion probabilistic model guided by blurry predictions. 
The authors introduce a zero-shot kernel estimation method and an auto-scale denoise guidance strategy and show that their proposed framework generalizes to many blurriness modes across different datasets and varying lead times." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The \"method\" section could be better organized; creating subsections could help. Moreover, the “zero-shot kernel estimation mechanism” and “auto-scale denoise guidance strategy” should be explained in more detail, as these are the main contributions this paper makes. \n- The authors should justify using \"CSI\" (Critical Success Index) scores as the only evaluation metric. Adding another metric such as “Recall” (True positive rate) can complement the current evaluation and make it more comprehensive." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024postcast,\ntitle={PostCast: Generalizable Postprocessing for Precipitation Nowcasting via Unsupervised Blurriness Modeling},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v2zcCDYMok},\nnote={under review}\n}" }, "abstract": { "value": "Precipitation nowcasting plays a pivotal role in socioeconomic sectors, especially in severe convective weather warnings. Although notable progress has been achieved by approaches mining the spatiotemporal correlations with deep learning, these methods still suffer severe blurriness as the lead time increases, which hampers accurate predictions for extreme precipitation. To alleviate blurriness, researchers explore generative methods conditioned on blurry predictions.
However, the pairs of blurry predictions and corresponding ground truth need to be given in advance, making the training pipeline cumbersome and limiting the generality of generative models within blurry modes that appear in training data. By rethinking the blurriness in precipitation nowcasting as a blur kernel acting on predictions, we propose an unsupervised postprocessing method to eliminate the blurriness without the requirement of training with the pairs of blurry predictions and corresponding ground truth. Specifically, we utilize blurry predictions to guide the generation process of a pre-trained unconditional denoising diffusion probabilistic model (DDPM) to obtain high-fidelity predictions with eliminated blurriness. A zero-shot blur kernel estimation mechanism and an auto-scale denoise guidance strategy are introduced to adapt the unconditional DDPM to any blurriness modes varying from datasets and lead times in precipitation nowcasting. Extensive experiments are conducted on 7 precipitation radar datasets, demonstrating the generality and superiority of our method." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "AI for Science; Precipitation Nowcasting; Diffusion Model; Zero-shot Blurriness Kernel; Auto-scale Denoise Guidance" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/4626fa65af0515195be96e1ad39b36336cfaa023.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "PostCast: Generalizable Postprocessing for Precipitation Nowcasting via Unsupervised Blurriness Modeling" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v3DwQlyGbv
Paramanu-Ganita: An Efficient Pre-trained Generative Mathematics Language Model with Chain-of-Thought Instruction Fine-Tuning
main
Active
reasoning;language models;pretraining;CoT fine-tuning;AI4Math
foundation or frontier models, including LLMs
1;3;3
5;5;4
1;2;3
1;1;2
1;2;2
2.333333
4.666667
2
1.333333
1.666667
-0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Some minor and major questions:\n1. Abstract: Concrete examples would be better, such as which models it beat despite being smaller, etc.\n2. L196: Please give examples of what happens for various ways of writing floats. How are European and US/UK number formats treated, e.g., 1,43 vs. 1.43? What about mixed numbers and digits, and other mathematical symbols? To me, it's not so clear from the writing.\n3. L210: The architecture description seems incomplete, given that it is a full section. You have mentioned decoders elsewhere, but you should complete this here, mentioning how many decoder layers (or ranges) and how many dense layers there are, and adding some block diagrams referenced from the section. This is supposed to be the most important section.\n4. L249: What's the perplexity of other models, especially mathematics-specialist or science-specialist ones? Can you show a table comparing them? Otherwise, the standalone numbers do not make sense to me." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well-written. I appreciate the background and the description of how the model is trained. The idea of targeting mathematics is important, and building LLMs specializing in math (at least some part of it) is important.
\n\nThe dataset is an important contribution, however I am not sure whether the authors plan to make it public." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Authors present a LLM specializing in mathematics, called Paramanu-Ganita. It is quite smaller in size and exhibits interesting performance benefits in several math and logical datasets, compared some LLMs with bigger size. Authors also trained tokenizers from scratch, curate a new dataset for pretraining and show that Paramanu-Ganita outperforms several general-purpose LLMs and some domain-specialist LLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I feel the paper explores an interesting direction, but there are some concerns:\n\n1. Firstly, GSM8k tests basic math word problem skills and given the model's GSM8k performance is pretty poor, I do not feel the model is ready yet. I think more experimentation is required. Also, how are Table 2 values computed? It seems the MetaMath paper reports GSM8K performance to be 82.3. Why is it 66.5 here? [1]\n\n2. What is mostly missing from the paper are proper motivations and justification as to what \"contributes\" or what is expected to contribute to the \"improved\" performance? \n\n - Looking at this from a different point of view, why did the authors not start with MetaMath, then say change the tokenizers or change the dataset? Then, slowly demonstrate how all the innovations are truly necessary. At the least such ablations would have showed the necessity of new models. \n\n - Secondly, given the model's performance is not so great, what are we gaining by spending so much training time and cost?\n\n3. One more important aspect is, what are the domains that the model targets? What are the grade levels? Is it the expectation that we will also do IMO problems starting from GSM8k? Or, are we targeting sub-disciplines algebra, pre-algebra, calculus etc.? 
I think this depth is also missing, as are related papers that investigate the need for such models [2].\n\n[1] https://openreview.net/forum?id=N8N0hgNDRt\n[2] MATHSENSEI: A Tool-Augmented Large Language Model for Mathematical Reasoning" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "See weaknesses" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- Good empirical results despite smaller model size, demonstrating the effectiveness of their approach\n- Demonstrates that smaller, more efficient models can achieve good mathematical reasoning performance" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces PARAMANU-GANITA, a 208 million parameter mathematics-focused language model trained from scratch. The authors demonstrate that effective mathematical reasoning capabilities can be achieved with smaller, more efficient models when trained specifically for the domain. This approach offers significant advantages in terms of computational costs and environmental impact while maintaining good performance."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The overall presentation of the paper still needs much improvement. The paper is not in ready-to-review or ready-to-submit status. The figures are pretty rough, and it is unclear what the authors want to express with them. For example, Figure 2 shows GPU Power Usage during pretraining of Paramanu-Ganita. But what conclusions do the authors want to draw here? How does it illustrate the environmentally friendly nature of the model? In Figure 1, what does the blue line mean?\n- Limited ablation studies. The paper doesn't analyze the relative importance of different components of their training data (web text vs. code vs. lecture notes). It is unclear why the authors want to utilize these data sources and why the data mixture should be adopted as it is in the paper.\n- Contamination issues. The model achieves good performance on GSM8K and MATH with 200M parameters. It is unclear whether there is a data contamination issue." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Address the weaknesses of the paper mentioned above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.
A novel decoder model that is 34 times smaller than existing LLMs and can outperform them by a huge margin\n2. A detailed explanation of the training process required\n3. Detailed benchmarking on GSM8K, MATH, and other datasets.\n4. Emphasis on the training time required, with comparisons to other existing LLMs, demonstrating the computational and environmental benefits of training a dedicated tiny model from scratch." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors present a small decoder-based language model on mathematics called Paramanu-Ganita. They trained this model from scratch using the existing public mathematical corpus and also performed CoT instruction fine-tuning on top of it. They also train their own tokenizer specialised in math and code. Despite their model having only 208 million parameters, it outperforms general LLMs by approximately 30% points, and even math-specialised LLMs by 3-23% points on GSM8K and by 6-8% points on MATH. The 208 million parameter model outperformed LLaMA-1 (33B, 13B, 7B), LLaMA-2 (7B, 13B), Falcon (40B, 7B), PaLM (62B, 8B), MPT (30B, 7B), Vicuna 13B, and math-specialised LLMs like Minerva 8B and LLEMMA-7B on the GSM8K, MATH, and AGIEVAL-AQuA-RAT benchmarks. They also showed the reduced time and computation required to train this model as compared to existing LLMs."
These are some datasets that were released after the training cutoff for some models, ensuring they are not part of their training data. These datasets are also much more difficult compared to GSM8K. This will ensure that the proposed model is robust in solving difficult problems that it hasn't seen before.\n3. Will the checkpoint-filtered corpus used for training be publicly available?\n4. How does the model perform on out-of-distribution data points? This can be checked by first doing a sanity check for data memorization/contamination [1]. Performing the simple Algorithms 1 and 2 from the paper will ensure that the model has not seen the evaluation dataset, making the results more robust.\n5. An empirical analysis is missing from the paper: a thorough qualitative comparison of reasoning chains produced by Paramanu-Ganita versus other models on a few representative problems from the benchmark datasets. For example, what errors are made by existing LLMs vs. Paramanu-Ganita, and in which areas does it improve?\n\nReferences\n[1] Golchin, Shahriar, and Mihai Surdeanu. \"Time travel in LLMs: Tracing data contamination in large language models.\" arXiv preprint arXiv:2308.08493 (2023).
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "An efficient language model for mathematics, which is pretrained from scratch on a custom corpus of 31.5 billion tokens; despite having only 208 million parameters, it outperforms several large and very large LLMs on standard benchmarks" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024paramanuganita,\ntitle={Paramanu-Ganita: An Efficient Pre-trained Generative Mathematics Language Model with Chain-of-Thought Instruction Fine-Tuning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v3DwQlyGbv},\nnote={under review}\n}" }, "abstract": { "value": "In this paper, we present PARAMANU-GANITA, a 208 million-parameter novel Auto-Regressive (AR) decoder-based language model on mathematics. We performed pretraining from scratch on 31.5 billion tokens using a context size of 4096 on a mixed mathematical corpus consisting of mathematical web pages, mathematics-related source code such as AlgebraStack, mathematical textbooks, Chain-of-Thought (CoT) templatised mathematical StackOverflow question-answer pairs, and mathematical lecture notes in LaTeX curated by us. We also trained a math and code specialised BPE tokenizer. We proposed and performed Chain-of-Thought instruction fine-tuning of Paramanu-Ganita on the MetaMathQA dataset. We evaluate our model on the GSM8K and MATH mathematical benchmarks, and on logical deductive reasoning (LogiQA) and multiple-choice high school and college level math questions from SAT (AGIEVAL-SAT-Math), GRE/GMAT questions (AGIEVAL-AQuA-RAT), and college and high school level math questions from MMLU. Our model Paramanu-Ganita, despite being 34 times smaller than the 7B LLMs, outperforms general LLMs by approximately 30% points, and even math-specialised LLMs by 3-23% points in GSM8K test accuracy metric.
On the MATH benchmark, Paramanu-Ganita outperformed the various models by 6-8% points. On other benchmarks such as the LogiQA logical deductive reasoning benchmark, mathematical high school level multi-choice questions (MMLU-math-high-school), GRE-GMAT level quantitative questions (AGIEVAL-AQuA-RAT), and SAT level math questions, Paramanu-Ganita was better than the others by about 1-4% points. The large, significant margin of improvement of our math model over the existing LLMs signifies that the reasoning capabilities of language models are not restricted to those with a humongous number of parameters. Paramanu-Ganita took only 170 hours of A100 training, whereas large LLMs such as the math-specialised LLM LLEMMA 7B were trained for 23,000 A100-equivalent hours. Thus, our approach of pretraining powerful domain-specialised language models from scratch for domain adaptation is much more cost-effective and environmentally friendly than performing continual training of LLMs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "reasoning", "language models", "pretraining", "CoT fine-tuning", "AI4Math" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review."
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/28346be59c5d9b389bc43395e3522fd1ca72b143.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Paramanu-Ganita: An Efficient Pre-trained Generative Mathematics Language Model with Chain-of-Thought Instruction Fine-Tuning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v3W9tdTGx5
Improving Group Connectivity for Generalization of Federated Deep Learning
main
Active
Deep learning;federated learning;generalization
transfer learning, meta learning, and lifelong learning
3;3;6
3;4;3
2;2;4
2;2;3
2;3;3
4
3.333333
2.666667
2.333333
2.666667
-0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The definition of the connectivity loss in Eq. (7) requires an integration over $\\alpha$ which I believe is not possible to implement practically. The authors should comment on how they approximate this loss in practice.\n\n2. What is the neural network used in Figure 2?\n\n3. 
Can we extend the definition of group connectivity to work with general weights instead of fixing the weights as $1/K$ for each of the group models?\n\n4. For overparameterized models (every global optimum is also a local optimum), it appears that the value of $\\Gamma$ in Definition 3.7 would be zero, implying no effect of heterogeneity on the bound in Theorem 3.8. Can the authors comment on this?\n\n**Typos**\n1. There appears to be a typo in Line 95 in the statement \"refer to each client's distribution $D_i$\".\n2. It should be $D^t$ and not $D$ in Line 113." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is well-structured and easy to read and intuitively the idea of using anchor models to improve LMC cross the client local models makes sense.\n\n2. Experimental results look promising and show that the proposed FedGuCci and FedGuCci+ can outperform vanilla FedAvg and other baselines across a wide range of settings.\n\n3. Ablation studies are provided showing that the proposed algorithms are generally robust and require lower computation cost to reach a target accuracy compared to FedAvg and other baselines." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposed FedGuCci, an algorithm to improve merging of local client models in federated learning by ensuring that the client models are in the same loss basin, motivated by findings in the linear mode connectivity (LMC) literature. To do so, authors propose to add a connectivity loss to the standard client objective which tries to ensure that the client's local model is mode connected to an anchor model. The authors theoretically motivate their approach by proving the transitivity property of LMC when merging two layer neural networks. 
This is followed by experiments on practical FL training tasks, which show that the proposed FedGuCci and its extension FedGuCci+ can outperform FedAvg and other baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Lemma 3.3 statement and implication**\n\n* Firstly, I don't understand what is random about $D_\\\\epsilon(w_{anc}^*)$ defined in Lemma 3.3. If $w_{anc}^*$ is a deterministic anchor model (as defined later in Theorem 3.5 and Theorem 3.8), then $D_\\\\epsilon(w_{anc}^*)$ is also deterministic, so the bound on the probability of $D_\\\\epsilon(w_{anc}^*)$ does not make sense to me. Intuitively, I believe Lemma 3.3 is trying to bound the probability of LMC between a deterministic anchor model $w_{anc}^*$ and a random model $w$ such that $||w-w_{anc}^*|| \\leq d/2$. So the probability here is over some uniform distribution of $w$ and not $w_{anc}^*$. If so, Lemma 3.3 should be re-written to clarify this.\n\n* There is no understanding of how large $d_{\\epsilon}$ can be. For instance, if $d_{\\epsilon} \\geq d$ then the probability bound is just vacuous. Therefore, the authors need to either assume with some justification or prove that $d_\\epsilon < d$ for the bound to make sense.\n\n2. **Theorem 3.5 and 3.8 statement and implications.**\n\n* It is not clear to me why we need randomness in these theorem statements. Suppose we are given three models $w_{anc}, w_1, w_2$ such that i) $w_{anc}$ and $w_1$ are LMC, ii) $w_{anc}$ and $w_2$ are LMC, and iii) $||w_{anc} - w_1|| \\leq d$ and $||w_{anc} - w_2|| \\leq d$. Why can't we use just these assumptions to bound the loss barrier between $w_1$ and $w_2$? In other words, why do we need the additional assumption that $w_1$ and $w_2$ are sampled from the uniform distribution?\n\n* Following up on my point in Weakness 1), the authors need to show that $d_\\epsilon$ is bounded for the bounds in Eq. (6) and Eq. (10) to not be vacuous.
In addition, the authors also need to show that $\delta > 0$, i.e., that there is a non-zero probability that $w_1$ and $w_2$ are sampled from the uniform distribution and are also LMC with $w_{anc}$.\n\n* Currently I don't see a dependence on $d$ in the bounds in Eq. (6) and Eq. (10), which is a bit surprising to me since $d$ is defined in the Theorem statement.\n\n3. **Inconsistent experimental results and performance of baselines**\n\n* It appears to me that the authors have either not implemented FedProx correctly or have not tuned the regularization parameter $\mu$ in FedProx correctly. If we set $\mu \rightarrow 0$ then FedProx just becomes the same as FedAvg, so the performance of FedProx should be at least as good as FedAvg's. Therefore it is surprising to see the consistently poor performance of FedProx across all the experiments.\n\n* In Table 2, for the Tiny ImageNet column with Non-IID hyper. being 100, the performance of SCAFFOLD seems inconsistent with previous results.\n\n* In Table 3, for the STS-B column, the performance of FedDyn seems inconsistent with previous results.\n\n* The authors should provide information on how they tuned hyperparameters for the experiments and also provide graphs showing test accuracy vs. number of rounds for all the baselines. Currently, only the final accuracy numbers are reported for each of the experiments.\n\n* Why are FedDyn and SCAFFOLD not compared against in Table 4 and Table 5?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Q1: In Section 2.3, the way your method leverages the global model is different from FedProx, but it’s unclear what specific advantages your approach offers. While your method integrates the connectivity loss and replays N historical global models, FedProx only uses the current global model for regularization. A more detailed explanation of why your method is superior would be helpful.\n\nQ2: The proposed algorithms do not have theoretical convergence guarantees. Many of the methods you compare against do provide convergence analyses, and including such an analysis would be essential for the theoretical foundation of your work.\n\nQ3: When discussing group connectivity with varying K, the range of client numbers is too small to reflect practical settings. The left plot in Figure 4 may not support the authors’ conclusion that “the increase of barriers may converge to a point lower than vanilla training”. If the transitivity of group connectivity is found to weaken for larger K, an interpretation of the reason should be provided.\n\nQ4: If FedGuCci+ is designed to enhance FedGuCci by incorporating additional techniques, it is important to evaluate the individual contribution of each technique to the overall improvement, e.g., by conducting ablation studies." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper adopts a connectivity perspective, using LMC to improve the connectivity among the local models. This could serve as a novel tool in the analysis of FL."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "FedGuCci(+) enhances federated learning generalization by improving group connectivity, inspired by linear mode connectivity, to better fuse local models and achieve stronger performance across diverse tasks and architectures." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1. The comparison with prior work is limited.\n\nW2. The proposed algorithms, including FedGuCci, do not have theoretical convergence guarantees.\n\nW3. The comparisons and conclusions may not be supported by sufficient evidence.\n\nW4. The evaluation needs to be enhanced." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\tCan the authors provide insights into storage challenges in very large-scale scenarios? Furthermore, I would be happy to see 1-2 experiments on large numbers of clients (M > 500 or M > 1000) if possible.\n2.\tTheoretically, can the authors intuitively explain how the model behaves and what challenges arise when applied to models like GPT-3? Is there a way to extend this to such large models in the near future?\n3.\tCould the authors elaborate more on how the method would behave in low-bandwidth, high-latency federated learning environments? Would there be trade-offs in performance?\n4.\tPlease respond to the weaknesses above."
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1)\tThe paper provides a creative adaptation of LMC to FL, which is an underutilized idea in the FL domain.\n2)\tWell-written paper that is easy to follow.\n3)\tSound theoretical foundations with supporting proofs that enhance the credibility." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents an interesting concept in FL that is often underexplored. Specifically, this paper leverages the concept of group connectivity, drawn from linear mode connectivity (LMC), to better fuse local models in parameter regions into a global generalized model, which could come with many benefits. The issue of model drift and heterogeneity seems to be tackled well. The authors validate the methods on vision and NLP datasets. Overall, the idea looks interesting." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I am particularly concerned with the communication costs and computational overhead of the method when applied to large-scale FL scenarios. Below are a few weaknesses:\n\n1) Regarding the communication costs, I agree that there are no additional costs compared to FedAvg, but this remains an issue. There have been many advancements in communication efficiency over FedAvg in recent years. To mention a few recent works, DisPFL [1] and SSFL [2] have demonstrated better performance than FedAvg, even at very high sparsity levels.
I believe the baselines are insufficient and suggest comparing the method with at least these sparse baselines on a few vision datasets to evaluate whether the performance gains justify the communication cost.\n\n2) The authors did not consider incorporating techniques to prune or compress unimportant weights, which could reduce communication overhead without sacrificing performance. While the core idea of the paper is appreciated, there is room for optimization and further improvements in terms of communication efficiency.\n\n3) Although the idea is novel, it requires a lot of computation and could pose practical challenges in large-scale FL applications. The authors should consider addressing this in future work so that this method can be practically applied.\n\nReferences:\n\n[1] https://doi.org/10.48550/arXiv.2206.00187\n\n[2] https://doi.org/10.48550/arXiv.2405.09037" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@misc{\nli2024improving,\ntitle={Improving Group Connectivity for Generalization of Federated Deep Learning},\nauthor={Zexi Li and Jie Lin and Zhiqi Li and Didi Zhu and Tao Shen and Tao Lin and Chao Wu},\nyear={2024},\nurl={https://openreview.net/forum?id=v3W9tdTGx5}\n}" }, "abstract": { "value": "Federated learning (FL) involves multiple heterogeneous clients collaboratively training a global model via iterative local updates and model fusion. The generalization of FL's global model has a large gap compared with centralized training, which is its bottleneck for broader applications. In this paper, we study and improve FL's generalization through a fundamental \"connectivity\" perspective, which means how the local models are connected in the parameter region and fused into a generalized global model. The term \"connectivity\" is derived from linear mode connectivity (LMC), studying the interpolated loss landscape of two different solutions (e.g., modes) of neural networks.
Bridging the gap between LMC and FL, in this paper, we leverage fixed anchor models to empirically and theoretically study the transitivity property of connectivity from two models (LMC) to a group of models (model fusion in FL). Based on the findings, we propose FedGuCci(+), improving group connectivity for better generalization. It is shown that our methods can boost the generalization of FL under client heterogeneity across various tasks (4 CV datasets and 6 NLP datasets) and model architectures (e.g., ViTs and PLMs)." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Zexi_Li1", "~Jie_Lin7", "~Zhiqi_Li4", "~Didi_Zhu1", "~Tao_Shen4", "~Tao_Lin1", "~Chao_Wu1" ] }, "authors": { "value": [ "Zexi Li", "Jie Lin", "Zhiqi Li", "Didi Zhu", "Tao Shen", "Tao Lin", "Chao Wu" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Deep learning", "federated learning", "generalization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": { "value": "li|improving_group_connectivity_for_generalization_of_federated_deep_learning" }, "pdf": { "value": "/pdf/e1e45b4f520c317aa6d6dc06e9187fd3efe19db9.pdf" }, "presentation": null, "primary_area": { "value": "transfer learning, meta learning, and lifelong learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/33a2b677607abdfeed4fb37ef180a282d6b3e487.zip" }, "title": { "value": "Improving Group Connectivity for Generalization of Federated Deep Learning" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v3XabZsB7j
CNN Variational autoencoders' reconstruction ability of long ECG signals
main
Active
VAE;CNN;electrocardiogram;reconstruction;compression;interpretability
interpretability and explainable AI
1;1;3;3
5;4;4;4
2;1;2;2
2;1;2;1
2;1;2;2
2
4.25
1.75
1.5
1.75
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The paper's application area is an important problem of long-sequence reconstruction, which could enable the capture of essential clinical information in the latent space." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper aims to build a VAE-based model for long ECG segments and considers a benchmark ECG dataset for the analysis." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper's presentation is poor and does not meet the standards of ICLR or other relevant AI/ML venues. The authors are advised to proofread the paper carefully, as there are numerous grammatical errors, including missing commas, throughout the text, including in the abstract. Many of these errors could be easily fixed by using available grammar checkers, suggesting that the manuscript may not yet be ready for submission.\n2. The proposed method lacks novelty, as it primarily focuses on splitting signals in the input space.\n3. The results are not well-presented, making it unclear what the main contributions of this work are." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "# Suggestions\n\n- Lack of Quantitative Metrics for Reconstruction Quality: --> Idea: Including quantitative reconstruction metrics (e.g., Mean Squared Error, Structural Similarity Index) would make it easier to assess the improvement over standard VAE architectures objectively.\n- Suboptimal Sleep Stage Classification Results: --> Idea: Exploring methods to enhance inter-segment information sharing could improve performance. Incorporating recurrent layers (RNN, LSTM) in the Parameterizer module could capture temporal dependencies between ECG segments, potentially boosting classification accuracy.\n- Over-Reliance on Folding Technique Without Comparison to Alternative Approaches: --> Idea: Including a baseline comparison with alternative architectures for long-sequence encoding would provide a clearer picture of the folding method’s relative effectiveness." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Comprehensive Experimentation: The paper includes experiments with multiple datasets (MESA and MIT-BIH), demonstrating the generalizability of the proposed architecture across different signal sources and classification contexts.\n- Practical Applications: The application of this model to sleep stage classification is valuable and opens doors for future research on ECG-based sleep monitoring systems, which has relevance in healthcare." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a folded VAE architecture designed to improve the reconstruction quality of long ECG segments, aiming to retain interpretable features for downstream tasks like sleep stage classification. The authors present a CNN-based VAE architecture that encodes ECG signals by folding long segments into smaller sub-segments and processing them through a shared backbone encoder and decoder. The manuscript explores this architecture's reconstruction quality, highlights the challenges of using VAEs for long physiological signals, and evaluates the model's performance on two ECG datasets. Additionally, the folded-VAE framework's latent space representation is leveraged for sleep stage classification." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Lack of Quantitative Metrics for Reconstruction Quality: The reconstruction results rely on visual analysis without quantitative metrics, which may lead to subjective conclusions.\n- Suboptimal Sleep Stage Classification Results: The model’s classification accuracy (mean of 65% across subjects) is lower than other baseline methods.\n- Over-Reliance on Folding Technique Without Comparison to Alternative Approaches: While folding offers improved reconstruction, the authors do not compare this method against alternative architectures (e.g., hierarchical or multi-scale VAEs, or LSTM-based methods) that could also encode long signals effectively. \n- Clarity and Language: Some sections, particularly in the methodology, could benefit from improved clarity. For instance, equations used to describe the encoding and decoding of ECG segments could be elaborated on to better illustrate the folding approach." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. To more clearly demonstrate the advantages of the presented method, please explicitly compare the folding approach to a sliding window baseline, both conceptually and empirically. \n\n2. Please clarify the questions raised in bullet 2 in the weakness section. 
Furthermore, please provide a step-by-step explanation of how the folding and merging operations are implemented, including the mathematical justification for each step.\n\n3. Please provide a diagram or explicit description of how the VAE from section 2.4 is integrated into the overall architecture described in section 2.8. This would help clarify the relationship between these components." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The problem of reconstructing long ECG sequence is interesting.\n\n2. The experiments were conducted on 2 datasets and consider not only reconstruction but also classification tasks for performance evaluation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a VAE for reconstructing long (10-30s) ECG sequences. This is achieved with a so-called folding scheme to process short sequences during the encoding and then combine the folds before a decoder mirroring a similar architecture. Experiments are conducted on sleep ECG from two different datasets, and the benefit of the approach is investigated in both reconstruction and its effect on downstream sleep classification." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The premise of the study can be made more clear. Currently It is not clear what is the benefit of the presented method against simply using a sliding window on a long ECG sequence by reconstructing short sequences across the windows\n\n2. Some of the technical components can be better described. For instance, it is not clear what the summation means in equations (1-2) — does it really mean summation by averaging over the representation obtained from different folds? If yes, why? 
Similarly, it was not well justified why we can merge 30 8x4 features into an 8x120 feature map.\n\n3. The relation between the VAE described in Section 2.4 and the specific model described in Section 2.8 is not clear.\n\n4. There were no baselines or comparative studies comparing the presented method with existing approaches to this problem, especially simply learning to reconstruct short sequences and applying the model to long sequences with a sliding window." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "see above" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The question of whether variational auto-encoders can generate a flexible continuous latent space for long electrocardiogram (ECG) segments and reconstruct the input is interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work explores the use of a folded Variational Autoencoder (VAE) architecture to reconstruct long electrocardiogram (ECG) signals. A folded VAE architecture is proposed to address the limitations of traditional VAEs in handling long ECG sequences. 
The proposed method involves splitting long ECG segments into smaller folds, processing them sequentially, and concatenating them for reconstruction, within a VAE framework. The authors evaluate the proposed method's performance in a sleep stage classification task using two datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-\tA related work section is missing completely. Can the authors comment on what work is out there that investigated similar problems to theirs? How do, for example, the following works relate to the work proposed here (just to name a few):\n1. Comparison of Autoencoder Encodings for ECG Representation in Downstream Prediction Tasks. Christopher J. Harvey, Sumaiya Shomaji, Zijun Yao, Amit Noheria, 2024\n2. Multi-Domain Variational Autoencoders for Combined Modeling of MRI-Based Biventricular Anatomy and ECG-Based Cardiac Electrophysiology. Marcel Beetz, Abhirup Banerjee and Vicente Grau, Frontiers in Physiology, 2022\n3. Joint optimization of a β-VAE for ECG task-specific feature extraction. Viktor van der Valk, Douwe Atsma, Roderick Scherptong, and Marius Staring, arXiv 2023\n4. Feasibility of ECG Reconstruction From Minimal Lead Sets Using Convolutional Neural Networks. Maksymilian Matyschik, Henry Mauranen, Pietro Bonizzi, Joël Karel, IEEE 2020 Computing in Cardiology\n\n\nAs no related work is mentioned, no baselines or comparisons are performed either. I think the simplest comparison would be to take the vanilla VAE, apply it to short sequences, and then concatenate. How does this compare to the proposed folded VAE?\n\nFurther, the presentation of results is very poor. There are long vectors of numbers put into the text. Please create, e.g., tables that show what method you use and what the result is for different architectures or modifications, and display this in a structured way." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "A manifold encoding decoding architecture of CNN VAE for long ECG signals which facilitates interpretable model design." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024cnn,\ntitle={{CNN} Variational autoencoders' reconstruction ability of long {ECG} signals},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v3XabZsB7j},\nnote={under review}\n}" }, "abstract": { "value": "Can variational auto-encoders (VAEs) generate flexible continuous latent space for long electrocardiogram (ECG) segments and reconstruct the input? A folded VAE architecture is introduced in this study which is able to encode long ECG segments by splitting an input segment into folds and process them in sequence using a narrow field-of-view in the encoder and concatenate them at the end, instead of processing the long segment at a time. The VAE decoder follows similar folding and concatenation strategy for reconstruction of the original ECG segments. The proposed folded VAE architecture is able to generate better reconstruction of long 30-second ECG segments compared to unfolded classical VAE approach which often produce trivial reconstruction of long ECG segments. Experimental results show that the latent representation generated by our folded VAE architecture not only retains rich compressed information but also aids designing interpretable models by providing decision-making insights." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "VAE", "CNN", "electrocardiogram", "reconstruction", "compression", "interpretability" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9d57ab156227dd7f49f132e5d6010291d8569d02.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "CNN Variational autoencoders' reconstruction ability of long ECG signals" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v44CUwEeDY
Proper Orthogonal Decomposition for Scalable Training of Graph Neural Networks
main
Active
Graph Neural Networks;Scalability;Proper Orthogonal Decomposition;Sublinear Complexity
learning on graphs and other geometries & topologies
3;3;3;5
5;5;4;3
3;2;2;3
1;1;2;2
2;2;2;1
3.5
4.25
2.5
1.5
1.75
-0.870388
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "My major concerns are related to the clarity and the performance." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Enhancing GNN efficiency is an important topic.\n\n2. Theoretical justification is provided to demonstrate how POD can preserve graph information.\n\n3. Using POD to improve GNN efficiency is a novel approach." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to develop a scalable training approach for GNNs based on the proper orthogonal decomposition (POD) technique. It proposes the PGNN approach, which includes a preprocessing stage to sketch the input node feature matrix, sketch the convolution matrix, and generate count-sketch matrices to obtain the sketches. Then by using the sketches in the training stage, it greatly reduces the complexity as the dimensionality is reduced. The paper provides theoretical justification as well as the empirical study results to justify the power of PGNN." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Clarity - The paper is not very easy to follow. 
\n- The methodology section is hard to understand; bringing the pseudocode from the Appendix into the main paper may help with this.\n- The experiment section applies the proposed PGNN approach to various GNN backbones (SGC, GCN, SAGE), each compared to different baselines and on different benchmark datasets, which looks a bit confusing to me.\n- The conclusion of each experiment is hard to find in the paper.\n\n2. The performance improvements look marginal." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper is easy to understand.\n\n2. The idea is novel." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a sketch-based GNN training method that does not require updating the sketches during training. PGNN uses POD sketches to approximate the update rules of GNNs. The effectiveness of the PGNN method is validated by experimental results on five datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. As a work on GNN training, the current amount of experimentation is far from sufficient. 
The authors presented very few experimental results, which greatly undermines the solidity of this paper. The results from Table 2 to Table 6 are scattered, missing many important results, such as the results of GCN and GAT on Products.\n2. There seems to be a problem with the format of this paper. I am unable to select any text.\n3. The authors mention in Section 3 that the proof can be found in Appendix A, but I was unable to find Appendix A." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* I would expect the use of billion-scale datasets instead of datasets such as Cora and Citeseer, which are not reliable datasets for comparing model quality. Do the authors have any results for such datasets?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The proposed method makes GNN training a little bit more efficient.\n* The theorems make the proposed method theoretically sound." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes approximating the feature and adjacency matrices in a lower-dimensional subspace to increase the computational efficiency of model training. Theorems about the quality of the approximations are proved. 
Experiments show how effective the proposed approach is in terms of trained model quality." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The speedup one gets by sacrificing model quality is not great.\n* For a proposed method claiming to improve efficiency, the datasets used are small. ogbn-papers100M, ogbn-mag240M, igb-het, igb-hom datasets would be more appropriate to prove the real worth of the method.\n* Figure 3 y-axis has no reference numbers.\n* Experiments against GraphBolt's [1] fully GPU-enabled GraphSAGE implementation [2] should be made if the authors want to compare against one of the best available GraphSAGE implementations, when it comes to runtime efficiency.\n* The experimental evaluation is focused against the SGC baseline, reducing the impact of the work. When the method is compared against nonlinear baselines, it does not fare well (Table 6).\n\n\n[1] https://www.dgl.ai/dgl_docs/api/python/dgl.graphbolt.html\n\n[2] https://github.com/dmlc/dgl/blob/master/examples/graphbolt/pyg/node_classification_advanced.py" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weaknesses." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "-The method is able to achieve a high-degree of compression on ogbn-arxiv and reddit with comparable accuracy with compared against sketch-gnn which is using less-aggressive compression. \n\n-The POD ostensibly has not been considered in this type of training regime before. \n\n-Wide set of experiments -- citeseer, cora, ogbn-arxiv, reddit are all classical GNN datasets along with comparisons against sketch-gnn, Graph-SAINT, VQ-GNN." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose an algorithm to shrink the memory and computational complexity of forward passes in a message-passing based setting for GNNs. The authors rely on a proper orthogonal decomposition (POD) to achieve compression, mixed with some classical sketching techniques such as locality-sensitive hashing and count sketches. The method is able to achieve high compression over some graph datasets while remaining competitive against the uncompressed setting and baselines such as Sketch-GNN. Theory is provided on the optimality of the POD along with error bounds on the approximation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-Novelty is limited: the approach is simply using a low-rank projector and still relies on count-sketches per the update rule (5). The sketched update rules look remarkably similar to those of Sketch-GNN minus the POD which is taking care of the non-linearity (more on this below).\n\n-Sketch-GNN uses polynomial tensor sketches to avoid blow-up in the backwards pass when examining the derivative with respect to the activation function. The authors of this work are using LSH (per SLIDE) to avoid this. 
A few issues with this: (1) Is this really less complicated or more computationally efficient than using learnable sketches? LSH (the SimHash) relies on dense Gaussian matrices, and even SLIDE acknowledges learnable projections must be used. (2) The usage is not appropriate: observe that the LSH in SLIDE is used in the *final* layer, as the magnitude of the dot product directly corresponds to the magnitude of the logit, i.e., class probability. When used in intermediate layers, all you are doing is ignoring smaller activations -- but these can be very important in the update, which is why the SLIDE strategy is nearly exclusively used for the final layer of massive, extreme multi-label compression tasks. (3) Sneaking in LSH for optimized forward and backwards passes is a non-trivial engineering task. The audience would appreciate seeing computational run-times associated with this overhead.\n\n-The theory is weak. In Theorem 1, the authors should clarify what they mean by \"optimal projection matrix\", as you have to look in the Appendix to gather it. The result is a close cousin of the Eckart-Young-Mirsky theorem, and the proof amounts to a few obvious inequality simplifications followed by a citation. The appendix recycles several lemmas from Ding et al., 2022, and the error bound, again, follows routine calculations from sketching theory.\n\n-The experimental results are weak. Table 3 shows the results are within-error equivalent to Sketch-GNN, thus showing no non-trivial improvement. In Table 4, the accuracies mostly lag behind or only minimally improve over the baselines. The authors should increase the sketch-ratio until the accuracy exceeds their competitors' so the audience can understand the performance curves better. Table 6 has the same issue -- just increase the ratio until PGNN outcompetes Sketch-GNN so we can understand at which ratio this will occur.\n\nMinor: Please fix the citations. They read as normal text -- parenthesize them." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce PGNN, a novel sketch-based method utilizing Proper Orthogonal Decomposition to train Graph Neural Networks efficiently, achieving sublinear training time and memory usage relative to graph size." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024proper,\ntitle={Proper Orthogonal Decomposition for Scalable Training of Graph Neural Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v44CUwEeDY},\nnote={under review}\n}" }, "abstract": { "value": "As large-scale graphs become ubiquitous in real-world applications, there is growing concern about \nthe memory and time requirement to train a graph neural network (GNN) model for such datasets.\nStoring the entire adjacency and node embedding matrices in memory is infeasible in such a scenario. Standard sampling-based methods for addressing the memory constraint suffer from the dependence of the number of mini-batches on the graph size. Existing sketch-based methods and graph compression techniques operate at higher sketch ratios, with the graph compression techniques showing poor generalization, implying that different GNNs trained on the same synthetic graph have performance gaps. Sketch-based methods necessitate online learning of sketches, further increasing the complexity. In this paper, we propose a new sketch-based algorithm, PGNN, employing the Proper orthogonal decomposition (POD) method to craft update rules to train GNNs, improving the memory requirement and training time without the complication of updating the sketches during training. Experiments on standard graph datasets show that PGNN can reach much lower sketch ratios without compromising the performance. We prove the optimality of the POD update rule for the linearized GNN (SGC). 
Empirical findings validate our approach, demonstrating superior performance at reduced sketch ratios and adaptability across various GNN architectures." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Graph Neural Networks", "Scalability", "Proper Orthogonal Decomposition", "Sublinear Complexity" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/8a83662ca8166b418d88ca1e1ebe6502caebebf6.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/642544e83614a460f5d1ff05cce52ee2cb4005e7.pdf" }, "title": { "value": "Proper Orthogonal Decomposition for Scalable Training of Graph Neural Networks" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v46TPwU0Uy
ControlVAR: Exploring Controllable Visual Autoregressive Modeling
main
Active
Autoregressive generation;Controllable image generation
applications to computer vision, audio, language, and other modalities
3;5;5
5;5;4
2;2;3
1;2;3
2;2;1
4.333333
4.666667
2.333333
2
1.666667
-0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "I appreciate the reviewer's and AC's time and effort. I decide to withdraw the submission at this step." }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "In addition to the questions in the weaknesses part, I have the following questions:\n* The authors explore different model depths, which I appreciate; however, I am curious about the necessity of this exploration. 
What specific insights or conclusions can be drawn from varying the model depth?\n* The competition between autoregressive and diffusion models remains strong. I would like to know the authors' perspective on the advantages of using an autoregressive approach for conditional generation compared to the more mature diffusion methods." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The alternating prediction of image tokens and control tokens seems new.\n* Experiments show the effectiveness of the proposed methods.\n* The visualization is clear and helps illustrate the framework of the proposed method and the generation results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces ControlVAR, a novel framework for controllable visual autoregressive modeling, which integrates pixel-level controls to enhance conditional image generation. By transferring pixel-level controls such as masks into the same RGB space as images, the control and the image can be tokenized using the same approach. The authors also leverage teacher-forcing guidance to enhance the sampling quality. Extensive experiments demonstrate the effectiveness of the proposed approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The organization of the paper lacks clarity, with some confusing aspects. For instance, at the beginning of Section 3, the notation is unclear: $C$ represents pixel-level control, while $c$ stands for token-level control, but the distinction between these two is not fully explained. Additionally, the problem formulation is introduced without any examples, making it challenging to follow. The first example of control only appears on page five, and the tokenization method is explained on page six. 
Before this point, it’s unclear why the number of control tokens must match the image tokens. Including examples or preliminaries earlier in the section would help readers understand these concepts. Moreover, the notation in Equation (6) is confusing—it's not immediately apparent how $x$ and Equation (6) are derived.\n* This approach also seems computationally intensive, as ControlVAR sequences are twice as long as those in VAR, leading to at least double the training and inference time. Although the authors have compared the training speed of ControlVAR with ControlNet and T2I-Adapter, the model configurations for the comparison are not provided, which makes the comparison unconvincing.\n* The citation format should be revised. Instead of relying solely on \\cite, please use \\citep or \\citet as appropriate for the context." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. ControlVAR currently requires fine-tuning when changing base models, which could limit its adaptability. Are there plans or ongoing efforts to reduce the dependency on model-specific tuning to improve scalability, particularly for scenarios where retraining large models is not feasible?\n\n2. The theoretical framework for achieving control, introduced in Section 3.2, could be expanded to provide more depth. 
Could you elaborate on how control is achieved within the AR setup, particularly at the pixel level, and what makes this approach effective?\n\n3. ControlVAR’s reliance on control conditions might limit its use in scenarios that require unconditioned generation. Is there a potential for adapting or extending ControlVAR to handle unconditioned tasks, and if so, could you describe how that might be approached?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. **Promising Direction for AR Model Control**: The paper addresses an important and interesting challenge—how to control autoregressive (AR) models effectively—which is valuable for the community as AR applications grow.\n2. **Well-Designed Experiments**: The experiments are thoughtfully set up for various tasks, providing a clear view of the framework’s capabilities and its performance compared to popular models.\n3. **Clear Writing**: The paper is well-written and easy to follow, making the technical aspects and contributions understandable to a broad audience." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces **ControlVAR**, a “controllable autoregressive” (AR) image generation framework designed to enhance flexibility and efficiency in conditional generation tasks, particularly as an alternative to diffusion models (DMs) due to their computational costs and latency. \n\nThe work jointly models image and pixel-level conditions, using a teacher-forcing guidance (TFG) strategy that improves controllable generation by substituting predicted values with ground truth during inference. \n\nThis approach enables adapted VAR to handle a wide range of tasks—such as control-to-image and image-to-control generation—and even unseen tasks like control-to-control generation. 
\n\nThe work demonstrates good performance, compared with ControlNet and T2I-Adapter across multiple pixel-level control tasks, including mask, canny, depth, and normal control." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Resource-Intensive Tuning and Limited Flexibility**: My major concern is the motivation for the work. The limitation of ControlVAR is its requirement for fine-tuning the pre-trained VAR model, which reduces the method’s flexibility and scalability. Unlike diffusion models such as Stable Diffusion, where ControlNet adds control without altering the base model’s weights, ControlVAR necessitates modifications to the underlying VAR model to enable control. This limitation makes it less practical for applications that demand flexibility across diverse base models, as switching to a new foundational model would still require retraining or extensive fine-tuning, resulting in substantial computational and resource overhead. \n2. **Lack of Adaptability**: ControlVAR’s requirement for retraining or fine-tuning when switching base models significantly limits its adaptability, particularly in settings where retraining large models is impractical. This limitation could hinder ControlVAR's adoption in real-world applications, as it restricts the flexibility needed for various scenarios. Furthermore, ControlVAR's dependence on control conditions makes it challenging to function independently of them, reducing its suitability for tasks that might require unconditioned generation. Although extensive experiments validate the framework's performance, the primary motivation for ControlVAR should be clarified further to avoid potential confusion within the community.\n3. **Delayed Method Introduction and Lack of Theoretical Depth**: The core methodology of ControlVAR is not introduced until Section 3.2, and even then, the theoretical grounding is somewhat shallow. 
This late introduction and limited depth in the explanation of controllable modeling make it challenging for readers to fully understand the framework’s design and motivations. The paper’s presentation could be strengthened by expanding the network modeling section with more rigorous and specific theoretical insights. A more structured approach to explaining the controllability mechanism would enhance the clarity of ControlVAR’s contributions and make the work more accessible to a broader audience. Without a thorough explanation, the framework may appear conceptually fragmented, and readers may struggle to appreciate the full scope of its methodological innovations.\n4. **Limited Novelty in Teacher-Forcing Guidance**: While teacher-forcing guidance is a core element of the ControlVAR framework, it does not introduce substantial novelty and appears to be similar to existing guidance methods, such as classifier-free guidance. Its application within ControlVAR is not notably distinct from previous uses in controllable generation, which may lessen its perceived value and impact within the paper. This similarity raises questions about the uniqueness of the guidance method as it relates to enhancing control in AR models. Without a more innovative or tailored approach, teacher-forcing guidance may come across as a standard technique rather than a breakthrough for controllable generation.\n5. **Focus and Theme Ambiguity**: The paper’s inclusion of various tasks, such as image-to-control and control-to-control predictions, diverges from its primary focus on control-to-image generation. This range of tasks blurs the paper’s central theme, making it challenging for readers to pinpoint the core contribution and focus of ControlVAR. While demonstrating versatility is valuable, the inclusion of these peripheral tasks risks diluting the main message and may confuse readers about the intended purpose of the model. 
Furthermore, the name “ControlVAR” suggests a potentially misleading emphasis on control-focused, multi-task learning, which may not align with the actual scope and main objectives of the framework. This ambiguity in the framework’s focus and naming could hinder its positioning within the broader field of controllable generation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "None" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Although the field of controllable image generation using autoregressive models is novel, and the experiments showcase the strong controllability of the proposed ControlVAR, the main concern for the reviewer lies in the method’s scalability to other autoregressive models. If a much larger and more powerful base model emerges in the future, would it also require fine-tuning to achieve controllability? (ControlNet, after all, doesn’t require retraining Stable Diffusion.) Perhaps freezing the base model could be a solution worth exploring, and we would appreciate the authors’ perspective on this point." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.\tThis paper focuses on controllable image generation using autoregressive models, a forward-looking area with substantial application potential, providing valuable insights for the research community.\n2.\tThe paper introduces ControlVAR, which employs pixel-level controls in autoregressive modeling for controllable image generation, and employs several innovative mechanisms, such as teacher-forcing guidance (TFG) for controllable sampling.\n3.\tThe paper provides some empirical results, outperforming popular DMs, such as ControlNet and T2I-Adapter, across different conditional generation tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces ControlVAR, an autoregressive framework for conditional image generation that jointly models images and pixel-level controls, enabling flexible and efficient visual generation. By employing a teacher-forcing guidance strategy, ControlVAR facilitates controlled testing across various tasks, including control-image generation, inpainting, and image-to-control prediction. Experimental results indicate that ControlVAR performs competitively with diffusion-based models, such as ControlNet, suggesting its suitability as an alternative for controllable visual generation with potentially lower computational costs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tControlVAR requires tuning the pre-trained VAR model, which limits the flexibility of the proposed method. Switching to a new base model still requires retraining, making this approach resource-intensive. In diffusion models like Stable Diffusion, ControlNet does not modify SD’s weights. 
However, if there is a mature autoregressive generation model with a parameter size similar to SD in the future, ControlVAR would require retraining or fine-tuning it, which would be unacceptable in most application scenarios.\n2.\tIn Section 3.1, the author provides extensive details on a particular method of control extraction, but this content may not be suitable to occupy a significant portion of the core methodology section; it might be more appropriate to place it in the experiments section or supplementary materials for a more detailed discussion. \n3.\tSimilarly, the Tokenization part merely states that the same tokenizer as VAR is used, offering minimal informative value. \n4.\tThe concept of methods is only introduced in Section 3.2, resulting in a lack of detailed information throughout the methods section, and the interpretation is not sufficiently in-depth. The overall network modeling section could be described in a more specific and theoretically grounded manner." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@misc{\nli2024controlvar,\ntitle={Control{VAR}: Exploring Controllable Visual Autoregressive Modeling},\nauthor={Xiang Li and Kai Qiu and Hao Chen and Jason Kuen and Kai Hu and Jiuxiang Gu and Zhe Lin and Bhiksha Raj},\nyear={2024},\nurl={https://openreview.net/forum?id=v46TPwU0Uy}\n}" }, "abstract": { "value": "Conditional visual generation has witnessed remarkable progress with the advent of diffusion models (DMs), especially in tasks like control-to-image generation. However, challenges such as expensive computational cost, high inference latency, and difficulties of integration with large language models (LLMs) have necessitated exploring alternatives to DMs. This paper introduces ControlVAR, a novel framework that explores pixel-level controls in visual autoregressive (VAR) modeling for flexible and efficient conditional generation. 
In contrast to traditional conditional models that learn the conditional distribution, ControlVAR jointly models the distribution of image and pixel-level conditions during training and imposes conditional controls during testing. To enhance the joint modeling, we adopt the next-scale AR prediction paradigm and unify control and image representations. A teacher-forcing guidance strategy is proposed to further facilitate controllable generation with joint modeling. Extensive experiments demonstrate the superior efficacy and flexibility of ControlVAR across various conditional generation tasks against popular conditional DMs, \\eg, ControlNet and T2I-Adaptor." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Xiang_Li35", "~Kai_Qiu2", "~Hao_Chen15", "~Jason_Kuen1", "~Kai_Hu2", "~Jiuxiang_Gu2", "~Zhe_Lin1", "~Bhiksha_Raj1" ] }, "authors": { "value": [ "Xiang Li", "Kai Qiu", "Hao Chen", "Jason Kuen", "Kai Hu", "Jiuxiang Gu", "Zhe Lin", "Bhiksha Raj" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Autoregressive generation", "Controllable image generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": { "value": "li|controlvar_exploring_controllable_visual_autoregressive_modeling" }, "pdf": { "value": "/pdf/3ec653aed0cc1c35ab95dd6d2c609bdaec2b183e.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "ControlVAR: Exploring Controllable Visual Autoregressive Modeling" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v49jqwmGtM
Convergence Analysis of Gradient Descent under Coordinate-wise Gradient Dominance
main
Active
Non-convex Optimization;Nash Equilibrium;Gradient Dominance;Strict Saddle
optimization
5;5;6;6
4;4;3;2
2;3;2;3
2;2;2;3
3;3;3;4
5.5
3.25
2.5
2.25
3.25
-0.904534
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See Weaknesses above" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well-written, and the idea is easy to follow. The authors did a very nice job in the narrative of the paper. The explanation of their new assumption, applications of where this assumption appears, and the theoretical analysis are clearly presented." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper focuses on solving non-convex problems and, in particular, aims to design methods for finding Nash Equilibrium. The paper proposes the n-sided PL condition and provides convergence guarantees for different block coordinate descent methods (BCD, IA-RBCS, and A-RBCD) under this new assumption. Applications of where the assumption is satisfied and some preliminary experiments are presented." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I believe there are a few issues in terms of using specific terminology that are not standard, and the author might need to revise. \n\n1. The notion of Nash Equilibrium (NE) is very popular in min-max problems and multi-agent games. 
However, the paper focuses on pure minimization problems and defines NE as a standard concept, which is not typically the case. How is their NE related to the minimizer of the function they are trying to optimize? This, I believe, was not clearly presented in the paper. \n\n2. I believe the notion G_f(x) should be more carefully presented, with an explanation of why it is a valid measure of convergence. Why is showing that f(x^t) - G_f(x^t) decreases sufficient to guarantee convergence to global minima (again, this is related to the notion of NE)? I would appreciate it if the authors provide more details on this. \n\n3. Why does Assumption 3.4 make sense? This looks like a very artificial condition to have just to make the proof work. The paper shows that one example (in Fig 1) satisfies it, but that does not mean that this is a relaxed condition. More explanation is needed.\n\n4. The paper is clearly theoretical, but I would appreciate it if the authors compared their proposed method with state-of-the-art approaches for training Linear Residual Networks and the Infinite Horizon n-player Linear-quadratic (LQR) Game. In my opinion, this is very important for the paper, as then it would convey the message that through the new assumption, we not only provide new convergence guarantees but that, via the new analysis and algorithms, we can more efficiently solve these practical problems." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. It would be helpful to highlight the differences between IA-RBCD and A-RBCD that arise due to using approximate best responses.\n2. Figure 5d is used to claim that the third case does not occur while executing the A-RBCD algorithm. I suppose the point being made is that $\\rho\\leq \\gamma-\\gamma\\frac{\\alpha^3}{13}$. A clarification on this point would be appreciated. If the graph could indicate the line $y=\\gamma-\\gamma\\frac{\\alpha^3}{13}$, that would make it much easier to understand." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Overall, the paper is well written and uses simple examples to provide intuition for important concepts. The strengths of the paper are as follows:\n1. The $n$-sided PL condition introduced in the paper is a weaker notion than the two-sided PL condition, making the results more general. The assumption is weaker in the sense that under it, stationary points are not guaranteed to be global minima.\n2. A linear convergence guarantee is proven for BCD and GD under the $n$-sided PL condition and a gradient alignment assumption." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper focuses on the problem of finding a Nash Equilibrium for a function $f(x)$ having the block form $f(x)=f(x_1, …, x_n)$. The paper introduces the $n$-sided PL condition, which extends the notion of the PL condition to functions with block structure. Under this weaker assumption, the Block Coordinate Descent algorithm is analyzed and its convergence to the set of NE points is proven. 
Under an additional assumption on the alignment of the gradients of $f$ and $G_f$, linear convergence of BCD and vanilla GD is proven. Furthermore, the Adaptive Randomized Block Coordinate Descent algorithm and its oracle version are proposed, and their convergence rates are analyzed without the additional assumption on gradient alignment." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Insufficient empirical evidence presented for demonstrating the benefits of the proposed variant: a. A comparison with the BCD method is missing for the experiments with the Linear Residual Network and the Infinite Horizon $n$-player LQR game. Including it would give an empirical comparison of the convergence rates of BCD and A-RBCD, which would be helpful. b. The variant A-RBCD requires significantly more gradient computation than BCD. It would be beneficial to have a comparison keeping the gradient computation budget constant across different methods. c. Furthermore, adding SGD and Adam as baselines would make the experiments more persuasive." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Can you elaborate further on what I wrote in W1? When could it be important to find Nashes in minimization problems? 
Or, from a slightly different perspective, could these algorithms generalize to multi-player games with adversarial components?\n- Have you checked if the proposed algorithms are capable of further avoiding saddles, when possible, either explicitly or in some sort of an ‘implicit-bias’ sense? We don’t have any evidence proving/disproving this since, in the experiments in the paper, for the first quadratic case the only option is a saddle, while for the rest of the cases, we can only see $f(x) - G_f(x)$, from which we cannot distinguish saddles from minima.\n- Case 3 happens when the vectors $\\nabla f(x_t)$ and $\\nabla G_f(x_t)$ are not well-aligned (or $\\| \\nabla G_f(x_t) \\|$ is much larger than $\\| \\nabla f(x_t) \\|$… which probably won’t happen). While by the definition of $G$ it seems intuitive that the two gradients will eventually align near the stationary points (while both converging to zero vectors), are there any ideas to quantify or theoretically analyze this part a bit further?\n\nTYPO. Page 6, converges sublinearly for $f_1$ (instead of $f_2$)\n\nTYPO. Algorithm 4, $x_{-j}^t \\rightarrow x_{-j}$ (while we use $x = x^t$ in Algorithm 3)" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- I believe that extending ideas from convex/nonconvex or minimax optimization to multi-player games is a valuable direction to study and develop further. The $n$-PŁ condition might be useful in many cases, especially in handling nonconvex (and nonconcave) games.\n- Experiments are well-aligned with the theorems. The results all demonstrate and support exactly what’s in the theoretical statements.\n- The overall writing is clear and easy to follow." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on the convergence rates of block coordinate descent algorithms under an $n$-sided PŁ condition. Based on an adaptive, theoretical algorithm IA-RBCD that provably converges to the NE of a minimization problem with an $n$-sided PŁ objective, the paper proposes a novel algorithm A-RBCD that is implementable via computing the approximate best response and shows the same convergence speed as the ideal version. The paper also shows empirical results in various practical settings involving $n$-sided PŁ functions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **W1.** I don’t see why and when finding an NE for a *minimization* problem could be important. In Figure 1, the set of stationary points (which is equal to the set of Nashes for $n$-PŁ functions) might contain saddle points. Indeed we would want to find Nashes for minimax or multi-player game settings, but for (nonconvex) minimization problems it’s usually also important to *avoid* possible saddles among stationary points (Lee et al., 2016). I get that there might be cases when we won’t care about saddles and just want to find stationary points, maybe as in the example function with only strict saddle points in Section 4, but still, I think the paper does not make clear why this search for NEs and the $n$-PŁ assumption are really necessary or interesting.\n- **W2.** In the theorems, everything related to Case 3 makes the theoretical contribution of this paper quite weak. While empirical results in Figure 5 suggest that Case 3 never really happens here, there should have been a better explanation than just saying ‘rigorously verifying these cases is intractable’. 
There are no lower bound results, which means that there could be pathological cases that get stuck in Case 3 for a long time, and the $n$-PŁ assumption isn’t powerful enough. In fact, we already need a stronger $(\\nu, \\theta)$-PŁ assumption to get non-asymptotic results for Case 3, which is even sublinear.\n- **W3.** Continuing from **W2**, the main results all state no more than case-specific one-step contraction inequalities. These should have been conclusively rewritten in terms of iteration complexities for a complete understanding of the convergence rate of this algorithm. This would include quantifying how rarely Case 3 happens (or when exactly we can make it rare), and seeing what convergence rate we get after fixing a single step size (or possibly a schedule) considering all three cases." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "-See the previous section.\n-Lines 251-252: you mentioned \"The above result ensures the convergence of BCD to the NE set, but it does not necessarily indicate whether the output converges to a point within the NE set.\" Can you give an example where this is the case? I.e., we have convergence to the NE set but not convergence to a point in the set. \n-Line 292: can you explain why f(x) = G_f(x) if and only if x is an NE?\n-Assumption 3.4: you require the condition (5) for a given set of points. 
But how can one know this set of points beforehand? You mention just afterwards that, for instance, for f_0 this assumption is valid on some domain, but the iterates of a given algorithm applied to this problem may leave this domain in the middle of the algorithm.\n-Can you explain why Ass 3.4 holds, for instance, for strongly convex functions?\n-In Thm 3.6, in the linear rate of convergence, since \\mu, \\alpha and \\kappa \"do not depend\" on \"n\", can this rate become negative for large \"n\"?\n-Definition 3.8: do you really need the minimum to be zero?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The main strengths of this work are:\n-The authors extended the PL condition to the n-sided PL condition. \n-The authors proposed adapted variants of the gradient descent (GD) and block coordinate descent (BCD) algorithms and demonstrated the convergence of these methods to Nash Equilibrium.\n-The authors provided theoretical proofs for the convergence rates of the proposed algorithms and established conditions under which linear convergence can be guaranteed.\n-The authors provided some numerical tests to validate their claims." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work explores the optimization problem of finding a Nash Equilibrium for nonconvex functions using first-order gradient-based algorithms and their variations, such as block coordinate descent. The authors introduce the n-sided PL condition, an extension of the PL condition. Then, under this condition, they analyze the convergence of various variants of gradient descent algorithms. They provide theoretical proofs of convergence rates for these algorithms and examine conditions under which linear convergence can be guaranteed." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weaknesses of this work are:\n-Dependence on Strong Assumptions: I understand that the authors gave some toy examples and linear NNs that satisfy the assumptions; however, in general these assumptions remain strong and hard to check.\n-The empirical validation is focused on simple toy examples and linear NNs. The performance of the proposed algorithms on large-scale or more complex problems is not deeply explored, making it uncertain how well the algorithms scale in practice. The authors may consider including some tests on bigger problems to assess the numerical efficiency of the proposed variants, even if the conditions for convergence are not necessarily satisfied.\n-The paper lacks a detailed computational complexity analysis of the proposed variants (the cost per iteration compared to the classical GD or BCD). The added steps (like approximating best responses in certain cases) introduce computational overhead, especially for high-dimensional data, and it’s unclear how these adaptations perform against standard or simpler gradient-based methods in terms of cost/runtime.\n-The adaptive algorithms, particularly with the introduction of conditions for selecting step sizes and updating directions, could be challenging to implement efficiently in practice." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024convergence,\ntitle={Convergence Analysis of Gradient Descent under Coordinate-wise Gradient Dominance},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v49jqwmGtM},\nnote={under review}\n}" }, "abstract": { "value": "We consider the optimization problem of finding Nash Equilibrium (NE) for a nonconvex function $f(x)=f(x_1,...,x_n)$, where $x_i\\in\\mathbb{R}^{d_i}$ denotes the $i$-th block of the variables. \nOur focus is on investigating first-order gradient-based algorithms and their variations such as the block coordinate descent (BCD) algorithm for tackling this problem. \nWe introduce a set of conditions, termed the $n$-sided PL condition, which extends the well-established gradient dominance condition a.k.a Polyak-{\\L}ojasiewicz (PL) condition and the concept of multi-convexity. This condition, satisfied by various classes of non-convex functions, allows us to analyze the convergence of various gradient descent (GD) algorithms. \nMoreover, our study delves into scenarios where the objective function only has strict saddle points, and normal gradient descent methods fail to converge to NE. In such cases, we propose adapted variants of GD that converge towards NE and analyze their convergence rates." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Non-convex Optimization", "Nash Equilibrium", "Gradient Dominance", "Strict Saddle" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/399883408091903871689fdd8baf5eca00946b2c.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/1650d25759bad38898dcc696a6e17784c654f2bf.zip" }, "title": { "value": "Convergence Analysis of Gradient Descent under Coordinate-wise Gradient Dominance" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v4Bl6tfaaO
Bayesian-LoRA: LoRA based Parameter Efficient Fine-Tuning using Optimal Quantization levels and Rank Values trough Differentiable Bayesian Gates
main
Desk Reject
PEFT;LORA
other topics in machine learning (i.e., none of the above)
Cristian Meo;Ksenia Sycheva;Carlo Saccardi;Anirudh Goyal;Pietro Lio;Justin Dauwels
~Cristian_Meo1;~Ksenia_Sycheva1;~Carlo_Saccardi1;~Anirudh_Goyal1;~Pietro_Lio1;~Justin_Dauwels1
0
0
0
0
0
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": { "value": "Line 146 reveals the author identities, which breaks double blind review." }, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": { "value": "Submission Desk Rejected by Program Chairs" }, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@misc{\nmeo2024bayesianlora,\ntitle={Bayesian-Lo{RA}: Lo{RA} based Parameter Efficient Fine-Tuning using Optimal Quantization levels and Rank Values trough Differentiable Bayesian Gates},\nauthor={Cristian Meo and Ksenia Sycheva and Carlo Saccardi and Anirudh Goyal and Pietro Lio and Justin Dauwels},\nyear={2024},\nurl={https://openreview.net/forum?id=v4Bl6tfaaO}\n}" }, "abstract": { "value": "It is a common practice in natural language processing to pre-train a single model on a general domain and then fine-tune it for downstream tasks. However, when it comes to Large Language Models, fine-tuning the entire model can be computationally expensive, resulting in very intensive energy consumption. As a result, several Parameter Efficient Fine-Tuning (PEFT) approaches were recently proposed. 
One of the most popular approaches is low-rank adaptation (LoRA), where the key insight is decomposing the updated weights of the pre-trained model into two low-rank matrices. However, the proposed approaches either use the same rank value across all different weight matrices, which has been shown to be a sub-optimal choice, or do not use any quantization technique, one of the most important factors when it comes to a model's energy consumption. In this work, we propose Bayesian-LoRA, a new method that approaches low-rank adaptation and quantization from a Bayesian perspective by employing a prior distribution on both quantization levels and rank values. As a result, B-LoRA is able to fine-tune a pre-trained model on a specific downstream task, finding the optimal rank values and quantization levels for every low-rank matrix. We validate the proposed model by fine-tuning a pre-trained DeBERTaV3 on the GLUE benchmark. Additionally, we fine-tune Phi-2 and Qwen, and evaluate them on few-shot and zero-shot MMLU. We compare our proposed method with relevant baselines and present both qualitative and quantitative results, showing its ability to learn optimal-rank quantized matrices. B-LoRA performs on par with or better than the baselines while reducing the total number of bit operations by roughly 70\\% compared to the baseline methods." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Cristian_Meo1", "~Ksenia_Sycheva1", "~Carlo_Saccardi1", "~Anirudh_Goyal1", "~Pietro_Lio1", "~Justin_Dauwels1" ] }, "authors": { "value": [ "Cristian Meo", "Ksenia Sycheva", "Carlo Saccardi", "Anirudh Goyal", "Pietro Lio", "Justin Dauwels" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "PEFT", "LORA" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": { "value": "meo|bayesianlora_lora_based_parameter_efficient_finetuning_using_optimal_quantization_levels_and_rank_values_trough_differentiable_bayesian_gates" }, "pdf": { "value": "/pdf/9f0636ccc5703170c04ccaa3bef771f5c5cfbc1b.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Bayesian-LoRA: LoRA based Parameter Efficient Fine-Tuning using Optimal Quantization levels and Rank Values trough Differentiable Bayesian Gates" }, "venue": { "value": "ICLR 2025 Conference Desk Rejected Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Desk_Rejected_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v4MTnPiYXY
Q-SFT: Q-Learning for Language Models via Supervised Fine-Tuning
main
Active
offline reinforcement learning;language models;dialogue;robotics
reinforcement learning
3;6;6;8
4;3;4;4
2;3;4;3
2;3;3;4
1;3;3;3
5.75
3.75
3
3
2.5
-0.080845
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- While the theoretical analysis shown in Section 4.3 demonstrates that the Q value learned with Equation (3) has the bounds independent from the hyperparameter \\gamma, I wonder what the ablating effect \\gamma brings in practice: Will certain configuration of \\gamma contribute the learning of a better Q-value estimation? This question has another motivation: In Equation (3), some of the probability mass is allocated to the dummy actions a', and the weight ratio between the actions in the demonstration trajectory and the dummy ones has one degree of freedom determined by \\gamma. It would therefore be great to sweep \\gamma to study the ablation effect of the weight ratio.\n - And by the way, how is the \\gamma set in the demonstrated experiments?\n\n- According to Equation (3), learning the Q-value function requires a learned behavior policy \\pi_\\beta in prior. How will the quality of the behavior policy impact the learning of the Q-values, and the performance of the ultimate policy? While I appreciate the experiments in Figure 4 where only 10% of the training data are used to get the \\pi_\\beta, I wonder how the performance grows with the quality of the initial behavior policy. 
For example, if the initial behavior policy is near perfect, will it be that the ultimate policy, although weighted with the learned Q-values, performs similarly with the initial one?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- The work targets at the multi-turn RL scenario for LLMs, which is vital for the development of complex reasoning and agentic use in LLMs.\n- The proposed algorithm is backuped with a solid theoretical motivation.\n- The experimenting scenarios have a broad coverage, and the performance is well-demonstrated." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed Q-SFT, a method that integrates Q-learning with SFT for LLMs in the multi-turn RL setting. To exploit both the representation and the logits in LLMs learned in the pretraining stage and bypass the poor scaling effect of TD-style objectives, the authors proposed a novel weighted cross-entropy loss that embeds the learning of the Q-value into the weights. In this way, the logits prior in the pretrained LLMs can be leveraged, and no new head is needed to learn the Q-values. Theoretical analysis has shown the guarantee of the new learning objective leading to a conservative estimation of the value function. Experiments on several scenarios have demonstrated the effectiveness and efficiency of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Please see the Questions below." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please see the previous section." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Using probability prediction in the Q-learning objective is cool, and it sounds particularly beneficial for large vocabulary/action space. \n- The implementation of the method is straightforward: learning two models using different objectives, and use both at inference time" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method that leverages the LLM logits learned via supervised fine-tuning to approximate the Q-learning objective. Authors argue that traditional Q-learning setup suffers from discarding the logit head, and learning a new head for Q values, so the proposed method directly translates the learned logits to Q values, with theoretical analysis on its bound and approximation. Experiments include text-based games, image-based navigation and robotic manipulation, and show some benefits of the proposed method over the prior value-based RL method." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I found the pitch very confusing: it emphasizes multi-turn problems like dialogue, but the actual math formulation treats each turn as a single token in an utterance (i.e., the action space is the vocabulary space). The generation process is to generate an utterance token by token, which is conventional, and it’s confusing where the multi-turn setup plays a role other than using this generative model repeatedly to generate different utterances at different parts of the dialogue. In other words, the value function here is not the reward per utterance as stated in the intro (line 074), and question answering is also not a single-step problem as the answer is also generated token by token.\n- Experimental setup seems simplified in terms of 1) for language-based tasks, the game nature makes the effective vocabulary very small and unambiguous, 2) the well-defined reward function setup in text games do not translate into practical problems, 3) why no supervised learning results from prior work as a direct comparison?; 4) most of the datasets already have RL method outperform supervised learning which is contradictory to the intro “offline RL fine-tuning of LLMs does not reliably outperform supervised fine-tuning”. Therefore, it’s unclear whether the improvement brought by the proposed method should be contrasted to what exactly.\n- The writing can be further improved in terms of clarity, accuracy, and fixing typos. 
A few examples below: \n - Table 1: number from the proposed method is bold in the Wordle column, but it’s not the best performance method\n - Define h used in line 162\n - Prime should be on a, instead of argmax in line 218\n - Consistently use “RL” instead of having “RI” occurred \n - Citation should be within a pair of parentheses: line 037, 039" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Some typos: L39 citation, L84 sentence, L194 “performing”, Fig2 top left, \n- Eq2 should have a negative sign? Also tab1 -2.08 should be bold, not -2.11. \n- The figure 1 was bit too small and hard to read.\n- L153 defines RL as MDP, but what about POMDP?\n- About the theoretical guarantee, how good is the lower bound? A probability from a policy can be quite low for large vocabulary, in which case then the bound is not that tight?\n- How many models needed to be loaded during training? There is the main model, also a target model and the original model. So is that 3? How does that affect memory usage?\n- Given that some baselines use different base models, it would be nice to put them in the table. This will help if they perform better because their base model is stronger or not.\n- In fig3, the error bars correspond to different seeds?" 
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Novelty: the method seems to be quite novel. The way it works is quite different from a typical Q-learning where L2 loss is used. Instead it relies on a cross-entropy loss that is modified. Given the importance of RLHF in LLMs and the need of multi-step interaction, a study like this can be impactful and important.\n- The experiments are quite diverse, ranging from LLM-based text tasks to VLM-based robotic manipulation. It is also compared to a diverse set of baselines, which helps to solidify the claims.\n- There is a proof about a theoretical guarantee, which is always nice to have. The method also leads to an improvement in performance in practice too, where the gap increasing as the model scales. Given the lack of value-based approaches in LLMs, it would be interesting to see if this method will adopted widely.\n- Paper is well written and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new RL training method for LLMs that take advantage of the pretraining. It is based on Q-learning, but instead of attaching a new value head, the method uses the existing token probabilities as value function. There is some theoretical guarantees and diverse experimental results involving multi-steps interactions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I felt like the writing of the experimental results were a bit too short. It would have been good to have more ablation studies to understand why the method works, and in which situations it is more optimal. For example, since the main motivation is that the value head is not initialized from scratch, what will happen if we do that but keep the loss the same. 
Also there is not much on the training details, such as the number of training steps etc. I think the other parts can be condensed to make space. \n- Another important part that was too short was the method itself. The method is introduced between L230-261, which is about only half page. It doesn’t give enough explanation about why this method should work, so it was quite hard to understand it. For example, can you give more insight into why the proposed method should be more stable? \n- Reuse of the pretrained weights is emphasized as the main motivation, but some of the tasks actually train the model from scratch and others are not in natural language, which is a bit conflicting. There is still an improvement when trained from scratch, which is good, but there is a lack of explanation why it works better." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The motivation is clear: the instability of the Q-learning objective is a well-known challenge in the RL literature.\n- The proposed method is straightforward and demonstrates strong empirical performance, with improvements in sample efficiency and scalability over previous methods.\n- The evaluation covers a wide range of benchmarks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes Q-SFT, a new approach that enables fine-tuning of LLMs and VLMs within an offline RL framework but using a supervised learning-style loss function. The method matches or outperforms previous approaches across a range of LM fine-tuning benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "It seems there are some limitations to the proposed approach that are not addressed in the paper.\n- During inference, the approach requires double the number of forward passes—one for the original pretrained LM and one for the fine-tuned LM. This can be challenging in practical scenarios where real-time decision-making is essential, such as robotic manipulation.\n- In Theorem 4.1., the lower bound for the learned p_theta is Q* multiplied by pi_beta. If pi_beta is small, the resulting pi_theta can be overly conservative compared to the true Q*." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We present a new offline RL algorithm specifically to fine-tune pretrained LLMs and VLMs better." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024qsft,\ntitle={Q-{SFT}: Q-Learning for Language Models via Supervised Fine-Tuning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v4MTnPiYXY},\nnote={under review}\n}" }, "abstract": { "value": "Value-based reinforcement learning (RL) can in principle learn effective policies for a wide range of multi-turn problems, from games to dialogue to robotic control, including via offline RL from static previously collected datasets. However, despite the widespread use of policy gradient methods to train large language models for single turn tasks (e.g., question answering), value-based methods for multi-turn RL in an off-policy or offline setting have proven particularly challenging to scale to the setting of large language models. This setting requires effectively leveraging pretraining, scaling to large architectures with billions of parameters, and training on large datasets, all of which represent major challenges for current value-based RL methods. In this work, we propose a novel offline RL algorithm that addresses these drawbacks, casting Q-learning as a modified supervised fine-tuning (SFT) problem where the probabilities of tokens directly translate to Q-values. In this way we obtain an algorithm that smoothly transitions from maximizing the likelihood of the data during pretraining to learning a near-optimal Q-function during finetuning. Our algorithm has strong theoretical foundations, enjoying performance bounds similar to state-of-the-art Q-learning methods, while in practice utilizing an objective that closely resembles SFT. Because of this, our approach can enjoy the full benefits of the pretraining of language models, without the need to reinitialize any weights before RL finetuning, and without the need to initialize new heads for predicting values or advantages. 
Empirically, we evaluate our method on both pretrained LLMs and VLMs, on a variety of tasks including both natural language dialogue and robotic manipulation and navigation from images." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "offline reinforcement learning", "language models", "dialogue", "robotics" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/6efdf310ad2d2f7cc8e6ef31b91b7ef9abe7ef7f.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/75144dc83b59a601431eb3b36d3d86a7f3d7e097.zip" }, "title": { "value": "Q-SFT: Q-Learning for Language Models via Supervised Fine-Tuning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v4PnwdA056
DEAN: Deactivating the Coupled Neurons to Mitigate Fairness-Privacy Conflicts in Large Language Models
main
Active
Large Language Models;Fairness;Privacy
alignment, fairness, safety, privacy, and societal considerations
3;3;5;5
5;3;3;3
2;3;3;2
2;2;2;2
3;3;3;3
4
3.5
2.5
2
3
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See above." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper studies an interesting problem of the tradeoff between privacy and fairness in fine-tuning.\n- As a mitigation, the paper further proposes a training-free method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents an approach to improve privacy and fairness simultaneously by deactivating the neurons that react to both objectives." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Theorem 1: The theoretical model assumes that they share the same activation on certain representations, abstracted through a single random variable $Z$. This is a strong assumption as the activation values for both privacy and fairness data might correlate but almost unlikely to lead to the same value. Does the same theoretical result hold if one consider the model that considered two different yet correlated representations $Z_1$ and $Z_2$ respectively for privacy and fairness data, i.e., $I(Z_1;Z_2)>0$? \n\n- Proposition 1: The proposition statement seems problematic. 
In inequality (2), the terms inside mutual information is an expectation, which has no randomness.\n\n- The paper discusses an interesting finding yet it will be great to further examine this finding in different setups. E.g., is it possible to improve privacy and fairness simultaneously by simply balancing the ratio between privacy and fairness data? How do privacy and fairness vary with different number of samples in SFT?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Refers to the weakness and questions." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper investigates an interesting phenomenon in supervised fine-tuning methods, where their approach enhances the LLM's privacy awareness but decreases fairness awareness.\n\n2. The presentation is well done." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes new methods to deactivate fairness and privacy neurons, improving fairness awareness compared to Supervised Fine-tuning methods such as Full Finetuning (FFT) and LoRA." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The baseline is somewhat confusing. 
Why don’t the authors use privacy-related and fairness-related methods? The baseline methods mentioned in this paper do not seem to provide fairness and privacy analysis. There are many privacy-enhanced algorithms, such as DP-LoRA [1]. Do these algorithms also exhibit this phenomenon?\n\n2. Identifying the related neurons seems time-consuming. Can the authors provide an efficiency comparison with other methods? The proposed method, DEAN, appears to generate masks that decouple fairness and privacy-related neurons and then apply the mask to the weights. However, it is unclear when the masking occurs.\n\n3. The definitions of fairness and privacy differ somewhat from what I know in related fields. For example, how do these methods guarantee privacy and fairness (typically, in my field, we use differential privacy and group fairness definitions)? What are the formal definitions of privacy and fairness here, and how is effectiveness measured?\n\n4. There is inconsistency in the notations. For instance, in line 208, $D_f$ and $D_p$ represent the fairness and privacy-related datasets, but in line 215 and Alg. 1, $D_b$ becomes fairness related and $D_f$ becomes privacy related dataset.\n\n5. The theory part is too simplistic just like an inequality application. \n\n[1]Yu, Da, et al. \"Differentially private fine-tuning of language models.\" arXiv preprint arXiv:2110.06500 (2021)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The paper uses the Hilbert-Schmidt Independence Criterion (HSIC) to estimate mutual information. Does it provide a detailed explanation of the parameter choices for HSIC? How might variations in these parameters affect the identification of neurons coupled with fairness and privacy?\n\n\nDoes directly deactivating coupled neurons lead to information loss? Is there a significant impact on model performance from this approach? Has the paper explored alternative deactivation methods, such as partial suppression or weight scaling, to further reduce any negative effects on overall model performance?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The authors introduce DEAN, a training-free method based on information theory that decouples fairness and privacy without needing extra fine-tuning, making it both effective and easy to understand.\nThe experiments show DEAN’s strong performance across multiple models and challenging scenarios, with noticeable improvements in fairness and privacy. It’s also robust in cases where data is limited or biased, making it practical for real-world situations where high-quality data can be hard to come by.\nThe paper is clearly structured, with straightforward explanations of the problem, DEAN’s approach, and helpful illustrations. The step-by-step breakdown makes it easy to follow and replicate.\nBy addressing both fairness and privacy at the same time, DEAN is a valuable tool for LLMs in sensitive areas. 
Its training-free design means it can be widely applied and easily integrated into existing frameworks without high computational costs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "It addresses the challenge of balancing fairness and privacy in large language models (LLMs). The authors observe a trade-off where enhancing privacy awareness in LLMs through standard supervised fine-tuning (SFT) often diminishes fairness awareness, and vice versa. To mitigate this conflict, the paper introduces a novel, training-free method called DEAN (DEActivate the fairness and privacy coupled Neurons). Inspired by information theory, DEAN identifies and deactivates neurons that are coupled to both fairness and privacy awareness, thereby reducing mutual information between these representations. Experimental results demonstrate that DEAN effectively eliminates the fairness-privacy trade-off, achieving significant improvements in both areas, such as a 12.2% increase in fairness and a 14.0% increase in privacy awareness for the Qwen-2-7B-Instruct model. Additionally, DEAN shows robustness even with limited annotated data or when fine-tuning data is potentially biased. The authors suggest that DEAN could be integrated into broader frameworks to develop more ethical and responsible AI systems." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the paper compares DEAN with several fine-tuning methods, it lacks a detailed performance analysis against other advanced fairness and privacy protection techniques. Including such comparisons would help clarify DEAN’s relative strengths and weaknesses.\n· The paper relies on mutual information and HSIC to identify neurons coupled with fairness and privacy, but the accuracy of this method depends heavily on the quality and representativeness of the dataset. 
Limited or biased datasets could lead to inaccurate identification, potentially affecting DEAN’s effectiveness.\n· The paper uses a simple, threshold-based binary classification to identify neurons associated with fairness or privacy. This approach may miss neurons that contribute to multiple tasks or whose importance varies by context. A more refined scoring system could better capture these nuanced roles, improving DEAN’s accuracy in targeting the most relevant neurons.\nDirectly deactivating coupled neurons may unintentionally disrupt other important model functions, as these neurons could also contribute to additional tasks. To minimize this risk, the authors might consider techniques like partial suppression or dynamic weight adjustments, which would allow DEAN to address the fairness-privacy conflict without sacrificing the model’s general performance." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. See weakness 1.\n2. Did the authors consider other, more precise methods of neuron selection? Why did they ultimately choose this mechanism based on importance scores? Were other selection mechanisms tried and their effects compared?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. 
The proposed method can simultaneously improve the model's awareness of fairness and privacy protection by identifying and de-activating coupled neurons, without sacrificing the overall performance of the model. While traditional fine-tuning methods usually require a large amount of computational resources, especially in resource-poor scenarios, DEAN does not require additional training, which is highly efficient and innovative, and provides a new way of thinking for model optimization under resource-constrained conditions.\n2. The authors tested the DEAN approach on several LLMs of different sizes and architectures (e.g., Qwen2, Vicuna, Llama2, etc.) to ensure its broad applicability and reliability. By using different types of datasets such as Beavertails, Salad-bench and Alpaca, the authors were able to evaluate the performance of DEAN in terms of fairness and privacy preservation, and to examine its effectiveness for modeling different data environments and task requirements.\n3. The paper uses several charts to visualize the effects of DEAN, for example, to show the conflict between fairness and privacy, the performance of DEAN on different models, and so on. The table also lists the fairness and privacy enhancement of each model, which makes the experimental results more intuitive and convincing." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents an approach called DEAN that aims to mitigate the conflict between fairness and privacy-consciousness in large language models (LLMs) by de-activating coupled fairness and privacy neurons. It is found that enhancing privacy awareness through traditional fine-tuning methods leads to a decrease in fairness and vice versa. DEAN utilizes information theory to reduce the interplay between fairness and privacy, thereby enhancing the independence of the two. The authors conducted experiments using DEAN on several models (e.g., Qwen2-7B-Instruct, Llama2, etc.) 
as well as on three datasets with different dimensions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The de-activation operation may have side effects on other features of the model, such as affecting generation fluency or multitasking ability. Although the article evaluates the impact of DEAN on the overall performance of the model through several benchmarks (e.g., HellaSwag, Race, MMLU) in Table 3, generation fluency and multitasking capabilities were not specifically tested. It is recommended that the authors provide more specific assessments of generative fluency and multitasking, possibly through more specific generative tasks or benchmarks (e.g., generative coherence or cross-task accuracy metrics), to better understand the potential impact of DEAN on model performance.\n2. Although the importance scoring mechanism is simple and efficient, the lack of comparative analysis with other neuron selection methods may affect the optimal performance of DEAN. It is recommended that the authors further experiment with other neuron selection methods, e.g., using clustering algorithms (e.g., K-means or DBSCAN) to cluster neuron activation patterns and group neurons belonging to sensitive feature mappings into the same group; and using interpretive methods, such as SHAP or LIME (Local Interpretable Model-agnostic Explanations), to compute the contribution of each neuron in the fairness and privacy tasks. The authors can consult relevant paper references to compare the effects of different selection strategies on the de-activation effect to confirm whether the choice of importance scores is optimal. This will help to understand the differences in the performance of different methods in decoupling operations and may further enhance the performance of DEAN.\n3. 
The experiments mainly focused on generative tasks and lacked tests on common task types such as classification, question and answer, and sentiment analysis. This may limit the effectiveness of DEAN in real-world applications. It is recommended to test on a wider range of task types (e.g., classification, Q&A, sentiment analysis), and it is recommended to use publicly available benchmark datasets, such as IMDB, SQuAD (Q&A), and AG News (Classification), to help assess the fairness and privacy-preserving ability of DEAN more comprehensively. Despite the challenges of obtaining real data, data from real-world scenarios can better reflect DEAN's performance in real-world applications." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024dean,\ntitle={{DEAN}: Deactivating the Coupled Neurons to Mitigate Fairness-Privacy Conflicts in Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v4PnwdA056},\nnote={under review}\n}" }, "abstract": { "value": "Ensuring awareness of fairness and privacy in Large Language Models (LLMs) is critical. Interestingly, we discover a counter-intuitive trade-off phenomenon that enhancing an LLM's privacy awareness through Supervised Fine-Tuning (SFT) methods significantly decreases its fairness awareness with thousands of samples. To address this issue, inspired by the information theory, we introduce a training-free method to \textbf{DEA}ctivate the fairness and privacy coupled \textbf{N}eurons (\textbf{DEAN}), which theoretically and empirically decrease the mutual information between fairness and privacy awareness. 
Extensive experimental results demonstrate that DEAN eliminates the trade-off phenomenon and significantly improves LLMs' fairness and privacy awareness simultaneously, \\eg improving Qwen-2-7B-Instruct's fairness awareness by 12.2\\% and privacy awareness by 14.0\\%.\nMore crucially, DEAN remains robust and effective with limited annotated data or even when only malicious fine-tuning data is available, whereas SFT methods may fail to perform properly in such scenarios. We hope this study provides valuable insights into concurrently addressing fairness and privacy concerns in LLMs and can be integrated into comprehensive frameworks to develop more ethical and responsible AI systems. Our code is provided in the supplementary materials." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Models", "Fairness", "Privacy" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/cc64dafae7fb1e196576a18c4f7417363c10563d.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/1cbc82ba3db8d77d76f728d2bd3f949d0e01c1dc.zip" }, "title": { "value": "DEAN: Deactivating the Coupled Neurons to Mitigate Fairness-Privacy Conflicts in Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v593OaNePQ
Learning to Search from Demonstration Sequences
main
Active
planning;reasoning;learning to search;reinforcement learning;large language model
reinforcement learning
5;5;6;10
3;4;4;4
3;2;4;4
2;2;3;3
2;2;2;4
6.5
3.75
3.25
2.5
2.5
0.420084
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. How would you extend D-TSNs to use for solving stochastic decision making problems? What modification would be required in the current work to accommodate stochastic transitions and rewards?\n\n2. Could you provide more details on how computationally expensive D-TSN is and how does the method scale w.r.t. the action space size and the search tree depth?" }, "rating": { "value": 10 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. Novel Architecture: The paper proposes a novel neural network architecture, D-TSN, which embeds the inductive bias of a best-first search algorithm, allowing for the end-to-end learning of planning components from demonstration sequences. \n\n2. Joint Learning of Planning Components: D-TSN jointly learns the encoder, value function, and world model. This is advantageous when the world model is not given but is needed to be learned from data. \n\n3. Variance Reduction Technique: Authors use an effective variance reduction technique using a telescoping sum in the REINFORCE algorithm to addresses the high variance associated with policy gradient methods. \n\n4. 
Comprehensive Experiments: The method is applied to a wide variety of tasks, such as reasoning problems, navigation, and game environments, supporting the claim that it is versatile and effective across domains. \n\n5. Improved Performance: The authors show that D-TSN outperforms baselines, showing its problem solving performance in challenging tasks with limited supervision, especially in jointly learned world model settings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces the Differentiable Tree Search Network (D-TSN), a novel neural network architecture designed to learn search strategies from sequences of demonstrations without access to explicit search trees. D-TSN integrates the inductive bias of a best-first search algorithm into its structure, enabling the joint learning of essential planning submodules, including an encoder, value function, and world model. To construct the search tree, the authors employ a stochastic tree expansion policy, formulating it as a decision-making task optimized via the REINFORCE algorithm. They introduce a variance reduction technique using a telescoping sum to address high variance in gradient estimates. D-TSN is applicable in scenarios where the world model is known or needs to be jointly learned with a latent state space. The authors evaluate D-TSN on tasks from both scenarios, including the Game of 24, a 2D grid navigation task, and Procgen games. Experimental results demonstrate that D-TSN is effective, particularly when the world model with a latent state space is jointly learned, outperforming baselines in terms of success rate and generalization capabilities." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Limited to Deterministic Environments: The current implementation is restricted to deterministic decision-making problems with discrete action spaces.\n\n2. 
Computational Complexity: The computational complexity for the approach might be high, because it consists of constructing search trees and performing REINFORCE updates. This can be a problem especially when applied to deeper trees or larger action spaces.\n\n3. Scalability: scalability is not thoroughly analyzed for long-horizon tasks or higher-dimensional state spaces." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Why did the authors choose to use REINFORCE? Did the authors experiment with using methods like Gumbel-Softmax for sampling?\n- How are the compute budgets between model-based search, model-free Q network and D-TSN equalized for a fair comparison?\n- In Line 22, the authors say they “introduce an effective variance reduction technique”. Isn’t this just Guez et al. (2018)? The way this is written in some places suggests that this is novel.\n- Line 45,46 seems contradictory with lines 48,49. If learning from demonstration sequences fails due to compounding errors due to the agent getting into states not during training, then the training distribution of the proposed method is also not sufficiently covered by the training distribution. I understand the CQL term attempts to address this issue.\n- What are the scores of the PPG sub-optimal policy?\n- Table 3 reports Mean Scores and Mean Z-Scores, but no standard deviations or error bars?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- Builds on TreeQN and improves a significant limitation of the prior work, i.e., having a fixed tree structure. In reality, search algorithms should attempt to filter large action spaces and focus computation on promising variations in the tree. The proposed work gets around this limitation by sampling from the action space, and using REINFORCE to differentiate through the discontinuity of sampling.\n- Strong empirical evidence that the proposed method improves on TreeQN, and having the modules trained separately.\n- Strong ablation results showing the effectiveness of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a novel differentiable neural tree search architecture that can be learned directly from trajectories of data in a supervised manner. It essentially introduces a search-like inductive bias within the weights of the neural network. The proposed algorithm builds on TreeQN (Farquhar et al., 2018), and is crucially different since the proposed method allows building a tree structure that can stochastically sample from the action space, and not just be a fixed tree structure as in TreeQN. The authors empirically evaluate the strength of their approach by comparing against a Model-free Q network, a Model-based search method that trains the individual modules separately, and TreeQN, and report performance gains in various RL environments." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Seeing how the approach handles large action spaces remains an empirical question, since currently, there is no policy that directly outputs a distribution over actions, instead the method requires the application of the transition network and the reward network for every action, which is not scalable to settings like, say, Go.\n- The proposed method is mostly applicable to discrete action spaces with deterministic environments. Improving on this remains a future empirical question." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "+ How is the full expansion mechanism in this work not a shallow tree (in comparison to the TreeQN) that you mentioned in the RELATED WORKs section since you mentioned that you have a fixed depth in your search protocol?\n\n+ When the encoder, value, reward, and transition functions are unavailable, how do they get jointly optimized with the search algorithms?\n\n+ In paragraph III, the justification for why end-to-end learning improves the reliability of world-models used in preconditioning the search process is not given. 
Please address this.\n\n+ Lines 124-126: \"Dividing the network into these submodules reduces\ntotal learnable parameters and injects a strong search inductive bias into the network architecture, preventing overfitting to an arbitrary function that may align with the limited available training data.\" Where is the proof for this?\n\n+ Is there no way the illustration in Figure 2 can be moved to the main text from the Appendix? Probably as a subfigure on page one or so would be really nice! Also, I think the algorithm should justifiably be compressed (e.g. as a pseudocode or flow chart) somewhere in the main text. An elaborate version can be embedded in the Appendix if space is an issue.\n\n\n+ What metric are you using to report the measured quantities in Table 1? Can you include it in the heading/sub-headings?\n\n\n+ Appendix B, Lemma B.3: I'm sorry, is there supposed to be a running sum over all of x in the equation you wrote? \n\n+ Line 915, I think you meant Lemma B.3, not Theorem B.3, no?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "+ This paper introduces a differentiable tree search and planning network to alleviate the suboptimal search results that arise when using only action and observation sequences to learn to plan in many modern data-driven planning problems. The tree search basically synergizes submodules in the form of an encoder, value, reward and transition functions from a network by inducing the bias of a best-first search into the neural architecture.\n\n+ I love the motivation stated for constructing a search tree expansion policy that is stochastic in nature. But the justification for why it ensures the continuity of the loss function when the search tree is not fixed is missing. 
I am referring to lines 55-56.\n\n+ I love the conceptualization and the synergy of REINFORCE, mathematical mechanisms to reduce variance in the REINFORCE Loss owing to possibly biased estimates, the continuity proof (though I have questions hanging over the proof of Lemma B.3 to be fully satisfied with this proposition).\n\n+ I love that the conclusion section meticulously summarizes the problem, contributions, and shortcomings. Kudos to the authors." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a differentiable tree search and planning network to alleviate the suboptimal search results that arise when using only action and observation sequences to learn to plan in many modern data-driven planning problems. The tree search basically synergizes submodules in the form of an encoder, value, reward and transition functions from a network by inducing the bias of a best-first search into the neural architecture.\n\nArguing that for a slight perturbation in network parameters, the implementation of the loss function in equation (2) could generate a tree structure that causes the loss function to become noisy, the authors equated this to a lack of continuity in the loss function space. I think they mistook stochasticity in gradient propagation for discontinuity. The whole premise of the contribution of the paper is based on this assumption that is barely proven to be true or false before the authors dived into a host of mathematical analysis that resulted in the loss function on Line 272 (please number all your equations to allow for easy referencing in the future). As a result of this key oversight, it is not clear to this reviewer that the whole contribution of the paper is warranted. \n\nAs a result, I am assigning a score of fair to this contribution until I see a well-laid-out argument for why this new invention is necessary." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While I do love the mathematical contributions of the paper, I think there are essentials that are missing in the logic, organization, and flow of arguments that require a thorough review before this paper makes it into an acceptance. A principal one is the following (mentioned in the summary box but repeated here. The authors would do well to alleviate my concerns): Arguing that for a slight perturbation in network parameters, the implementation of the loss function in wuation (2) could generate a tree structure that causes the loss function to become noisy, the authors equated this to a lack of continuity in the loss function space. I think they mistook stochasticity in gradient propagation with discontinuity. The whole premise of the contribution of the paper is based on this assumption that is barely proven to be true or false before the authors dived into a host of mathematical analysis that resulted in the loss function on Line 272 (please number all your equations to allow for easy referencing in the future). \n\n+ The claim in the last paragraph of Theorem 3.1 that slight changes in the network parameters could cause discontinuity in the tree structure seems anecdotal and not backed up by a solid proof. I would love to see a concrete reasoning (analytical proof or abundant empirical proofs) behind this claim that warrants section 3.5 and Appendix C.\n\n+ Grammatical errors fly out of the page hither and yon throughout the paper; the uthors would do well to carefully organize their arguments, present their logic convincingly throughout the paper, and punctuate and label every equation appropriately!\n\n+ The logic in the paper could use a more thoughful presentation. 
Here is an example critique:\n - In the \"introduction\", it is stated in the first paragraph that constructing a search tree from expert data is infeasible due to lack of practicality or scarcity. The authors make an assumption that a search tree is a principal prerequisite for information retrieval (IR) without any justification as to why it may be better than alternative IR methods. Then in the second paragraph, they mentioned how search and planning could be better executed in the presence of a simulator or world model. While I find this premise alluring, I find it disingenuous that the authors claimed that the search could be incomplete because the search process may visit regions unexplored during training. I think the reasoning here is incomplete and should be revisited by the authors." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Do differences in the convergence rates of submodules exist? If present, could these differences impact the overall network's performance?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "i) This work introduces the integration of the algorithmic inductive bias from the best-first search algorithm into a differentiable network, enabling automatic guidance of the tree search process through gradient backpropagation.\n\nii) This work underscores the significance of maintaining continuity of both the parameter space and the loss function which are dependent on the tree structure. To address this, the authors advocate for the adoption of a stochastic expansion policy to fulfill these prerequisites. \n\niii) The experiment results are compelling. And it is particularly noteworthy to see the success achieved in tasks involving LLM." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This study presents a new differentiable tree search network comprising multiple submodules to estimate Q values. The network is optimized by minimizing the distance between the estimated Q values and the ground-truth Q values within a provided dataset. To address the substantial changes in the tree structure resulting from updates to the Q value function, a stochastic expansion policy is introduced to guide expansions in the search process. This policy ensures the continuity of the parameter space irrespective of changes in the tree structure." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "i) Previous methods have introduced diverse differential network architectures to integrate different search algorithms into networks, as mentioned in the related works. It is unsurprising that integrating the best-first algorithm into the network has also yielded success. 
Thus, it would be beneficial to compare the architectural variances between this method and previous methodologies.\n\n\nii) This work trains the overall network using an offline dataset. However, as extensively deliberated in preceding offline RL studies, this paradigm may get stuck when facing out-of-distribution states or actions. Thus, a comparative analysis between online training and offline training for the newly proposed network architecture could provide valuable insights." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a method that constructs search tree in a differetiable manner, and can be trained from just sequence demonstrations." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning to Search from Demonstration Sequences},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v593OaNePQ},\nnote={under review}\n}" }, "abstract": { "value": "Search and planning are essential for solving many real-world problems. However, in numerous learning scenarios, only action-observation sequences, such as demonstrations or instructions sequences, are available for learning. Relying solely on supervised learning with these sequences can lead to sub-optimal performance due to the vast, unseen search space encountered during training. In this paper, we introduce Differentiable Tree Search Network (D-TSN), a novel neural network architecture that learns to construct search trees from just sequences of demonstrations by performing gradient descent on a best-first search tree construction algorithm. D-TSN enables the joint learning of submodules, including an encoder, value function, and world model, which are essential for planning. To construct the search tree, we employ a stochastic tree expansion policy and formulate it as another decision-making task. 
Then, we optimize the tree expansion policy via REINFORCE, and introduce an effective variance reduction technique for the gradient computation. D-TSN can be applied to problems with a known world model or to scenarios where it needs to jointly learn a world model with a latent state space. We study problems from these two scenarios, including Game of 24, 2D grid navigation, and Procgen games, to understand when is D-TSN more helpful. Through our experiments, we show that D-TSN is effective, especially when the world model with a latent state space is jointly learned." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "planning", "reasoning", "learning to search", "reinforcement learning", "large language model" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/f607d2ad853f8a143e1081061c8daa0635beeadf.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Learning to Search from Demonstration Sequences" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v5BouOktUP
Multivariate Time-series Forecasting with SPACE: Series Prediction Augmented by Causality Estimation
main
Active
Time Series Forecasting;Causal Learning;Transfer Entropy;Graph Based Learning
causal reasoning
3;3;3;5
4;4;5;4
3;2;2;2
2;2;2;3
1;3;3;3
3.5
4.25
2.25
2.25
2.5
-0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Causality Understanding:\n- How does the paper differentiate between correlation and causation, particularly in the first paragraph's examples?\n- Can we truly claim a direct causal relationship between sporting events and electricity usage, or is this more of a correlation due to viewer behavior?\n2. Terminology Clarity:\n- What specific meaning does \"dissimilar information\" convey in Figure 1?\n- What evidence supports the superiority of causal approaches over similarity-based methods?\n- How are these differences quantified and demonstrated?\n3. Generalizability Claims:\n- What specific types of time series data does this approach effectively handle?\n- What evidence supports the claim of applicability to \"a large class of time series data\"?\n- What are the limitations or boundaries of this approach?\n4. Model Evaluation:\n- How do individual components, particularly the residual connections, contribute to the model's performance?\n- What would ablation studies reveal about the preservation of time step information?\n- Which components are essential versus optional for model success?\n5. 
System Complexity:\n- How does the model account for indirect relationships in weather systems, such as the cloud cover-precipitation-temperature chain?\n- How are feedback loops and mutual influences between variables handled?\n- Does the model oversimplify complex environmental systems?\n6. Causality Validation:\n- What methods were used to establish true causality rather than correlation?\n- How were potential confounding factors identified and controlled?\n- What role did temporal sequences play in establishing causal relationships?\n- Were controlled experiments or alternative validation methods considered?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper introduces SPACE (Series Prediction Augmented by Causality Estimation), a novel model for multivariate time series forecasting. SPACE addresses three key characteristics of multivariate time series: causal relationships rather than mere similarities, information across multiple independent factors, and inherent temporal dependencies. The model integrates several components, including a Sequence Enhancer using attention mechanisms, a Cross-TE module that computes transfer entropy to capture causal relationships, and a Causal Graph Neural Network (CGNN) that uses the causality matrix as an adjacency matrix. The authors argue that conventional time series analysis methods often fail to capture these complex relationships, leading to incomplete or misleading conclusions.\n\nExperimental results demonstrate that SPACE outperforms eight state-of-the-art baseline models on nine real-world datasets. The model shows improved performance across various prediction lengths and datasets, including weather, electricity, and financial data. 
The authors emphasize that the integration of causal information is essential for improving forecasting performance in complex, real-world time series data. Additionally, they claim that SPACE enhances the interpretability of forecasts, especially on weather-related data, by capturing and visualizing causal relationships between different variables." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces SPACE (Series Prediction Augmented by Causality Estimation), a novel model for multivariate time series forecasting. SPACE addresses three key characteristics of multivariate time series: causal relationships rather than mere similarities, information across multiple independent factors, and inherent temporal dependencies. The model integrates several components, including a Sequence Enhancer using attention mechanisms, a Cross-TE module that computes transfer entropy to capture causal relationships, and a Causal Graph Neural Network (CGNN) that uses the causality matrix as an adjacency matrix. The authors argue that conventional time series analysis methods often fail to capture these complex relationships, leading to incomplete or misleading conclusions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Oversimplification of causality: The paper appears to oversimplify the concept of causality and its role in prediction tasks, potentially conflating correlation with causation in some instances. In paragraph 1, the example does not clearly demonstrate direct cause-and-effect relationships. For instance, a sporting event doesn’t directly cause increased electricity usage - it’s correlated with increased usage due to more people using electrical devices to watch the event.\n\n2. 
Lack of precision in terminology: The use of terms like “dissimilar information” in Figure 1 is vague and doesn’t clearly explain what causal information might be captured that similarity-based approaches miss. In addition, the paper seems to imply that causal approaches are superior to similarity-based approaches in all cases, without providing sufficient evidence for this claim.\n\n3. Overgeneralization: The authors make broad claims in their first contribution about the applicability of their approach to “a large class of time series data” without specifying the types of data or providing adequate evidence.\n\n4. Absence of ablation studies: There’s no mention of ablation studies to demonstrate the impact of specific components, such as the residual connections (line 236-238), on the model’s performance and preservation of time step information.\n\n5. Oversimplification of complex systems: The paper seems to underestimate the complexity of systems like weather (Figure 3), suggesting that simple causal relationships can capture their full complexity. In this example: Some of the described relationships may be indirect. For example, increased cloud cover associated with precipitation could lead to decreased solar radiation and temperature, rather than the precipitation itself directly causing these changes. In addition, weather systems often involve feedback loops where variables influence each other in complex ways. For instance, increased humidity can lead to more precipitation, which in turn affects humidity levels.\n\n6. Insufficient rigor in establishing causality: The paper doesn’t describe a sufficiently rigorous approach to establishing true causality, which would require consideration of potential mechanisms, controlled experiments (where possible), and careful examination of temporal sequences and potential confounding factors." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The datasets used in the experiments are rather small, toy sets. What are the experiment results of SPACE on larger datasets, such as NY taxi or climate datasets? \n\nCan you add more recent baselines, such as TSmixture? https://arxiv.org/abs/2303.06053" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "+ Causal structures are an important aspect of time series forecasting. Most existing work does not take them into consideration\n+ The authors consider scalability when designing the model. \n+ The proposed idea is reasonable and shows some encouraging experiment results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents an end-to-end trainable Series Prediction model Augmented by Causality Estimation, namely SPACE, to incorporate temporal dependencies and causal relationships in time series forecasting. SPACE utilizes a temporal embedding and a transfer entropy module in the hope of capturing the causal structures within multivariate time series for better forecasting." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While considering causal structures is a good idea, the way SPACE infers the causal structures is not fully convincing. The paper goes a long way to discuss issues with existing work, especially those utilizing Granger causality, and argues for transfer entropy. Since transfer entropy (TE) is difficult to calculate, the authors proposed to use pseudo TE, which assumes that \"the time series follow the normal distribution\". This is an extremely strong assumption and not an \"acceptable assumption for real-world time series\". \n\n- The improvement SPACE achieves over state-of-the-art methods is very marginal. It is not clear whether it is worthwhile to go with such a complicated model for such limited improvement." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. Can the authors clarify the specific problem in time-series causality extraction that SPACE addresses, beyond combining existing modules?\n2. Could additional ablation studies be provided to isolate and validate the causal contributions of Cross-TE versus standard attention mechanisms?\n3. Are there any plans to release additional experiments or dataset evaluations, specifically with *Traffic* and *Electricity*, to strengthen model generalizability?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. **Innovative Approach**: Integrates causality estimation with multivariate time-series forecasting, which is a unique perspective.\n2. **Real-World Application**: Demonstrates effectiveness in real-world, multivariate forecasting tasks, which may offer practical benefits." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents the SPACE model, an approach for multivariate time-series forecasting that integrates causality estimation to capture complex interdependencies." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Dataset Coverage**: Commonly used datasets, such as *Traffic* and *Electricity*, are missing from the evaluation, limiting the scope of comparison.\n2. **Efficiency Claims**: While SPACE is described as highly efficient, there is no empirical comparison of runtime with established models like iTransformer and DLinear.\n3. **Unverified Claims in Figures**: Figure 1 implies that the attention mechanism only focuses on similar series, a claim not substantiated by experiments or theory.\n4. **Notation and Writing Inconsistencies**: The paper has numerous small errors in notation (e.g., non-italicized symbols) and inconsistent symbol usage, particularly in the methodology section (e.g., line 276 “N” should be italic, lines 274 and 284 should use “P_N,” and line 285 should italicize “T”). Such inconsistencies affect readability and technical accuracy.\n5. **Causal Adjacency Matrix Evaluation**: It is unclear in Figure 3 how the learned adjacency matrix by Cross-TE outperforms traditional attention in identifying causal relationships. 
Figure 1 claims attention ignores dissimilar information, yet Figure 3 does not convincingly demonstrate that Cross-TE resolves this.\n6. **Clarity in Equation 1**: Equation 1 lacks clarity in summation notation, as it’s unclear which part of the formula is summed over. Additionally, the notation for \\(i\\) as a time step index raises questions about why an additional superscript is needed to denote the previous time.\n7. **Inconsistencies in Variable Definition**: In the problem definition, \\(x\\) is defined as a time series, \\(X\\) as historical sequences, and \\(Y\\) as future sequences. However, line 263 redefines \\(y\\) as historical, which can be misleading and suggests potential label leakage. The overall presentation of the methodology is unclear and could benefit from consistent use of symbols and fonts.\n8. **Perceived Model Complexity**: Without clear experimental support for its causal relationship advantages, SPACE may appear as a combination of existing techniques (PTE, attention, GCN, and Time Mixers) without sufficient innovation in causality extraction." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the Weaknesses part." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "-\tStrong motivation: The paper presents an interesting method by providing a causal perspective for analyzing time series.\n-\tGood clarity: The writing is clear and well-structured, making the paper easy to follow and understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces SPACE to forecast MTS enhanced by causality estimation. \nSpecifically, this method captures causal relationships by a transfer entropy-based Cross-TE module and a causal GNN." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Lacks detailed comparisons with highly related works. \n - The proposed method relies heavily on patch mixers and time mixers (as described in Equation 6). However, state-of-the-art mixer-based methods, e.g., TimeMixer [1] and Timexer [2], are neither discussed in the related work nor included in the experimental comparisons.\n - Since one of the main contributions is about causal GNN, some GNN related works, e.g., CrossGNN [3], should be mentioned and compared.\n- Lacks a comprehensive efficiency study.\n - The method introduces the fast-pTE algorithm and emphasizes its efficiency advantage in the contributions. It is recommended to provide detailed theoretical analysis or discussion and conduct corresponding ablation studies about variants with TE, pTE, and fast-pTE. \n - An efficiency study comparing the proposed work with SOTA works is also necessary, which can better demonstrate the practical benefits of the proposed work.\n- Needs more evaluations of the learned relationships. 
\n - The authors claim that Cross TE is “designed to dynamically learn and adapt to evolving causal dependencies while preserving memory of past relationships.” However, the current evaluations are insufficient to convincingly demonstrate this claim. Relying on a single sample of the learned adjacency matrix (as shown in Fig. 3) is not enough to explain the dynamic nature of causality. Additional evaluations focusing on dynamic causal dependencies should be provided.\n - It is also recommended to include examples from real-world datasets that showcase causal relationships. Presenting these examples in a visual format similar to Fig. 1 would help clarify the learned relationships.\n- Should enhance the presentation.\n - In Fig. 2, subfigures (a) and (b) seem unnecessary. It is more valuable to provide a detailed illustration of the Entropy Graph construction, as it is closely related to the paper’s main contribution.\n - For Fig. 3, adding variable names instead of numbers on the axes can enhance clarity and understanding.\n\n[1] Wang, S., Wu, H., Shi, X., Hu, T., Luo, H., Ma, L., ... & Zhou, J. 2024. TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting. In The Twelfth International Conference on Learning Representations.\n\n[2] Wang, Y., Wu, H., Dong, J., Liu, Y., Qiu, Y., Zhang, H., ... & Long, M. 2024. Timexer: Empowering transformers for time series forecasting with exogenous variables. arXiv preprint arXiv:2402.19072.\n\n[3] Huang, Q., Shen, L., Zhang, R., Ding, S., Wang, B., Zhou, Z., & Wang, Y. 2023. CrossGNN: Confronting noisy multivariate time series via cross interaction refinement. Advances in Neural Information Processing Systems, 36, 46885-46902." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024multivariate,\ntitle={Multivariate Time-series Forecasting with {SPACE}: Series Prediction Augmented by Causality Estimation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v5BouOktUP},\nnote={under review}\n}" }, "abstract": { "value": "The analysis of multivariate time series (MTS) presents a complex yet crucial task with substantial applications in areas such as weather forecasting, policy formulation, and stock market prediction. It is important to highlight three key characteristics of MTS that contribute to the challenging and multifaceted nature of their analysis: (i) their interrelationships are represented through causal relationships rather than mere similarities; (ii) they convey information across multiple independent factors; and (iii) their dynamics often arise from inherent temporal dependencies. While conventional time series analysis frameworks often fail to capture one or more of these aspects, resulting in incomplete or even misleading conclusions, we propose an end-to-end trainable $\\textbf{S}$eries $\\textbf{P}$rediction model $\\textbf{A}$ugmented by $\\textbf{C}$ausality $\\textbf{E}$stimation (SPACE) to address these limitations. This model effectively incorporates temporal dependencies and causal relationships, featuring a temporal embedding and a transfer entropy-based Cross-TE module designed to enhance predictions through causality-augmented mechanisms. Experiments demonstrate that SPACE achieves state-of-the-art results on challenging real-world time series prediction tasks, showing its effectiveness and versatility." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Time Series Forecasting", "Causal Learning", "Transfer Entropy", "Graph Based Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/fd62db3a4e937fb3bc598cbae54038fef716c955.pdf" }, "presentation": null, "primary_area": { "value": "causal reasoning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Multivariate Time-series Forecasting with SPACE: Series Prediction Augmented by Causality Estimation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v5JrYUdMxc
Hybrid Fourier Score Distillation for Efficient One Image to 3D Object Generation
main
Withdraw
3D Generation;One Image to 3D Generation
generative models
Shuzhou Yang;Yu Wang;Haijie LI;Jiarui Meng;Yanmin Wu;Xiandong MENG;Jian Zhang
~Shuzhou_Yang1;~Yu_Wang85;~Haijie_LI2;~Jiarui_Meng1;~Yanmin_Wu1;~Xiandong_MENG1;~Jian_Zhang22
3;3;3;5
4;5;5;4
2;1;2;3
2;2;1;3
3;3;2;4
3.5
4.5
2
2
3
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Do you consider applying the high-pass filter over the Fourier domain?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. 
The exposition is very clear, and the work is comfortable to read. \n2. The motivation of the approach is clear and reasonable. \n3. The overall framework design seems to be natural and reasonable following the basic idea.\n4. The idea is validated on two frameworks (DreamGaussian and DreamFusion), generating consistent performance improvement." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The submission proposes a pipeline to generate 3D Gaussian representation from a single image using a score distillation-based method by generating high-frequency details leveraging the Fourier domain and ensuring cross-view consistency leveraging the novel view synthesis ability of Zero-1-to-3. To the reviewer's knowledge, the perspective is new, and the results seem promising, among score distillation-based methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The field of 3D generation has been shifting from score distillation approaches toward feed-forward methods. The manuscript would benefit from a more comprehensive discussion of this methodological evolution and its implications for the current work.\n2. While the paper mentions Wonder3D, it lacks direct experimental comparisons with this significant baseline. Additionally, Unique3D, another prominent method known for high-quality frontal appearance generation, should be included in the comparative analysis. Both Wonder3D and Unique3D have publicly available implementations, making such comparisons feasible." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "L522: I suggest using threestudio's implementation instead, which is a common practice and performs much faster and better than Zero123's results in the paper." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The method is well-motivated by insights from the frequency domain.\n\n2. It proposes a better alternative loss for SDS, which performs better when combined with Zero123 SDS." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a method for single image to 3D named Fourier123. The method inherits from Magic123 but changes the SDS loss to the frequency domain and only keeps the amplitude components. The adapted version is coined hy-FSD. It is applied on DreamFusion and DreamGaussian. It achieves better CLIP-Sim than the original SDS." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Low-quality texture: The appearance of 3D models looks noisy, even on colors that are supposed to be uniform. It does not look preferable to InstantMesh, which is much faster.\n\n2. As an optimization-based method, it is much slower than inference-only methods like Stable Fast 3D, which only takes seconds and may have better geometry and appearances.\n\n3. Insufficient quantitative experiments: No 3D metrics are reported, such as CD, F-Score, Vol. IoU. It is only evaluated on 100 objects. 
Since most of the competitive baselines take no longer than a few minutes per shape, it would be proper to evaluate at least a few hundred objects on different datasets like GSO, Omni3D, etc." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "DFT amplitudes of a single image (or the $\\epsilon$ noise) are very noisy. I'm unsure if using such a noisy gradient could really improve the score distillation quality fundamentally.\n\nFor rebuttal, please address the weaknesses listed above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- Using DFT and defining a score distillation loss on the frequency amplitudes is a very original attempt at restoring texture details in the absence of accurate spatial (phase) alignment. Modeling frequency domain features is an interesting direction in generative models and I believe it's definitely worth exploring.\n\n- The writing and clarity of the manuscript are good in general." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This manuscript proposes an image-to-3D generation pipeline using pretrained diffusion models. 
The main idea is, while generic image diffusion models (SD) and finetuned novel view models (Zero-1-to-3) can be used jointly to optimize the 3D content via score distillation, SD provides better texture details while Zero-1-to-3 is 3D consistent but blurry. In order to better combine the strengths of these two diffusion models, the manuscript proposes Fourier score distillation (FSD), i.e., defining the score distillation loss on the frequency amplitudes. FSD is therefore applied to the SD model for preserving the texture details." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- One of the central assumptions, that novel-view models produce over-smooth results, is very outdated. This paper has only experimented with Zero-1-to-3, which was the first ever generalizable novel-view generative model. Nowadays we have Zero123++, CAT3D, and a lot of video models, some of which are open-source as well. These more recent models can already generate detailed novel views while also being more 3D consistent than Zero-1-to-3. \n\n- While the proposed Fourier score distillation method is claimed to be able to improve the visual details, I find the qualitative results very underwhelming. All images in figure 4 look blurry and cartoonish to me. In Fig. 5, Magic123 clearly has better texture details, despite some failure cases that can be attributed to weak global 3D consistency.\n\n- Quantitatively, the evaluation metrics also cannot directly reflect the effect of the proposed FSD in improving texture details. CLIP-similarity is used for ablation studies but it mostly reflects the overall semantic alignment rather than texture details. The user studies also don't provide an aspect for texture details. 
In general, none of the experimental results can adequately support the central claim of the paper.\n\n- There seems to be a small error with Eq 9: if z is the denoising output as defined in L231, then there should also be a Jacobian term that backpropagates the gradient from the amplitudes to the spatial domain." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Refer to the weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The main strength is that the utilization of the amplitude of the Fourier transform in the SDS loss is novel. This paper claims that Zero123 produces better structures while SD produces better details. Then, the proposed method supervises renderings using only the amplitude components of the predicted noises to extract structures of generation. This idea is overall novel and new to me." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new SDS-based method to generate 3D models from single-view images. 
\nThe key idea is to incorporate both Zero123 and the original Stable Diffusion in the distillation process.\nZero123 will be in charge of generating structures in the distillation, so the proposed method would use the amplitude of the predicted noise to optimize the rendering. Stable Diffusion will be in charge of generating fine details, so a vanilla SDS is adopted.\nExperiments demonstrate that the proposed distillation on amplitudes improves the results and shows better performance than baseline methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weakness is that the experiments are not convincing enough to demonstrate the effectiveness of the idea.\n\n1. Lack of discussion on multiview generation methods. Distilling multiview diffusion models like MVDream often produces better results than those only using SD or Zero123, which is not discussed in the paper. Recent 3D generation methods are almost dominated by these multiview diffusion models and though the proposed method achieves better results, it does not show the potential to outperform the existing multiview diffusion methods.\n2. The claim that Zero123 always has better structures is not well demonstrated. Only one example in Fig. 2 is shown. Even if this were validated by more examples, this observation may be strongly limited to the specific trained Zero123 model instead of being a general and fundamental attribute of any other diffusion model. Thus, the observation will be too restrictive with limited impact.\n3. The results of InstantMesh and direct distillation of Zero123 are not convincing. According to my experience distilling Zero123 using ThreeStudio, the results of Zero123 would be much better than the qualitative results shown in Fig. 5. I strongly suggest the authors double-check whether the implementation of the baseline method is correct or not." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "Using both 2D and 3D generation priors to generate 3D from a single image with hybrid fourier score distillation" }, "_bibtex": { "value": "@misc{\nyang2024hybrid,\ntitle={Hybrid Fourier Score Distillation for Efficient One Image to 3D Object Generation},\nauthor={Shuzhou Yang and Yu Wang and Haijie LI and Jiarui Meng and Yanmin Wu and Xiandong MENG and Jian Zhang},\nyear={2024},\nurl={https://openreview.net/forum?id=v5JrYUdMxc}\n}" }, "abstract": { "value": "Single image-to-3D generation is pivotal for crafting controllable 3D assets. Given its under-constrained nature, we attempt to leverage 3D geometric priors from a novel view diffusion model and 2D appearance priors from an image generation model to guide the optimization process. We note that there is a disparity between the generation priors of these two diffusion models, leading to their different appearance outputs. Specifically, image generation models tend to deliver more detailed visuals, whereas novel view models produce consistent yet over-smooth results across different views. Directly combining them leads to suboptimal effects due to their appearance conflicts. Hence, we propose a 2D-3D **hy**brid **F**ourier **S**core **D**istillation objective function, **hy-FSD**. It optimizes 3D Gaussians using 3D priors in spatial domain to ensure geometric consistency, while exploiting 2D priors in the frequency domain through Fourier transform for better visual quality. hy-FSD can be integrated into existing 3D generation methods and produce significant performance gains. With this technique, we further develop an image-to-3D generation pipeline to create high-quality 3D objects within one minute, named **Fourier123**. Extensive experiments demonstrate that Fourier123 excels in efficient generation with rapid convergence speed and visually-friendly generation results." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Shuzhou_Yang1", "~Yu_Wang85", "~Haijie_LI2", "~Jiarui_Meng1", "~Yanmin_Wu1", "~Xiandong_MENG1", "~Jian_Zhang22" ] }, "authors": { "value": [ "Shuzhou Yang", "Yu Wang", "Haijie LI", "Jiarui Meng", "Yanmin Wu", "Xiandong MENG", "Jian Zhang" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "3D Generation", "One Image to 3D Generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": { "value": "yang|hybrid_fourier_score_distillation_for_efficient_one_image_to_3d_object_generation" }, "pdf": { "value": "/pdf/14eedcb0a99030c137fb79d77db3c43ba637a5f0.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/b4ebc2df6ce7a0032b75a34082f67eb4334c6c0d.zip" }, "title": { "value": "Hybrid Fourier Score Distillation for Efficient One Image to 3D Object Generation" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v5bK7cQch3
Learning 3D Medical Image Models From Brain Functional Connectivity Network Supervision For Mental Disorder Diagnosis
main
Active
3D medical image;functional connectivity network;contrastive learning;mental disease diagnosis
applications to neuroscience & cognitive science
3;5;5;5
4;4;4;3
2;2;2;2
2;2;2;2
3;2;2;2
4.5
3.75
2
2
2.25
-0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Was the linear probing of CINP performed on concatenated MRI and FCN embeddings, or was it based on one of the modalities?
\n2. Regarding network prompting, what is meant by partitioning all samples into 5/10 subsets? What is the purpose of this partitioning? Additionally, could the authors clarify why they chose to use 10% or 50% of the data? Is this to assess retrieval performance in a low-data regime? It would also be helpful to explain why the results are better when using 10% of the training data compared to using 100%. Also, the number 29.33% seems inconsistent with the figures in Table 3." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The authors effectively describe the motivation behind this study and highlight the potential contributions of integrating contrastive pretraining with MRI and FCN data. The approach of cross-modal contrastive learning is well-founded and offers a solid framework for using information from both modalities to enhance diagnostic accuracy while also enabling retrieval/diagnosis when one of the modalities is missing.
\n2. The three objective functions are well-described, and the authors provide detailed ablation studies, which contribute to a better understanding of their impact on model performance.
\n3. The clear delineation between pretraining and evaluation sets facilitates relatively fair comparisons across out-of-domain datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors have proposed an interesting contrastive pretraining framework that utilizes 3D T1 MRI and functional connectivity networks (FCNs) derived from fMRI data to learn robust representations for mental disorder diagnosis. The model has been evaluated in both linear probing and retrieval settings, showcasing intriguing ideas. However, the organization of the paper and the presentation of results could be enhanced." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The writing could be improved to enhance the presentation of results. For instance, the experimental setup for network prompting is somewhat challenging to follow, as detailed in my questions below.
\n2. The authors hypothesize that the “CINP” model has potential for improvement through fine-tuning; it would be better to directly include corresponding results in the tables for a more comprehensive understanding.
\n3. The comparisons are primarily between CINP and single-modality models (sMRI or FCN). There is a lack of discussion and direct comparisons with existing multi-modal methods for mental health diagnosis, both in linear probing and fine-tuning contexts. At least, some consensus on FCN and SSP-based model predictions would allow for fairer comparisons.
\n4. Although several metrics are presented, the authors did not discuss in detail the differences, especially when two metrics offer contrasting results.
\n5. The authors did show the advantages of pretraining over simply fine-tuning a model directly on the evaluation dataset that utilizes both modalities as input. This is an important aspect to demonstrate the value of pretraining. Further improvements could include testing in a low-data regime to see if pretraining can reduce data requirements for subsequent fine-tuning.
\n6. While improvements are shown, the absolute values of metrics appear low for potential clinical applications. Providing context on results from the literature for the same task or similar datasets would help readers unfamiliar with this specific field better interpret the model's performance." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Please see weaknesses above.\n- How were the confidence intervals computed?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The approach appears original in the way it allows the diagnosis of psychiatric disorders based on the patient's T1w MRI only. At the inference stage, two components are required: i) the T1w MR image of the patient to be diagnose and ii) fMRI data of patients with known diagnosis arranged in different diagnostic classes. The predicted diagnosis corresponds to one for which the similarity between the T1 and fMRI embedding was the highest.\n- This particular approach seems novel." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a contrastive learning approach to learn from both fMRI and anatomical T1w MRI with the aim to differentiate various psychiatric disorders." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper is not always easy to follow, in particular the Quantitative results section. Careful proofreading would help.\n- It is not entirely clear how the ABIDE, ADHD and SRPBS data sets were split and used during the evaluation: i) were the splits stratified, and if so on what criteria, ii) were the results of the ablation study displayed in Table 4 obtained on the test set (and so was hyper-parameter selection based on the test set)?\n- The diagnostic classes are not balanced and no metric adapted to this scenario is used to assess the performance.\n- The references of the first paragraph of the introduction mostly do not seem appropriate:\n - 'Over recent years, there has been growing evidence that mental disorders arise from dysfunction of interconnected patterns of regions-of-interest (ROIs) in the whole brain (Krishna et al., 2023) […].' This paper is about glioblastoma; it has nothing to do with fMRI or mental disorders.\n - '[…] fMRI-derived functional connectivity network (FCN) […] has received considerable attention in diagnosis of mental disorders (Yang et al., 2021; Bastos & Schoffelen, 2016) […].' The first paper is about diffusion MRI and the second one describes functional connectivity analysis in general and is not focused at all on mental disorders.\n- The organisation of the paper does not seem optimal. I do not see the point of having a Related Work section at the end of the paper knowing that the methods are only briefly described and no conclusion is being drawn.\n- Several works with similar aims should be cited and discussed, e.g.\n - He, Zhibin, et al. "F2TNet: FMRI to T1w MRI Knowledge Transfer Network for Brain Multi-phenotype Prediction." International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer Nature Switzerland, 2024.\n - Fedorov, Alex, et al. 
\"Self-supervised multimodal learning for group inferences from MRI data: Discovering disorder-relevant brain regions and multimodal links.\" NeuroImage 285 (2024): 120485.\n - Li, Tongtong, et al. \"Automated diagnosis of major depressive disorder with multi-modal MRIs based on contrastive learning: a few-shot study.\" IEEE Transactions on Neural Systems and Rehabilitation Engineering (2024)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "My main concern is with the rationale behind using contrastive loss in CINP to model sMRI and fMRI representations. Based on my knowledge, there is a clear distinction between neuroimaging and image-text domains. In the image-text domain, it makes sense to maximize the differences between different categories while minimizing the differences within the same category. For instance, aligning a cat’s textual description with its image, while creating a contrast with dog-related text and images, is logical. However, in neuroimaging studies, why should we align sMRI and fMRI feature representations in this way? And why should we increase inter-subject differences?\n\nI also note that a healthy cohort was used for CINP’s pre-training. 
The differences within a healthy population do not fall into distinct categorical boundaries, so it is unclear what meaningful patterns are learned by maximizing inter-subject representation differences. Brain imaging features of healthy individuals are generally quite similar in terms of overall structure and function. By maximizing these differences within healthy individuals, the model may pick up biologically insignificant or irrelevant details, diverting its focus from core features. Such differences are more likely to be noise than meaningful information. In contrastive learning, distinct category boundaries are typically required to generate positive and negative samples, but healthy individuals do not naturally fall into clear categories, making the construction of inter-subject contrasts somewhat artificial. \n\nAdditionally, enforcing a “strict alignment” between sMRI and fMRI features may lack generalizability. For mental disorders, patients do not always exhibit abnormalities in both sMRI and fMRI simultaneously. For example, a patient might have a normal gray matter thickness in the prefrontal cortex as shown by sMRI, yet fMRI may reveal weakened functional connectivity between the prefrontal cortex and other regions (such as the parietal lobe or hippocampus). In such cases, a “strict alignment“ seems less appropriate. Similarly, even among healthy individuals, while there may be some correlation between gray matter thickness and functional connectivity, it is not necessarily consistent. Forcing alignment within a healthy cohort may lead the model to mistakenly assume a strong correlation between structural and functional features, thereby overlooking the natural variability and dynamic nature of functional connectivity. \n\nI look forward to seeing the authors' response and would consider raising my score if they can adequately address my concerns." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper addresses a significant problem in mental disorder diagnosis by leveraging both structural and functional MRI data, a topic relevant and valuable to the ICLR community.\n2. The figures and descriptions in the paper are well-organized and clear, which aids in understanding the proposed approach and the CINP framework.\n3. The writing in the paper is clear, making the methodology and results easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the problem of mental disorder diagnosis using multimodal MRI data. Specifically, it proposes a framework called CINP (Contrastive Image-Network Pre-training), which applies contrastive learning between 3D T1-weighted (T1w) MRI and functional connectivity networks (FCNs) derived from fMRI. CINP aims to create a joint latent space integrating functional and structural information, enhancing diagnostic capabilities. During pre-training, the framework incorporates masked image modeling and network-image matching losses to improve modality alignment and representation quality. Moreover, CINP includes a network prompting protocol, which enables the use of 3D T1w MRI from suspected patients and FCNs from confirmed cases to differentiate mental disorders. Extensive results on public datasets show that CINP has good performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper's technical contributions appear limited, as it mainly combines existing methods like contrastive learning and masked autoencoders (MAE) without significant innovation in methodology. The CINP framework may be seen as a straightforward integration of known techniques rather than a novel approach.\n2. 
There is still room for performance improvement. The performance of CINP on the ABIDE dataset is noticeably lower than that of the baselines, as indicated in Table 2. This suggests that the framework may not be fully optimized or may have limitations, and further improvements are needed to make it competitive across all datasets.\n3. The assumption that sMRI and fMRI features can be effectively aligned using contrastive learning lacks theoretical or empirical support from a neuroimaging or neuroscience perspective. This forced alignment may overlook important modality-specific differences, making the approach less effective for capturing unique structural-functional relationships in brain data. (See my questions below.)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1) Present findings related to diseases and provide analysis.\n2) Compare with the multi-modal fusion methods." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1) A contrastive image-network pre-training method is proposed to use the multi-modal data for mental disorder diagnosis.\n2) The results show that the proposed method achieves better performance." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a contrastive image network pre-training method for mental disorder diagnosis. By using contrastive learning between structural MRI and functional MRI, the proposed method can use the useful information from both sMRI and fMRI for mental disorder diagnosis. The proposed method has been compared with several competing methods. The results show that the proposed method achieves better performance than the competing methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) The significant advantage of brain networks lies in their interpretability, yet the paper lacks an interpretability analysis related to diseases.\n2) Many multimodal methods based on functional and structural MRI have been proposed, but this paper does not compare with these methods.\n3) The details of the methods are not clear enough." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning 3D Medical Image Models From Brain Functional Connectivity Network Supervision For Mental Disorder Diagnosis},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v5bK7cQch3},\nnote={under review}\n}" }, "abstract": { "value": "For mental disorder diagnosis, most previous works are task-specific and focus primarily on functional connectivity network (FCN) derived from functional MRI (fMRI) data. However, the high cost of fMRI acquisition limits its practicality in real-world clinical settings. 
Meanwhile, the more easily obtainable 3D T1-weighted (T1w) MRI, which captures brain anatomy, is often overlooked in standard diagnostic processes of mental disorders.\nTo address these two issues, we propose CINP (Contrastive Image-Network Pre-training), a framework that employs contrastive learning between 3D T1w MRI and FCNs. CINP aims to learn a joint latent semantic space that integrates complementary information from both functional and structural perspectives. During pre-training, we incorporate masked image modeling loss and network-image matching loss to enhance visual representation learning and modality alignment.\nFurthermore, thanks to contrastive pre-training which facilitates knowledge transfer from FCN to T1w MRI, we introduce network prompting. This protocol leverages 3D T1w MRI from suspected patients and FCNs from confirmed patients for differential diagnosis of mental disorders. \nExtensive experiments across three mental disorder diagnosis tasks demonstrate the competitive performance of CINP, using both linear probing and network prompting, compared with FCN-based methods and self-supervised pre-training methods.\nThese results highlight the potential of CINP to enhance diagnostic processes with the aid of 3D T1w MRI in real-world clinical scenarios." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "3D medical image", "functional connectivity network", "contrastive learning", "mental disease diagnosis" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/983ae776d2e1fe0addea1af14f86675d2246fc77.pdf" }, "presentation": null, "primary_area": { "value": "applications to neuroscience & cognitive science" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Learning 3D Medical Image Models From Brain Functional Connectivity Network Supervision For Mental Disorder Diagnosis" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v6NNopExN4
POST: A Framework for Privacy of Soft-prompt Transfer
main
Active
prompt transfer;soft prompt;privacy;distillation;confidentiality
alignment, fairness, safety, privacy, and societal considerations
3;5;6;6
4;4;3;4
2;2;3;2
2;3;3;2
3;2;3;3
5
3.75
2.25
2.5
2.75
-0.471405
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. I appreciate the authors discussing the runtime of POST v.s. Full PT in Table 3, but I have some questions regarding the costs of the distillation step. While the distillation costs of a large LLM can be amortized as we finetune more soft prompts with the distilled LLM, could the authors quantitatively compare the computational overhead of finetuning one soft prompt on the large LLM to that of distilling the large LLM and finetuning one soft prompt on the distilled LLM?\n2. The authors use a fixed set of distilled models throughout the paper. How does the quality of the distilled model influence the transferability?\n3. Could other model compression techniques like pruning and quantization be used in this framework instead of model distillation?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The setting of the paper is well-motivated. The introduction sections and the figures clearly explain the setting and the proposed methods." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "Prompt tuning, the process of optimizing the prefix tokens (hard) or their embeddings (soft) to improve the utility of a downstream task, has become one important way of adapting Large Language Models (LLMs). Traditional soft prompt tuning paradigms require backpropagation through the full model, which is computationally expensive to perform on-premise, or handing the private data to a central LLM provider, which could lead to privacy concerns. To reduce the computational overhead of local soft prompt tuning while preserving privacy, the authors propose Privacy Of Soft-prompt Transfer (POST), which performs soft prompt tuning on a small local LLM distilled from a large LLM and transfers the soft prompt to the large model with the help of public data. In addition, the proposed framework can also be coupled with differentially private prompt tuning techniques to achieve privacy guarantees. To validate their proposed method, the authors evaluate the transferability of soft prompts finetuned on small models distilled from Roberta-base, GPT2-XL, and Llama2-7b for classification tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the paper presents an appealing middle ground by performing prompt finetuning on a distilled proxy model, the evaluations are performed with a fixed set of proxy models. As briefly mentioned by the authors, in the case of the centralized LLM provider, the utility of the distilled model should ideally be not strong enough to replace the original model but is still good enough to serve as a proxy for prompt finetuning. However, the authors do not further explore the trade-off between the quality of the proxy model and the effectiveness of the proposed pipeline.\n2. 
The evaluation of the proposed method mainly focuses on classification tasks, which have limited practicality given that modern autoregressive LLMs can perform open-ended generation tasks. While I recognize that prior works also mainly focus on classification tasks, I encourage the authors to discuss the applicability of the proposed method on open-ended tasks, or even better, demonstrate such ability through follow-up experiments." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. I wonder whether the framework needs the original input during evaluation. If so, the model input may contain private information.\n2. For the setting, could you please give further explanation about whether it is white-box or black-box?\n3. Could you please provide more evidence (citations, or experiments) to confirm the claim “soft prompts are highly coupled to the LLM they were tuned on, making them difficult to transfer”?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The author proposed a novel framework for transferring soft prompts between SLM and LLM, and achieved competitive results on the test dataset\n2. 
The paper brought focus to the privacy and efficiency issues of prompt tuning, and tried to reduce the privacy leakage and computational cost." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a framework for Privacy Soft-Prompt transfer to reduce the computation cost and potential privacy leakage. The framework mainly contains three steps: deriving an SLM using knowledge distillation, local prompt tuning, and prompt transfer using a public dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper writing needs to be improved. For example, the author does not clearly state whether POST needs the original input x during inference. If so, this may lead to privacy concerns. I mean, in tasks like classification, the LM input contains private information.\n2. The paper needs to be clear about the setting. For black-box LMs, providers cannot offer the service of giving an SLM through KD. If the author means white-box LMs, the author should further explain why there are privacy concerns.\n3. For the experiment, I think tasks other than classification would help improve the credibility. For tasks such as classification, the LM input contains private information, which violates the setting in this paper. For other tasks such as Q&A, the soft prompt may contain private information and the model inputs may have fewer privacy concerns, which is consistent with the setting in this paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\tNo ablation study is found for the selection of different value of $\\alpha$ in Eq. (3).\n2.\tHow to select a public dataset used for prompt transfer? From Table 1, it seems like sometimes selecting a different public dataset affects the final test accuracy a lot.\n3.\tOther than next word prediction task, does the proposed method perform well on other language task? For example, question answering and reading comprehension?\n4.\tWhy there is no result for llama2-7b in Table 2?\n5.\tWhen comparing to the status of the art baselines, it seems like larger models have better zero-shot performance. In this case, in Table 4, can you include more results on larger models like llama2?\n6.\tFrom Fig. 5, it seems like the attack success rate is low even without DP. In this case, would that be the MIA algorithm used in the paper is not good enough? It seems like there is no motivation to add DP to protect private data. \n7.\tIf the original LLM is mixture of expert model, does the proposed framework apply to that case?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper studies a very interesting problem and proposed a novel framework to allow users to “perform soft prompt tuning” without revealing their private data and LLM providers not revealing their protected LLMs. The most novel part comes from privacy-preserving prompt transfer through public data, which largely protect data privacy and maintain inference performance." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors proposed a new framework that allows the users to perform private soft prompt tuning without accessing the original protected LLMs without leaking their private data. They proposed to first obtained a distilled model from the original LLM, a step needed from the LLM provider, and then the users will utilize this distilled model to perform soft prompt tuning and transfer the resulting soft prompt to prompt the original LLM. They conducted a few experiments to demonstrate the performance of their proposed framework and compare it with several baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I listed a few weaknesses below:\n1. It is unclear how to select public dataset for prompt transfer, there is no guidance on how to select such a public dataset and how would the selection of public dataset affect the performance. From their experimental results in Table 1, it seems like different selection would make a large enough impact on the inference accuracy.\n2. No explanation on what would be the termination criterion for knowledge distillation step. The authors described that the distilled model should behave closely to the original model but not meeting the actual inference performance of the original model, which is too vague. I do not see either the ablation studies on how to select $\\alpha_{ce}$, $\\alpha_{lm}$ and $\\alpha_{cos}$.\n\nOverall, the main concern is that the proposed method contains many steps involving hyperparameters and public datasets need to be tuned and selected. More ablation studies and further theoretical analysis are needed to understand the general practical performance of the proposed method." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. I don't understand the statement in this paper ``KD has been shown effective to compress LLMs during the pre-training phase while maintaining\ntheir performance (Sanh et al., 2019)\". The cited paper discusses BERT, which is not typically recognized as an LLM. Could you clarify this point? Additionally, can all types of LLMs (e.g., GPT, Llama, Mistral) be effectively compressed using knowledge distillation?\n\n\n\n2. This paper considers two baselines, namely DP-OPT (Hong et al., 2023) and Zero-Shot Transfer (Wu et al., 2023b). Hong et al. (2023) use the TREC, MPQA, and Disaster datasets, and Wu et al. (2023b) consider the LAMA dataset. It is unclear why these datasets are not included in the experiments of this paper. Could the authors also include these datasets in the experiments?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The related previous work and key concepts (e.g., prompt tuning, differential privacy, knowledge distillation, prompt transfer) are clearly explained. The proposed algorithm is described in detail." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the problem of differentially private soft prompt tranfer. They propose the POST framework which consists of 3 steps: 1) The LLM provider compresses their model into a smaller model using techniques in knowledge distillation and then sends the distilled model to the user. 2) The user performs private prompt tuning using PromptDPSGD (Duan et al., 2024) on the distilled model. 3) The user transfers this prompt on the large LLM." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1, Motivation Unclear: This paper focuses on privacy-preserving soft prompt transfer, where the LLM provider is untrusted. Could you provide examples of closed-source LLM providers that support soft prompts? If, instead, the LLM provider is an open-source model that can be run locally (e.g., GPT-2 XL or Llama-2-7b, as mentioned in this paper), privacy concerns could potentially be mitigated by running the model locally. Could you clarify the motivation behind the need for privacy-preserving transfer in this context?\n\n\n\n\n2, Novelty Concerns: Given the existing work for knowledge distillation (Sanh et al., 2019) and PromptDPSGD (Duan et al., 2024), what is the technical novelty in this paper?\n\n\n\n\n3, Practical Concern: The proposed approach involves the LLM provider performing knowledge distillation to compress their LLM into a smaller model and then sending this distilled model to the user. Is this practical? It’s unclear if an LLM provider would be willing to share a distilled model, given proprietary concerns, and risks of potential attacks. 
\n\n4, Missing DP Guarantee: I recommend including a formal theorem that states the DP guarantee of the proposed method, and a proof to ensure the privacy guarantee is rigorously supported.\n\n\n\n5, Missing Experimental Details: The value of $\\delta$ used in this paper should be mentioned in the main paper. What is the standard deviation of the results in Table 1, 2 and 4? \n\n\n6, Some Typos:\n\n(1) Line 117: fist $\\rightarrow$ first.\n\n(2) Line 119: There’s an extra comma.\n\n\n\n(3) Line 132: from student to teacher $\\rightarrow$ from teacher to student.\n\n\n(4) Line 256: $\\mathcal{N}(0,\\sigma^2 c^2)\\rightarrow \\mathcal{N}(0,\\sigma^2 c^2 \\mathbf{I})$" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a method on how to transfer soft prompts tuned on a distilled small model to a larger model using public data." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024post,\ntitle={{POST}: A Framework for Privacy of Soft-prompt Transfer},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v6NNopExN4},\nnote={under review}\n}" }, "abstract": { "value": "Prompting has emerged as a dominant learning paradigm for adapting large language models (LLMs). While discrete (textual) prompts prepend tokens to the input for optimized outputs, soft (parameter) prompts are tuned in the embedding space via backpropagation, requiring less engineering effort. However, unlike semantically meaningful discrete prompts, soft prompts are tightly coupled to the LLM they were tuned on, hindering their generalization to other LLMs. 
This limitation is particularly problematic when efficiency and privacy are concerns, since (1) it requires tuning new prompts for each LLM which, due to the backpropagation, becomes increasingly computationally expensive as LLMs grow in size, and (2) when the LLM is centrally hosted, it requires sharing private data for soft prompt tuning with the LLM provider. To address these concerns, we propose a framework for Privacy Of Soft-prompt Transfer (POST), a novel method that enables private soft prompt tuning on a small language model and then transfers the prompt to the large LLM. Using knowledge distillation, we first derive the small language model directly from the LLM to facilitate prompt transferability. Then, we tune the soft prompt locally, if required with privacy guarantees, e.g., according to differential privacy. Finally, we use a small set of public data to transfer the prompt from the small model to the large LLM without additional privacy leakage. Our experimental results demonstrate that our method effectively transfers soft prompts, protecting local data privacy and reducing the computational complexity over soft prompt tuning on the large model." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "prompt transfer", "soft prompt", "privacy", "distillation", "confidentiality" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a0725135bbb852ab3f20b3caf6f16d76ebff91b9.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/357986e7541e58036b5abfb09e9a252bd8d0b114.zip" }, "title": { "value": "POST: A Framework for Privacy of Soft-prompt Transfer" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v6iLQBoIJw
Does SGD really happen in tiny subspaces?
main
Active
optimization for deep networks;training dynamics;SGD;Hessian;low-rank subspace
optimization
3;3;6;8
5;4;4;4
2;2;3;3
1;2;2;3
2;3;4;3
5
4.25
2.5
2
3
-0.544331
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Can the authors calculate the full gradient when \"spurious alignment\" happens? Does that also align with the Hessian's top eigenspace? Or it has a very small similarity?\n2. Can the authors experiment with some kind of Gaussian noise for the toy example? And can you empirically show why they believe the stochastic noise has a larger bias in the top-eigenspace?\n3. For Appendix D.1, can the authors (1) report the learning rates (2) train with longer time s.t. EoS happens (3) add experiments with MSE loss?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper systematically investigates the phenomenon of gradient-Hessian alignment in various optimization algorithms, including SGD, GD in the Edge of Stability (EoS) regime, and the Sharpness-Aware Minimization (SAM) algorithm, with a particular emphasis on the analysis of SGD. Previous studies have primarily focused on understanding full-batch algorithms like GD or Adaptive GD without stochasticity. \n\nMoreover, this work takes an initial step toward understanding the 'spurious' alignment where the actual useful component of the gradient is the non-dominant part. 
They also claim that batch noise is the cause of this alignment by introducing a toy model to provide theoretical intuition. Though it is not very precise to say the mechanism is totally different between GD and SGD (see weakness), I believe the **\"ill-conditioned-valley\" view of loss landscape** captures why dom-GD doesn't decrease the loss but bulk-GD does." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates whether neural networks can be effectively trained by focusing only on the \"dominant subspace\", which is the top eigenspace of the Hessian of the loss, where the gradient seems to align with. The authors projects SGD updates onto this dominant subspace and find that cannot reduce the training loss further. In contrast, removing the dominant subspace from the updates proves equally effective for training, even though it projects out most of the original gradient component. The authors suggest that the alignment between the gradient and the dominant hessian subspace is \"spurious\". This \"spurious\" alignment appears consistently across various practical settings, such as the Edge of Stability, Sharpness-Aware Minimization, and when using momentum or adaptive optimizers. The authors also discuss the possible main causes of the phenomenon, and propose some toy models to understand it." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "First, the so-called 'spurious alignment' is long observed in EoS literature in my opinion. For example, Damian et al. [2] showed that in the EoS regime, (1) the gradient alignment happens (2) the loss decrement in the EoS regime depends on the constrained trajectory **by projecting out the top eigenspace**. It is exactly the finding listed in section 5.1 of this paper that bulk-GD is as effective as GD. 
I believe the authors should discuss those related works.\n\nThe author may argue that \"The mechanisms behind gradient alignment differ between GD with large learning rates and SGD with small learning rates.\" so their findings are novel. But that is also my primary concern about the paper's conclusion \"Stochastic noise is the cause of spurious alignment\". I agree that \"stochastic noise is part of the cause of spurious alignment\", but I don't think the alignment mechanism is intrinsically different between GD and SGD.\n\nIn Section 4, the authors define the \"small LR regime\" and \"large LR regime\" of SGD and claim they are different by setting a learning-rate threshold of $2/\\lambda_1$. This threshold is for the GD descent lemma to hold. Then the authors claim that in the small LR regime, the cause of the spurious alignment is the gradient noise. They 'verify' their findings by switching from SGD to GD, and observe that the alignment disappears. However, [1] found that SGD begins to oscillate within the top eigenspace at learning rates smaller than $2/\\lambda_1$. An intuitive explanation for this is that **injected gradient noise increases the second order term of the Taylor expansion**, making the loss begin to oscillate **when the learning rate is below $2/\\lambda_1$** as the descent lemma predicted. In this case, GD is still stable and the descent lemma holds, quickly converging back to the \"bottom of the valley\". That is why the alignment disappears after switching to GD: once it is at the \"bottom of the valley\", the gradient component in the top eigenspace will be very small.\n\nTherefore, my argument is that stochastic noise indeed reduces the learning-rate threshold for oscillation and makes it easier to have spurious alignment. But the mechanism is the same: the self-stabilization effect induced by the gradient(+noise) within the top Hessian subspace. 
**To test this argument, the author should also calculate the full gradient when \"spurious alignment\" happens and see if the full gradient aligns with the top eigenspace. Also, the experiments should include various batch sizes to test different levels of gradient noise.** If my conjecture holds, I think this work can be seen as a generalized empirical validation of self-stabilization [2] for SGD, which somehow limits the novelty of this paper. I might also be wrong, so if the conjecture is not correct, I will raise my score.\n\nThe author also did some small learning rate GD in Appendix D.1 to corroborate their argument. I wonder what is the learning rate of the \"small learning rate\"? In the original EoS paper, most of the figures will exhibit EoS due to progressive sharpening when the model is trained for a long time. Also, the authors use the cross-entropy loss, where EoS may not happen due to some margin-maximization effect. What if the authors switch to square loss? I believe when you enter the EoS regime (when trained with enough time), the gradient alignment phenomenon will also appear.\n\nThe authors' toy examples constructed the two loss functions to introduce the gradient noise. However, I don't think this reflects the real-world gradient noise, since the 100xy term is artificially added to introduce gradient components in $x$-direction unless $y = 0$. But I think injecting some Gaussian noise biasing toward $x$-direction may also work for the toy case. It would be great to have some empirical evidence as to why the authors believe the gradient noise has a larger component in the top eigenspace.\n\n[1] Lee, Sungyoon, and Cheongjae Jang. \"A new characterization of the edge of stability based on a sharpness measure aware of batch gradient distribution.\" The Eleventh International Conference on Learning Representations. 2023.\n\n[2] Damian, Alex, Eshaan Nichani, and Jason D. Lee. 
\"Self-Stabilization: The Implicit Bias of Gradient Descent at the Edge of Stability.\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- The experiments focus on the optimization comparsion between SGD, Bulk-SGD, and Dom-SGD. \nWhat are the generalization differences among these methods, particularly in the experiments conducted on MNIST and CIFAR, as shown in Figure 1?\n\n- Could the authors provide sufficient theoretical explanation for the main finding?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Many recent papers (Blanc et al., 2022; Li et al., 2022) highlight that the dynamics in bulk subspace (i.e., along flat directions) are crucial for SGD/SAM to move to flat minima, thereby improving generalization.\nIn contrast, this paper emphasizes the significant role of the dynamics in bulk subspace (i.e., along flat directions) in relation to optimization." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates that neural nets cannot be trained within the dominant subspace (i.e., along sharp directions)." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My primary concern are (i) the paper does not sufficiently explain the main finding, i.e., why only the dynamics in the bulk subspace are crucial for optimization, and (ii) the novelty of many contexts:\n\n- Section 4. This section focus on the alignment between the stochastic gradient and the sharp directions. However,\n\n - This section fails to adequately explain the main finding: why only the dynamics in the bulk subspace are crucial for optimization, expect for a very toy model.\n\n - Even regarding the alignment between stochatic gradient and the loss landscape, there are numerous previous works that address this point: $\\mathbb{E}[g_tg_t^T]\\approx 2 L(\\theta)\\nabla^2 L(\\theta_t)$ (Wojtowytsch, 2021; Mori et al., 2022; Wu et al., 2022; Wang and Wu., 2023), implying that stochastic gradient concentrate more along sharp directions of the landscape (i.e., the bump directions). It appears that the authors have overlooked these works.\n\n- Section 5. This section demonstrates that the gradient of GD aligns with the sharp directions in Edge of Stability (EoS), However, \n\n - The authors do not explain the main finding: why only the dynamics in the bulk subspace are crucial for optimization.\n\n - Regarding the alignment between the gradient and sharp directions, this has beed well studied in (Arora et al., 2022; Lyu et al; 2022; Damian et al., 2023) that this occurs due to approximate power iterations). However, the authors do not provide new insights." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Why did the authors limit their experiments to a small subset of the datasets and can the authors share any insights for training with the full dataset?\n\nCould the authors provide any test accuracy results for Dom-SGD and Bulk-SGD to offer some insight into their impact on model generalization? When Bulk-SGD performs similarly to SGD in terms of training loss, does this trend hold for test accuracy as well?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper addresses a question of significant interest in the ML community regarding SGD's effectiveness in low-dimensional subspaces. It helps address potential misconceptions arising from the well-cited work of Gur-Ari et al. (2018), which suggests that gradient descent would occur primarily within a tiny subspace.\n\nA convincing quadratic toy model effectively reinforces the authors’ interpretation of the empirical results, lending credibility to their main conclusions.\n\nThe paper is well-structured, with a logical flow that makes the argument easy to follow and the findings readily accessible to readers.\n\nGur-Ari et al. 
(2018): https://arxiv.org/abs/1812.04754" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper challenges existing assumptions about stochastic gradient descent (SGD) learning within a low-dimensional dominant subspace defined by the top eigenvectors of the Hessian of the loss. The authors find empirically that while gradients appear to align with this dominant subspace during training, projecting updates solely into this subspace (Dom-SGD) halts learning. Instead, projecting updates onto the orthogonal \"bulk\" subspace (Bulk-SGD) is as effective as standard SGD, suggesting that meaningful learning occurs outside the dominant subspace." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Although the paper includes experiments on three datasets, training is restricted to small subsets (e.g., 5,000 of 50,000 samples of CIFAR10) and primarily uses mean squared error loss instead of cross-entropy, despite focusing on classification tasks. This may limit the generalizability of the findings and should be communicated more clearly.\n\nThe paper exclusively examines the effects on training loss, with no analysis of test accuracy under Dom-SGD and Bulk-SGD. Including at least one plot of test performance would offer valuable insights into the practical implications for readers." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Instead of computing the subspace based on the Hessian of the full loss, what if the subspace were computed based on the Hessian of the mini-batch loss. Would alignment still hold? Do the findings in the paper still hold?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well written with a clear structure and compelling experimental results. The design of the experiments is sophisticated, covering both Dom-SGD and Bulk-SGD setups, along with more nuanced experiments like switching between SGD and GD during training. These additional experiments strongly support the paper’s claims, adding robustness to the argument. Additionally, the observations found in this paper is interesting and should contribute to a better understanding of the training dynamics of deep neural networks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper is built on recent studies that claim the gradients during neural network training live in a small subspace spanned by the top few eigenvectors of the Hessian. The authors decompose the gradients into the dominant component that lives in the dominant space and the bulk component that is orthogonal to the subspace. They find that dominant component of the gradient does not contribute to decreasing the training loss, while the bulk component, though only accounts for a small portion of the total gradient magnitude, is the main source driving the reduction in the loss. 
Through different experiments, the authors demonstrate that this observation holds for SGD, GD on the edge of stability, SAM, and adaptive optimizers. The authors also discuss potential explanations from a loss landscape perspective." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Major point:\n\n1.\tGeneralization: The authors acknowledge in the limitation that this paper does not consider the generalization part. However, in modern deep learning, generalization error is usually more important than training loss. And it is widely believed that SGD has an implicit bias that benefits generalization (arxiv: 1611.03530, 2006.08680, 2306.04251, etc.). The absence of results or discussion on generalization weakens the paper. Including an analysis of generalization effects could significantly enhance the paper's impact.\n\n2.\tBulk-SGD and noise: The results suggest that Bulk-SGD is less noisy than SGD (e. g. in Fig. 1) and Bulk-GD is less noisy than GD (Fig. 9). This reduction in noise may explain the speedup observed in Fig. 9 and 10. On the other hand, it is unclear how this will affect generalization. The authors have not adequately addressed this part in the paper, and I suggest including a more thorough discussion around that.\n\n3.\tDom-SGD and loss increase: It appears that with cross-entropy loss or using a standard architecture (VGG-11) (Fig. 14 and 17), the loss increases if training with Dom-SGD. This is a surprising finding and I think the authors should discuss this phenomenon more explicitly in the main text. Additionally, the absence of training curves for Bulk-SGD in these figures raises questions. Is there a reason for not including the curves for Bulk-SGD? 
I am also wondering whether this increase in the training loss relates to findings in prior works: arxiv: 2107.11774, 2306.04251.\n\n\nMinor point:\n\n1.\tFor results shown in table 1, the authors calculate the effective learning rates over the first 1000 steps. However, the authors also show that the alignment usually happens after some warm-up steps (e. g. Fig. 3). I think it makes more sense to plot the effective learning rates as a function over time (if noise is a concern, then perhaps run some kind of EMA). \n\n2.\tAlthough the toy quadratic model explains the observations well, the source of randomness seems a little contrived. I suggest incorporating the discussion of another model at least in the appendix, where the noise is Gaussian noise directly added on to the gradients, for a more natural comparison." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Deep neural networks cannot be trained within the dominant subspace, even though gradients align with this subspace along the training trajectory." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024does,\ntitle={Does {SGD} really happen in tiny subspaces?},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v6iLQBoIJw},\nnote={under review}\n}" }, "abstract": { "value": "Understanding the training dynamics of deep neural networks is challenging due to their high-dimensional nature and intricate loss landscapes. Recent studies have revealed that, along the training trajectory, the gradient approximately aligns with a low-rank top eigenspace of the training loss Hessian, referred to as the dominant subspace. Given this alignment, this paper explores whether neural networks can be trained within the dominant subspace, which, if feasible, could lead to more efficient training methods. 
Our primary observation is that when the SGD update is projected onto the dominant subspace, the training loss does not decrease further. This suggests that the observed alignment between the gradient and the dominant subspace is spurious. Surprisingly, projecting out the dominant subspace proves to be just as effective as the original update, despite removing the majority of the original update component. We observe similar behavior across practical setups, including the large learning rate regime (also known as Edge of Stability), Sharpness-Aware Minimization, momentum, and adaptive optimizers. We discuss the main causes and implications of this spurious alignment, shedding light on the dynamics of neural network training." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "optimization for deep networks", "training dynamics", "SGD", "Hessian", "low-rank subspace" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d6cf17930e00818bdbba62bd201f665a03858871.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/eaad4768ba931c00559a5c290bd97ba1a983dba7.zip" }, "title": { "value": "Does SGD really happen in tiny subspaces?" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v71Nsh6R7m
StructMoE: Augmenting MoEs with Hierarchically Routed Low Rank Experts
main
Withdraw
moe;mixture of experts;LLM;transformer
foundation or frontier models, including LLMs
Zain Sarwar;Ashwinee Panda;Benjamin Thérien;Stephen Rawls;Sambit Sahu;Supriyo Chakraborty
~Zain_Sarwar1;~Ashwinee_Panda1;~Benjamin_Thérien1;~Stephen_Rawls3;~Sambit_Sahu2;~Supriyo_Chakraborty1
0
0
0
0
0
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "I submitted the wrong manuscript for the final version." }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": { "value": "A method to scale MoE models using structured modules" }, "_bibtex": { "value": "@misc{\nsarwar2024structmoe,\ntitle={StructMoE: Augmenting MoEs with Hierarchically Routed Low Rank Experts},\nauthor={Zain Sarwar and Ashwinee Panda and Benjamin Th{\\'e}rien and Stephen Rawls and Sambit Sahu and Supriyo Chakraborty},\nyear={2024},\nurl={https://openreview.net/forum?id=v71Nsh6R7m}\n}" }, "abstract": { "value": "The traditional approach to scaling Mixture of Experts for transformer models has been to increase the total number of experts. While performance improves with more experts, the gains are diminishing whereas\nmemory scales linearly with the number of experts.
We introduce $\\textit{StructMoE}$, a scaling approach for Mixture of Experts which augments experts with additional dynamic capacity using routed structured matrices which we refer\nto as $\\textbf{L}$ow $\\textbf{R}$ank $\\textbf{E}$xperts ($\\textbf{$\\textit{LoRE}$}$). At a high level, we introduce hierarchical MoEs where the first level of routing decides which expert each token\nshould be routed to and the second level of routing decides which $\\textit{LoRE}$ each token should be routed through. The outputs of the expert and the $\\textit{LoRE}$ are then entangled together to provide\nthe final output. This introduces more dynamism into the model which has empirically been demonstrated to improve model performance. We find this scaling approach to outperform a standard MoE baseline in terms of loss on a held-out validation set. Thus, we propose this to be an effective scaling technique for MoEs compared to the standard approach of adding more \nexperts to the model." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Zain_Sarwar1", "~Ashwinee_Panda1", "~Benjamin_Thérien1", "~Stephen_Rawls3", "~Sambit_Sahu2", "~Supriyo_Chakraborty1" ] }, "authors": { "value": [ "Zain Sarwar", "Ashwinee Panda", "Benjamin Thérien", "Stephen Rawls", "Sambit Sahu", "Supriyo Chakraborty" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "moe", "mixture of experts", "LLM", "transformer" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review."
}, "other_comments_on_LLMs": null, "paperhash": { "value": "sarwar|structmoe_augmenting_moes_with_hierarchically_routed_low_rank_experts" }, "pdf": { "value": "/pdf/12cb0452b8beb875e939d465e8c558127d8d0e97.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "StructMoE: Augmenting MoEs with Hierarchically Routed Low Rank Experts" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v7YrIjpkTF
Multimodal Quantitative Language for Generative Recommendation
main
Active
Recommendation System;Generative Recommendation
generative models
5;6;6;6
4;4;4;4
3;2;3;3
3;4;3;4
3;4;3;4
5.75
4
2.75
3.5
3.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. How do different training operations, such as collision handling and quantization, impact the computational time and resource requirements of the model?\n\n2. What methods or algorithms are employed to map these quantitative vectors back to specific items in the dataset?\n\n3. Why does the model not use multimodal information simultaneously to predict the next click item? What are the challenges or limitations that prevent this integration?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The approach to transform diverse item content from different domains and modalities into a unified quantitative language is highly innovative. This integration allows for a more robust and versatile recommendation system that can handle varied inputs effectively.\n\n2. The paper introduces a well-structured design of pre-training and fine-tuning tasks that includes not only generative prediction but also alignment tasks, enhancing the robustness of the method. This comprehensive approach allows the model to effectively leverage both generative capabilities and alignment strategies to improve overall recommendation accuracy.\n\n3. 
The proposed framework significantly outperforms existing models on several benchmark datasets, particularly in terms of the NDCG metric. This suggests that the method is not only theoretically sound but also practically effective." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper titled \"Multimodal Quantitative Language for Generative Recommendation\" introduces a novel approach to enhance generative recommendation systems by converting item content from multiple domains and modalities into a unified \"quantitative language\". This methodology seeks to bridge the gap between the generalized linguistic knowledge of pre-trained language models (PLMs) and the specialized needs of recommendation systems. The authors developed a new framework, MQL4GRec, which employs \"quantitative translators\" to convert textual and visual item data into a shared vocabulary. This shared language is then enriched with semantic information through various generation tasks to enable effective knowledge transfer from multimodal data to recommendation systems.\n\nThe paper's main contribution lies in its innovative method of integrating multimodal data to improve recommendation performance significantly, surpassing baseline methods by notable margins in terms of the NDCG metric across multiple datasets. Furthermore, the framework introduces a potential shift towards more universal recommendation systems that do not rely on traditional item IDs, thereby addressing common challenges like improving scalability and transferability across different domains." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper does not discuss the impact of various training operations on the algorithm's time complexity, such as the handling of collisions. This oversight might leave questions about the scalability and efficiency of the proposed method in practical applications.\n\n2. 
The methodology section lacks a clear explanation of how the generated quantitative vectors are used to retrieve corresponding items during the next item generation task. This omission could lead to ambiguity regarding the operational specifics of the model.\n\n3. The model does not utilize multimodal information simultaneously to predict the next click item, which limits the paper's innovativeness and potential for expansion. Leveraging such multimodal data more effectively could enhance the model's predictive performance and applicability in more complex scenarios." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- User ID Collision: With only 2000 tokens representing a large user base, there is a potential for ID collisions, which may lead to inaccuracies in recommendation results in real-world applications.\n - Domain Adaptability: The model performs poorly on certain domain-specific datasets, such as the Games dataset, suggesting limitations in domain transferability." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. 
Innovation: The proposed MQL4GRec method translates content from different modalities into a unified “quantitative language,” enabling cross-domain and cross-modal recommendation knowledge transfer. This approach addresses limitations in handling multimodal data in existing generative recommendation models.\n\n\n2. Superior Performance: Experimental results on multiple public datasets demonstrate that MQL4GRec outperforms baseline methods on key metrics such as NDCG.\n\n\n\n3. Open-Source Availability: The paper provides a fully accessible code repository, facilitating reproducibility and further research within the community." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "MQL4GRec introduces a novel approach for generative recommendation by transforming multimodal content from different domains into a unified \"quantitative language,\" facilitating cross-domain knowledge transfer in recommendation tasks. The method uses quantitative translators for text and image content, building a shared vocabulary to encode semantic information across modalities. A series of language generation tasks further enriches this vocabulary, enhancing the model's capacity to represent multi-faceted user preferences. Experimental results demonstrate notable performance improvements over baseline models on key metrics across multiple datasets, showcasing MQL4GRec's scalability and potential in multimodal recommendation.\n\n\n\nHowever, its innovation appears limited, closely resembling existing methods like GenRet and MMGRec in both tokenization approach and generative structure." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
Similarity to Existing Tokenization Approaches\n- The unified vocabulary for multimodal information closely resembles the generative retrieval tokenization approach found in \"Learning to Tokenize for Generative Retrieval,\" [1] particularly the \"GenRet\" model, which uses discrete auto-encoding for compact identifiers. This \"multimodal codebook\" seems to be an adaptation of single-modality tokenization, relying on established techniques like RQ-VAE and offering only incremental improvements without substantial architectural or performance innovation.\n- The motivational structure and initial figures in MQL4GRec are closely aligned with those in GenRet, which may diminish the perceived originality of the proposed approach.\n\n\n\n2. Lack of Comparative Analysis\n- MQL4GRec’s generative approach appears heavily inspired by MMGRec [2], which also employs Graph RQ-VAE for multimodal representation through user-item interactions, raising concerns about the uniqueness of MQL4GRec’s contributions. \n- The paper does not clearly distinguish MQL4GRec's advancements over MMGRec, especially in terms of multimodal token-based representations. A more thorough comparison is needed to establish any unique contributions beyond MMGRec’s existing framework.\n\n\n[1] Sun, W., Yan, L., Chen, Z., Wang, S., Zhu, H., Ren, P., ... & Ren, Z. (2024). Learning to tokenize for generative retrieval. Advances in Neural Information Processing Systems, 36.\n\n[2] Liu, H., Wei, Y., Song, X., Guan, W., Li, Y. F., & Nie, L. (2024). MMGRec: Multimodal Generative Recommendation with Transformer Model. arXiv preprint arXiv:2404.16555."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* Figure 3 shows that the performance on the Games dataset slightly declines as the amount of pre-training data increases. Does this suggest that the pre-training strategy has limitations when applied to domains with substantial cross-domain differences?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* High Innovation: This work is the first to propose using a unified quantitative language to address knowledge transfer in multimodal recommendation, which is highly valuable for enhancing the generalization ability of recommendation systems.\n\n* Thorough Experimental Validation: The paper conducts extensive experiments across three datasets, showcasing not only the overall performance of the method but also analyzing the role of individual components through ablation studies, indicating rigorous experimental design." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a novel multimodal generative recommendation method, MQL4GRec, which facilitates the effective transfer of recommendation knowledge by converting item content from different domains and modalities into a unified quantitative language. 
Specifically, MQL4GRec introduces a unified quantitative language representation to handle multimodal content, including text and images. Additionally, a series of quantitative language generation tasks are designed to enrich the semantic representation. Extensive experiments across three datasets show that this method significantly outperforms baseline approaches." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The model's high complexity poses significant computational and storage demands, which could lead to considerable costs in real-world deployment." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1.As an efficient framework, I am interested in understanding the training costs of MQL4GRec compared to other baselines. Specifically, how does it perform in terms of training time, VRAM usage, and inference time?\n\n2.In the field of recommender systems, there is a lack of widely recognized pre-trained models. Could MQL4GRec potentially serve as a foundation model for other downstream tasks?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. 
The proposed concept of a multimodal quantitative language, together with the design of quantitative language generation tasks, represents a novel and innovative advancement in the field of generative recommendation\n\n2. The architecture design is elegant, presenting the idea in a straightforward way, yet experiments demonstrate its strong effectiveness. I personally appreciate this type of work and believe it can make a meaningful impact in the field of generative recommendation.\n\n3. The availability of the code significantly enhances reproducibility." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel approach, MQL4GRec, designed to convert item content from diverse domains and modalities into a unified quantitative language." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.The proposed framework is rooted in the generative recommendation paradigm and aligns with a preprint in a similar research direction [1]. However, it still represents a valuable contribution in my view, even if not groundbreaking.\n\n2.The authors should consider testing the statistical significance of MQL4GRec results.\n\n3.There remains a limitation in zero-shot capability, which is a known challenge in the field of recommendation.\n\n4.To enhance the comprehensiveness of the \"Multi-modal Recommendation\" section in the related work, the authors could consider including more recent state-of-the-art multimodal recommender system papers, such as [2,3]. The field of multimodal codebooks from other communities should also be included in the related work section to clarify the distinctions between the proposed MQL approach and existing methods, such as those in [4, 5]\n\n\n[1] Liu, Han, et al. \"MMGRec: Multimodal Generative Recommendation with Transformer Model.\" arXiv preprint arXiv:2404.16555 (2024).\n\n[2] Fu, Junchen, et al. 
\"IISAN: Efficiently adapting multimodal representation for sequential recommendation with decoupled PEFT.\" *Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval*. 2024.\n\n[3] Liu, Han, et al. \"MMGRec: Multimodal Generative Recommendation with Transformer Model.\" *arXiv preprint arXiv:2404.16555* (2024).\n\n[4] Lan, Zhibin, et al. \"Exploring better text image translation with multimodal codebook.\" arXiv preprint arXiv:2305.17415 (2023).\n\n[5] Duan, Jiali, et al. \"Multi-modal alignment using representation codebook.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024multimodal,\ntitle={Multimodal Quantitative Language for Generative Recommendation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v7YrIjpkTF},\nnote={under review}\n}" }, "abstract": { "value": "Generative recommendation has emerged as a promising paradigm aiming at directly generating the identifiers of the target candidates.\nMost existing methods attempt to leverage prior knowledge embedded in Pre-trained Language Models (PLMs) to improve the recommendation performance. However, they often fail to accommodate the differences between the general linguistic knowledge of PLMs and the specific needs of recommendation systems. Moreover, they rarely consider the complementary knowledge between the multimodal information of items, which represents the multi-faceted preferences of users. To facilitate efficient recommendation knowledge transfer, we propose a novel approach called Multimodal Quantitative Language for Generative Recommendation (MQL4GRec). 
Our key idea is to transform items from different domains and modalities into a unified language, which can serve as a bridge for transferring recommendation knowledge. Specifically, we first introduce quantitative translators to convert the text and image content of items from various domains into a new and concise language, known as quantitative language, with all items sharing the same vocabulary. Then, we design a series of quantitative language generation tasks to enrich quantitative language with semantic information and prior knowledge. Finally, we achieve the transfer of recommendation knowledge from different domains and modalities to the recommendation task through pre-training and fine-tuning. We evaluate the effectiveness of MQL4GRec through extensive experiments and comparisons with existing methods, achieving improvements over the baseline by 11.18\\%, 14.82\\%, and 7.95\\% on the NDCG metric across three different datasets, respectively. Our implementation is available at: \\href{https://anonymous.4open.science/r/QL4GRec-ED65/}{\\textcolor{blue}{https://anonymous.4open.science/r/MQL4GRec-ED65/}.}" }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Recommendation System", "Generative Recommendation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/8fe206daf248f9d554b8f72822516d89245de282.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Multimodal Quantitative Language for Generative Recommendation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v7a4KET0Md
Inverse Reinforcement Learning with Switching Rewards and History Dependency for Characterizing Animal Behaviors
main
Active
neuroscience;decision-making;inverse reinforcement learning
applications to neuroscience & cognitive science
3;3;6
4;3;3
2;2;3
1;2;4
2;3;3
4
3.333333
2.333333
2.333333
2.666667
-0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- **History length**: How does the choice of history length L in the action-level dependency affect the performance of SWIRL? Did you experiment with values beyond 1 and 2, and if so, what were the findings? What trade-offs did you observe between increased history length and computational complexity? What guidelines would you suggest for choosing L in practice?\n- **Computational complexity**: Can you provide more details on the computational complexity of SWIRL compared to baseline models? How does it scale with the size of the dataset and the length of the history dependency? Could you provide specific runtime comparisons on the datasets used in the paper? Additionally, can you provide insights into the convergence properties of the EM algorithm?\n- **Hyperparameter selection**: How were hyperparameters, such as the temperature parameter α in the soft-Q iteration, selected? Was any hyperparameter tuning performed, and if so, what criteria were used? Did you employ any cross-validation procedures to ensure the robustness of the results? How sensitive is the model to the choice of initial parameters?\n- **Limitations**: Could you elaborate on any limitations of SWIRL in modelling certain types of animal behaviours? Are there situations where history dependency might not adequately capture the decision-making processes? 
How might SWIRL perform on behaviours with very long-term dependencies that extend beyond the history length L? Also, does this framework work in both discrete and continuous state/action spaces?\n- **Robustness to noisy data**: How does this framework handle noisy or incomplete data, which are common in real-world animal behaviour datasets? Did you assess the robustness of the model under such conditions?\n- **Minor:** line 83: In the experiment \"section?\" - or \"In the Results section,\"\n- **Discussion suggestion:** Could this framework be used to provide evidence whether models of intrinsic reward (e.g. expected free energy or empowerment) are indeed able to capture animal behaviour?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- **Originality/innovation**: The paper presents a novel extension to IRL by incorporating history-dependent reward functions, addressing a significant gap in modelling complex, naturalistic animal behaviours. This is the first IRL model to integrate both decision-level and action-level history dependency.\n- **Quality/empirical validation**: The authors provide a thorough mathematical formulation of SWIRL, including detailed explanations of how history dependency is incorporated at different levels, and a clear demonstration of improvements over baseline methods. The choice of the authors to use both simulated and real-world datasets strengthens the validation of their approach.\n- **Clarity**: The paper is generally well-written, with clear explanations of the concepts and methods. The connection drawn between SWIRL and traditional autoregressive dynamics models helps to contextualize the work within existing literature.\n- **Significance**: The SWIRL framework offers a more accurate model of animal decision-making. 
Hence, I believe it has the potential to advance our understanding in neuroscience and behavioural science, opening up new ways for analysing long-term, complex behaviours driven by intrinsic motivations. Finally, the presented experiments are (in theory) reproducible with public datasets and publicly available code." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces SWIRL (SWItching IRL), a novel framework that extends inverse reinforcement learning (IRL) by incorporating time-varying, history-dependent reward functions to model complex animal behaviour. Traditional IRL methods often assume a static reward function, limiting their ability to capture the shifting motivations and history-dependent decision-making observed in animals. SWIRL addresses this limitation by modelling long behavioural sequences as transitions between short-term decision-making processes, each governed by a unique reward function. It incorporates biologically plausible history dependency at both the decision level (transitions between decision-making processes depend on previous decisions and environmental context) and the action level (actions depend on the history of states within a decision-making process). The authors apply SWIRL to simulated data and two real-world animal behaviour datasets, demonstrating that it outperforms existing models lacking history dependency, both quantitatively and qualitatively. They also highlight connections between SWIRL and traditional autoregressive dynamics models, arguing that SWIRL offers a more generalized and principled approach to characterizing animal behaviour.\n\nI think this is a very interesting and well-written paper. I gave a score of 6 but I am willing to reconsider this score if the authors can adequately address my concerns, particularly regarding the methodological details, hyperparameter selection, and theoretical analysis." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Methodology**: Some aspects of the implementation, such as hyperparameter selection and the specifics of the inference algorithm, are not fully detailed. Providing more information on these would enhance reproducibility and allow for better assessment of the method. There is limited analysis on hyperparameter sensitivity or discussion of how to choose the history length L in practice. In addition, the impact of number of hidden modes is not thoroughly explored. Finally, there is a lack of a theoretical analysis that would strengthen the paper, such as providing convergence guarantees, a discussion on optimality conditions.\n- **Scalability**: The computational complexity of SWIRL, especially with history dependency and the EM algorithm, may pose challenges for large datasets. The paper would benefit from a discussion on scalability to larger state/action spaces and potential optimization strategies. Could also benefit from runtime comparisons with baseline methods.\n- **Biological plausibility**: While the model is said to incorporate biologically plausible history dependency, the paper could provide more evidence or discussion on the biological validity of the specific mechanisms used." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* Why is the problem modelled as a hidden-mode MDP rather than a POMDP or Hierarchical RL setting?\n\nOverall I think this paper has potential, and if the issues with the experimental validation and related work discussed above are corrected I would be happy to increase my score." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The method seems to work well and produces nicely interpretable result on the real world dataset.\n * The paper is well written and reads quite nicely." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the problem of inverse RL in a hidden-mode MDP, i.e. an MDP with an additional hidden-mode parameter that affects the reward. The authors propose an EM-style algorithm that learns both the reward function/policy and hidden mode in the given expert trajectories.\nThe authors then validate their approach on a synthetic gridworld task and go on to use it to model animal behavior in a rat maze, where the hidden mode represents the rat's current objective (i.e. get water, explore, go home)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The literature review is limited. For example, a different interpretation for this problem setting would be a POMDP, for which previous IRL literature exists, e.g. [1]. Yet another interpretation of the mouse experiments would be a hierarchical RL setting, for example an Option setting in which options might be \"get water\" \"explore\" \"go home\". 
For this setting previous IRL methods exist as well, e.g. [2]. I'm not sure if they are directly applicable to this paper's problem setting, but I think they might be applicable?\n * The method seems to be limited to relatively small discrete state and action sets, limiting its general applicability.\n\n**The presentation of the experiments is misleading**\n * The most competitive baseline was labeled as \"I-1\" in plots, while the poorly performing baselines are labeled with their names. This might lead to it being confused with the authors' contribution and is thus highly misleading and should definitely be corrected.\n * It is not clear how often each experiment was run, or how the box plots were created. How were outliers selected? Figure 3E eliminates the best result for baseline I-1.\n * It is also not clear what the shaded areas in Figure 4B represent.\n * The MaxEnt baseline is missing in Figure 4? Why?\n\nMinor points\n * L453 refers to an appendix which seems to be missing?\n * L202 $\\xi$ is never defined in the paper?\n\nReferences: \n[1] Choi et al. \"Inverse Reinforcement Learning in Partially Observable Environments\", JMLR, 2011\n[2] Chen et al. \"Option-Aware Adversarial Inverse Reinforcement Learning for Robotic Control\", ICRA, 2023" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "**Complexity and scalability of the algorithm**\n\nOne major issue with variants of MaxEnt IRL methods (even with fixed reward function) is that the computing complexity would be very high since within each loop one needs to solve a forward problem to obtain the policy under the current reward function estimation. This problem would be further scaled up when there are multiple reward functions that needed to be evaluated via solving a RL problem (e.g., in equation (6) of this paper). So my question would be:\n\n1. Is SWIRL guaranteed to converge under some large environments? For example, if the size of the simulated gridworld environment in section 4.1 is not $5 \\times 5$ but, say, $50 \\times 50$?\n2. Following the last question, what is the relationship between computing time of SWIRL and the cardinality of the state, action, and the hidden mode space? This would be helpful to understand what is the maximum capacity of this method, i.e., to which scale of the environment that SWIRL can still handle. To address these questions, it would be helpful if the authors can provide some empirical results on the computing time of SWIRL under different setups.\n3. I believe this question would be more suitable for a future work, but just out of curiosity, would SWIRL be possible to integrate some function approximations (of e.g., the state, action, or hidden mode space, since it is now already using gradient method for optimization) so that it can still be applied in high-dimensional tasks?\n\n**Experiments**\n\n1. How did the author choose the number of hidden modes? 
In the three experiments discussed in this paper, the total number of hidden modes varies across experiments (2 for the first experiment, 3 for the second, and 5 for the last), so I would be curious to know what criterion the authors used to select this hyperparameter. Or did I miss some part such that this number is learnt by the algorithm automatically?\n2. The discussion about the reward maps in section 4.3 (lines 492--505) is very hard to follow, as are the related figures (Figure 4A). In general, I would suggest that the results shown in Figures 4A and 4C and the corresponding text do not really help support the major claim about SWIRL. This could be due to the lack of space, so that lots of experiment details are omitted, and this part would be more suitable for a scientific journal than a conference. I would recommend the authors change the way of presenting the results.\n\n**Math writing**\n\nSome modifications that would be helpful to increase the readability of the paper are:\n\n1. Numbering of equations in the paper is sloppy, e.g., all equations in the paper are labeled, though some of them are neither referenced nor needed for further discussion. I would recommend labeling only those equations that are referenced later. For example, in lines 660--671, four lines of a single equation are labeled \"(8)\", but then in lines 685--698, all lines within the same expression are given a different label from \"(10)\" to \"(15)\". Or are there any special meanings that I missed?\n2. Some display-mode equations are missing punctuation.\n3. Line 136, \"$R$ corresponds to the reward function $r \\in \\mathbb{R}$\": What is the difference between \"$R$\" and \"$r$\" in this notation? According to the rest of the paper (where the notation \"$R$\" never occurs again), I would assume they both represent the reward function, so why introduce two different symbols for the same meaning?
Besides, the notation \"$r \\in \\mathbb{R}$\" defines the reward function \"$r$\" as a real number, but subsequent references to \"$r$\" indicate that it is actually a function from the cartesian product of the state space and action space into the real line. I would recommend making sure the definition of \"$r$\" is consistent throughout the paper for clarity.\n\n4. Line 170: The symbol \"$\\mathcal{S}^L$\" is used without definition. Does that mean the cartesian product of $L$ state spaces?\n5. Line 231, \"$\\forall s \\in \\mathcal{S}, z \\in Z$\": Did you mean \"$z \\in \\mathcal{Z}$\"?\n\n6. In the appendix, I only see section A.1.1 but no further sections. Is there a missing section after that? If not, I would suggest removing the subsubsection label for this part.\n\n7. In the derivations in the appendix, the ranges of some summations are not consistent and not clear. For example, in lines 661--671, the summation is given by \"$\\sum_{n}$\", but in subsequent lines (e.g., line 677) the same summation (I assume) is given by \"$\\sum_{n = 1}^N$\". Is \"$\\sum_{n}$\" just shorthand for \"$\\sum_{n = 1}^N$\", or are the ranges of the two sums actually different?\n\n8. Lines 701--703: The notation \"$Q(\\theta, \\theta^k)$\" is used without definition. According to the context, I assume it is a typo and the authors really meant \"$G(\\theta, \\theta^k)$\".\n\n\n**Minor**\n\nIn Figure 3, the colors for the modes \"water\" and \"explore\" are very hard to distinguish if the reader only has access to the printed paper (especially in Figure 3F). A similar, though less severe, issue exists in Figure 4. Consider revising." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This work is well motivated by recent advances in applying IRL methods for characterizing complex animal behavior.
By incorporating history-dependent policies and rewards, SWIRL outperformed the state-of-the-art in understanding animal decision-making processes." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a framework called SWIRL that extends traditional IRL to capture the complex, long-term decision-making processes of animals in natural environments. SWIRL incorporates biologically plausible history dependency at both the decision-level and action-level, allowing it to better account for how past decisions and environmental contexts shape current behavior. The algorithm is evaluated on both synthetic and real-world datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "•\tNovelty: Insufficient comparison with Nguyen et al.'s 2015 work, which leaves SWIRL's unique contributions unclear.\n\nThe difference between SWIRL and the previous work by `Nguyen, Quoc Phong, Bryan Kian Hsiang Low, and Patrick Jaillet. \"Inverse reinforcement learning with locally consistent reward functions.\" Advances in neural information processing systems, 28 (2015)` needs further discussion to clarify the novelty of this work.\n\nUnder a similar IRL framework with multiple locally consistent reward functions, Nguyen's algorithm proposed that the transition kernel of reward functions can be dominated by some external inputs. In this case, could SWIRL then be considered a special case of Nguyen's algorithm with the external input defined as $(s_1, \\ldots, s_L)$?
Although the RL inner-loops in the two algorithms are slightly different, I suppose this is not the major difference between SWIRL and Nguyen's work.\n\nIt would be helpful if the authors could provide a more detailed comparison between these two algorithms, as well as some additional experiments comparing their performance.\n\n•\tUnclear scalability of SWIRL, particularly in large environments or with high-dimensional tasks.\n\n•\tLack of transparency in choosing the number of hidden modes across experiments.\n\n•\tThe math writing of this paper is a bit difficult to follow (see below)." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We develop a novel inverse reinforcement learning framework that can model the history-dependent switching reward functions in complex animal behaviors" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024inverse,\ntitle={Inverse Reinforcement Learning with Switching Rewards and History Dependency for Characterizing Animal Behaviors},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v7a4KET0Md},\nnote={under review}\n}" }, "abstract": { "value": "Traditional approaches to studying decision-making in neuroscience focus on simplified behavioral tasks where animals perform repetitive, stereotyped actions to receive explicit rewards. While informative, these methods constrain our understanding of decision-making to short-timescale behaviors driven by explicit goals. In natural environments, animals exhibit more complex, long-term behaviors driven by intrinsic motivations that are often unobservable. Recent works in time-varying inverse reinforcement learning (IRL) aim to capture shifting motivations in long-term, freely moving behaviors. However, a crucial challenge remains: animals make decisions based on their history, not just their current state.
To address this, we introduce SWIRL (SWItching IRL), a novel framework that extends traditional IRL by incorporating time-varying, history-dependent reward functions. SWIRL models long behavioral sequences as transitions between short-term decision-making processes, each governed by a unique reward function. SWIRL incorporates biologically plausible history dependency to capture how past decisions and environmental contexts shape behavior, offering a more accurate description of animal decision-making. We apply SWIRL to simulated and real-world animal behavior datasets and show that it outperforms models lacking history dependency, both quantitatively and qualitatively. This work presents the first IRL model to incorporate history-dependent policies and rewards to advance our understanding of complex, naturalistic decision-making in animals." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "neuroscience", "decision-making", "inverse reinforcement learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/4d53fc3b184ea24ae9c79b753b8f2e0059aed139.pdf" }, "presentation": null, "primary_area": { "value": "applications to neuroscience & cognitive science" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Inverse Reinforcement Learning with Switching Rewards and History Dependency for Characterizing Animal Behaviors" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v7aeTmfGOu
GenoAgent: A Baseline method for LLM-Based Exploration of Gene Expression Data in Alignment with Bioinformaticians
main
Active
Multi-agent;Bioinformatics
applications to physical sciences (physics, chemistry, biology, etc.)
3;3;5;5
5;5;3;3
2;2;3;4
2;2;2;3
2;2;2;4
4
4
2.75
2.25
2.5
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "This is a list of minor comments, questions and suggestions:\n\n- On line 351: \"an experienced researcher adjudicating the annotation by selecting the better analysis and making further refinements\".\nWhat do these \"annotations\" refer to? I assume that these doesn't refer to gene annotations \n(correct me if wrong). Without this information is really hard to evaluate what \"Inter-Annotator Agreement (IAA)\" refers to.\n\n- On line 418, \"complexity of clinical data extraction and the need for nuanced knowledge inference\". Could you please elaborate more on what does this mean?\n\n- On line 424, \" allowing several LLMs or agent-based models to achieve decent performance\".\nIt seems to me that in the previous tasks, different models could also perform differently. Please, correct me if wrong.\n\n- What does \"merged data\" refer to in Table 4?\n\n- In Table 5, what is really the difference between GenoAgent (based on GPT-4o) and GPT-4o itself?\nI understand that the former doesn't have agents involved, is this correct?\n\n- It would have been great to comment on the \"Statistical analysis\" results reported.\nWhat were the expected results? 
Are the results reported positive?\n\n- Is \"Dataset selection and filtering\" with GenoAgent performed entirely with metadata\nfrom the datasets only?\n\n- On line 395: \"measuring their performance in gene identification from raw input data\"\nWhat does \"raw input data\" refer to here?\n\n- I am assuming that the goal of \"end-to-end performance\" in Section 5.1\nis to measure the performance in identifying significant genes related to traits.\nThen I assume that this is a multi-label classification problem. Is this correct?\nIf so, how many classes are considered for the results? How imbalanced is the dataset?\n\n- From Table 5, is there any advantage here to using GenoAgent against the other models?\n\n- On line 417: \"However, preprocessing of trait data was significantly\nweaker, with a CSC score of 32.28%, due to the complexity of clinical data extraction and the need for nuanced knowledge inference\"\nCould you please elaborate more on this?\n\n- What is the total number of datasets in the results of Table 2?\n\n- Table 2 says that it reports \"F1 and Accuracy\" for \"DF\" and \"DS\", which one of the two is reported?\n\n- About the results reported in Table 4, I assume that the results are indeed good,\nbut without any context or reference it is hard to evaluate whether CSC of 79.71 is \na good or a bad result. Would it be possible to provide some context on this?\n\n- This is not an issue, but in Table 5, results with \"Llama 3 (8B)\" are reported. While it is great to have a comparison of\ndifferent models, Llama 3 (8B) is known to be comparably worse than e.g. GPT-4. The results reported with this model are not really very informative. For more significant results, a more powerful model of the Llama series, e.g. 
Llama 3 (70B) or Llama 3.1 (70B), which is claimed to be on par with closed models, should have been used.\n\n- On line 52, \"However, these studies have mostly focused on simplified synthetic datasets\":\nCould the authors please provide some references in this regard?\n\n- On line 150: \"The agent selects the optimal tool by minimizing both time and error\"\nCould the authors please explain why (and how) time and error are minimized?\n\n- It seems to me that the reference \"BPC (2023)\" does not belong to a serious\nscientific work. I am sure that other references could be used here.\n\n- The reference used for Llama 3 seems to be incorrect, and is incorrectly spelled \nas \"Lamma\" instead of \"Llama\". The technical report where these models \nwere reported can be found at https://arxiv.org/pdf/2407.21783.\n\n- The presented work seems to have potential ethical issues that may have\nnot been addressed or at least mentioned in the paper. For instance, since\nthe authors propose a potential method for predictive medicine, it is \nimportant to note that a method with poor accuracy could lead to wrong\ndiagnoses and treatment. Besides this, is there any potential risk\nof LLMs leading to biases in genomic analysis?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "The paper is generally well-written, and in good English. The motivation\nof the problem is clearly stated and the related work is well covered.\nThe method proposed is really innovative and promising. Additionally,\nthe authors carefully define the tasks undergone to curate the dataset\nand the steps reported seem reasonable. Generally, the results are positive\nand I believe that the paper has the potential to be a great contribution\nto the field.
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces GenoAgent and LLM agent-based framework for gene expression\nanalysis tasks. This framework mainly consists of a project manager agent,\ndomain expert, data engineer, code reviewer and statistician agents.\nGenerally, the method is able to solve trait-related questions from raw data\nby leveraging statistical (code-based) tools. To evaluate the proposed\nmethod, the authors curate the GenoTEX benchmark, a benchmark of unconditional\nand conditional pairs of traits genes, which is also manually adjusted by experts. \nThe results generally show great performance of the proposed method in the\ntasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I am mainly concerned about two major issues in the paper:\n\n- The first one is regarding the availability and clarity of the reported\nmethods. The authors claim that the GenoTEX benchmark will be a great \ncontribution to the field. Does this mean that the benchmark will be\npublicly available? If so, it would have been great for the authors\nto provide (if possible) an anonymized link to the benchmark.\nRegarding source code, the authors do not mention if the source code\nwill be available and no link/supplementary material is provided.\nAgain, it would be very positive for the authors to release\nthe source code for the proposed method, as this would allow for\nthe reproducibility of the results. Otherwise, there are parts of the \nwork that might seem obscure to the reader, e.g. what are the statistical\ntools used by the agents and how? How are the agents and their communication implemented?, etc.\n\n- The second issue is regarding the clarity of the results. 
The authors\nprovide a good description of the curation process of the benchmark;\nhowever, the results provided are not very detailed, which makes it \nhard for the reader to understand whether the authors achieved the\ngoals of the paper and how well the proposed method performs.\nI would suggest that the authors provide a clearer, more detailed, and\ncomprehensive description of the results, including more detailed \ndescriptions of the tasks at hand, the significance of the metrics used,\nand insightful comments that could serve the community\nor the reader of the paper. Please see the questions below for a few examples\nof the issues regarding the clarity of the results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Have you tried your system with different backbones? You have claimed that you use GPT-4o as the backbone LLM. I’d like to know the performance of Llama3-8B or other open-source models. \n2. How could you ensure no data leakage on the gene identification problems? Could you show comparison experiments between your agent and other LLMs? You have claimed that MetaGPT cannot generate runnable code for the preprocessing of gene data. How could you make sure that your model could generate runnable code? Only by iterations and reviews?
In addition, since the code reviewer is still based on LLMs, this agent is also based on the ability of LLMs. It doesn’t contain any fine-tuning stages, so how could you make sure your agent could generate runnable code?\n3. How about the success rate for generating code? \n4. In addition, the authors have claimed a set of function library L. Is this process automatic? Or just by prompting LLMs?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Proposed an agent system to explore gene expression datasets.\n2. Proposed a benchmark dataset, GenoTEX." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "They introduce GenoAgent, a team of LLM-based agents designed with context-aware planning, iterative correction, and domain expert consultation to explore gene datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The experiments are not enough." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The paper mentions that feedback from the Code Reviewer is sometimes inconsistent. How frequent are these inconsistencies, and how do they typically impact GenoAgent’s task outcomes? 
Could the authors provide data on the frequency of conflicting or erroneous feedback in the code review process that leads to downstream errors in the analysis?\n\n2. A comparative table or discussion on how GenoAgent specifically improves upon or extends prior work would help emphasize its novelty.\n\n3. A discussion of runtime, memory usage, or potential computational optimizations (such as modularizing tasks or limiting interactions between agents) would aid in understanding the feasibility of GenoAgent for widespread use. Since the analysis pipeline can be resource-intensive, identifying strategies to minimize costs would be crucial for scaling GenoAgent in practical environments.\n\n4. While the paper includes error analysis, it could benefit from a deeper examination of errors in statistical analysis and preprocessing steps. Are there specific types of errors (e.g., issues with gene symbol normalization or confounding factor adjustments) that are more prevalent, and what are the proposed fixes?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThis paper is among the first to apply an LLM-based multi-agent system to gene expression data analysis, simulating collaborative workflows typical in bioinformatics teams.\n\n2.\tDataset Contribution: The GenoTEX benchmark provides a valuable resource for future research in AI-driven genomics data analysis, offering a standardized framework for evaluating model performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents GenoAgent, a novel framework leveraging a team of LLM-based agents to automate the exploration and analysis of gene expression data, addressing challenges in scalability and expertise demands in genomics. 
Each agent in GenoAgent has a specialized role—such as data engineering, statistical analysis, code review, and domain expertise—mimicking a collaborative bioinformatics workflow for tasks like dataset selection, preprocessing, and statistical analysis. Additionally, the authors introduce GenoTEX, a benchmark dataset designed for evaluating and advancing automated methods in gene analysis. Experiments demonstrate that GenoAgent achieves promising accuracy in gene identification, with iterative error correction and expert consultation mechanisms enhancing its overall performance, while GenoTEX provides a resource for consistent, real-world evaluation of automated genomics tools." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. GenoAgent has been tested on benchmark datasets, but applying it to novel, unseen datasets in real-world genomics research to assess its robustness under different conditions would enhance the effectiveness of this work.\n \n2. While GenoAgent introduces a structured, team-based approach to LLM-driven gene expression analysis, the novelty of its methodology could benefit from deeper differentiation from existing frameworks in multi-agent LLM systems and automated bioinformatics workflows." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Q1: How reliable is prompt engineering since it does not really eliminate the possibility of hallucinating?\nQ2: Can improving the preprocessing of clinical data improve the performance? If yes, what strategies have the authors considered?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "S1: The authors use different agents that are responsible for different roles in the GenoAgent pipeline. These agents are LLM-based and the authors have clearly defined the responsibilities each “agent” would undertake such as developing guidelines or reviewing historical actions and current context.\nS2: There are specialized roles in the pipeline as well – “Project Manager”, “Data Engineer”, “Statistician”, “Code Reviewer” and “Domain Expert”. These roles bring about modularity in GenoAgent which in turn would help with issues like troubleshooting.\nS3: The creation of the GenoTex benchmark is novel in its contribution to the genomic data analysis since it comprises of gene identification problems, data from open gene expression databases, manual analysis data, quality control and assessment." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors in this paper present GenoAgent which comprises of LLM-based agents that are responsible for different roles. They also present a benchmark, GenoTEX for automatic exploration of gene expression data and use this to assess different tasks such as automation in genomics. They report F1, Precision, Recall, Accuracy and Jaccard similarity between gene sets (trait and condition)." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1: The performance metrics, especially using F1 and accuracy are not very expressive in understanding just how good the pipeline is. F1 can sometimes not fully capture data filtering requirements and this is very crucial in genomics.\nW2: There are several ambiguous statements, ex: “the process of gene expression data analysis with good overall Accuracy” or “The agents show decent performance, likely” which do not correctly reflect the accuracy or performance.\nW3: The feedback mechanism does not seem entirely reliable in the Code Reviewer agent, especially when it shows diminishing performance after the first feedback round. This issue is quite critical when it comes down to understanding the robustness of GenoAgent.\nW4: Data normalization can be further refined to reduce variability for traits. \nW5: While GenoTEX benchmark is novel, the authors do not provide an exhaustive comparison with any existing domain-specific methods. \nW6: What are the computational costs for using computationally expensive models like GPT-4o and other LLMs? To add to this, all the agents are LLM-based. Have the authors kept this in check?\t\n\nTypos:\n1. Line 57, why are “D” in “Data”, “Auto” in “Automatic” and “E” in “Exploration” made bold when it is not used in the abbreviation “GenoAgent”?\n\nReferences:\n1. Line 501 (part of the url) is spilling outside the margin. \n2. Incorrect citations for several references (ex: lines 496, 521, 540, 575, 579, 591, 658, 694 ). I have listed some of the corrected references at the bottom (Harvard format) to give the authors an example.\n3. Citation format inconsistent (and possibly incorrect) in line 593.\n4. There is inconsistency in the format for the references in general. \n5. Medium articles are often unchecked, unverified facts wherein the scientific rigor can be easily questioned. 
Authors have cited some, like “BPC (2023)” on line 38/39.\n\nLine 496: Besta, M., Blach, N., Kubicek, A., Gerstenberger, R., Podstawski, M., Gianinazzi, L., Gajda, J., Lehmann, T., Niewiadomski, H., Nyczyk, P. and Hoefler, T., 2024, March. Graph of thoughts: Solving elaborate problems with large language models. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 16, pp. 17682-17690).\nLine 521: Dong, Y., Jiang, X., Jin, Z. and Li, G., 2024. Self-collaboration code generation via chatgpt. ACM Transactions on Software Engineering and Methodology, 33(7), pp.1-38.\nLine 540: Guo, T., Nan, B., Liang, Z., Guo, Z., Chawla, N., Wiest, O. and Zhang, X., 2023. What can large language models do in chemistry? a comprehensive benchmark on eight tasks. Advances in Neural Information Processing Systems, 36, pp.59662-59688.\nLine 575: Ma, P., Ding, R., Wang, S., Han, S. and Zhang, D., 2023, December. InsightPilot: An LLM-empowered automated data exploration system. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations (pp. 346-352)." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A Baseline method for LLM-Based Exploration of Gene Expression Data in Alignment with Bioinformaticians" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024genoagent,\ntitle={GenoAgent: A Baseline method for {LLM}-Based Exploration of Gene Expression Data in Alignment with Bioinformaticians},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v7aeTmfGOu},\nnote={under review}\n}" }, "abstract": { "value": "Recent advancements in machine learning have significantly improved the identification of disease-associated genes from gene expression datasets. However, these processes often require extensive expertise and manual effort, limiting their scalability. 
Large Language Model (LLM)-based agents have shown promise in automating these tasks due to their increasing problem-solving abilities. To leverage the potential of agentic system, we introduce GenoAgent, a team of LLM-based agents designed with context-aware planning, iterative correction, and domain expert consultation to collaboratively explore gene datasets. GenoAgent provides generalized approach for addressing a wide range of gene identification problems, in a completely automated analysis pipeline that follows the standard of computational genomics. Our experiments with GenoAgent demonstrate the potential of LLM-based approaches in genomics data analysis, while error analysis highlights the challenges and areas for future improvement. We also propose GenoTEX, a benchmark dataset for automatic exploration of gene expression data, and also a promising resource for evaluating and enhancing AI-driven methods for genomics data analysis." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multi-agent", "Bioinformatics" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/417ff9c0b9dd312f1b37fe16d885b0084c3f841e.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "GenoAgent: A Baseline method for LLM-Based Exploration of Gene Expression Data in Alignment with Bioinformaticians" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v8GuB74YRA
Generalizable Transferability Estimation of Foundation Vision Models via Implicit Learning
main
Active
Transferability Estimation;Transfer Learning
transfer learning, meta learning, and lifelong learning
1;5;5;5;5
4;4;5;4;4
2;3;3;2;3
1;3;3;2;3
2;3;2;3;2
4.2
4.2
2.6
2.4
2.4
0.25
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- I kindly suggest resolve the confusions mentioned in the Weakness part for the \noverall clarity of the work.\n- While Figure 7 is placed in the appendix, it can effectively offer insights into\nthe evolution of the embedding distribution. I kindly suggest to put it to the\nmost visible point in the paper and compare the evolution process of the ITM and\nother existing transferability estimation approaches to demonstrate the\nsuperiority of the proposed framework visually.\n- It might be better to move the Table 4 into the main content to better support\nthe effectiveness of the proposed framework.\n- Since LEAD is cited and can outperform the evaluated PED and SFDA, I am interested\nin whether IMT could outperform it.\n- What would happen if n is not fixed to 1? Would this lead to better results at\nthe cost of higher computational efficiency? I kindly suggest to carry out\nexperiments to discuss this in more detail, as this iterative optimization is the\ncore part of the DEA.\n- The PCO strategy uses a strong inductive bias that the static distribution of \neach class should be well-separated. Since this might not be always valid, I am\nvery interested in the applicability of IMT to other computer vision tasks like\nimage segmentation. Would there be any modifications to the framework to facilitate\napplying it to other downstream tasks?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The proposed Implicit Transferability Modeling (ITM) framework introduces a fundamentally new approach to transferability estimation, distinguishing itself from existing models such as PED, SFDA, and LEAD. ITM not only improves model effectiveness but also addresses computational efficiency through the Divide-and-Conquer Adaptation (DCA) and Pseudo-Clustering Optimization (PCO) strategies, which optimize processing time and resource usage.\n- The work further evaluates the applicability of transferability estimation on advanced architectures like Vision Transformers (ViT) and cutting-edge pre-training paradigms (e.g., MAE, SimMIM). This alignment with recent advancements in computer vision enhances the relevance and applicability of ITM within the current research landscape.\n- The work is strengthened by rigorous mathematical derivations, which provide theoretical underpinnings that enhance clarity and make the methodological flow accessible and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel implicit transferability modeling approach to address the challenges in estimating transferability of pre-trained models due to the varied architectures and training strategies. By incorporating newly proposed modeling and optimization strategies, the resultant framework demonstrates superior performance than existing methods across ten datasets with various model architectures and pre-training paradigms without the need of extensive training and computational resources." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Figures are not well-used in the paper.\n - Too many figures (i.e., Figure 1 and 2) are used to describe the well-known problems in the field of transferability estimation.\n - Figure 1 is hard to understand at the first glance, particularly when it includes many abbreviations that are explained in the later literature.\n - Too much contents are included in the Figure 3, making it difficult to read\n at the first glance.\n- There are several confusions in the paper:\n - If Equation (7) is re-written from Equation (6) by denoting C, then a $W^n$ term\n is missing in the formula. I doubt the mathematical correctness of Equation (7)\n and the related discussion on the benefits of this recursive form.\n - In Section 3.4, it has been mentioned that $\\lambda$ is a hyper-parameter and\n is controlled by C. However, the paper doesn't specify how this is achieved.\n - In Section 3.4, it has been mentioned that the embedding pre-standardization\n is applied for various benefits. However, it doesn't specify how this is achieved\n as well.\n - In Section 3.4, it has been mentioned that the iteration number in DCA is fixed\n to 1. However, it doesn't specify the purpose of this setting. Also, if there is\n only one iteration required, what would be the benefits of transforming the update\n of the embedding space into a recursive formulation as in Equation (7)?\n - In the Conclusion, it has been claimed that \"We conduct experiments on recent \n models and a diverse set of downstream tasks,...\". However, experiments are only\n conducted on the image classification task." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- What are the main contributions or innovations of the embedding space division and dynamic equation-based approach compared to existing methods?\n- For different generation methods in Pseudo-cluster center constraints, could the authors provide theoretical analysis alongside the empirical results?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Easy to follow\n- Tested on multiple datasets" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces an Implicit Transferability Modeling (ITM) paradigm to improve the accuracy of embedding space modeling, with experimental validation conducted across multiple datasets. Overall, the paper looks interesting." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. In Lines 240-243 and 268-269, the authors state that 'the division and the recursive formulation reduce overall complexity.' It is recommended to substantiate this claim with both theoretical analysis and quantitative experiments.\n2. 
What are the main contributions or innovations of the embedding space division and dynamic equation-based approach compared to existing methods?\n3. For different generation methods in Pseudo-cluster center constraints, could the authors provide theoretical analysis alongside the empirical results?\n4. It is recommended to include more comparison methods from recent 2024 publications.\n5. A discussion on the limitations of the proposed approach would be valuable.\n6. Thorough proofreading and refinement of illustrations would enhance the clarity and quality of the paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How is the independence of the K subspaces ensured? If the method involves dimensional grouping, does this assumption of independence hold true in practical scenarios?\n\n2. The random generation of pseudo-cluster centers, as mentioned in Weakness 2, seems counterintuitive. A more thorough explanation would enhance understanding\n\n3. I find it noteworthy that the predicted state in Equation (7) exhibits a similar format to the predictions made in LEAD[3]. Both methodologies utilize dynamic equations to model the evolution process, yielding initial and final state interpolations. It would be beneficial to provide a more detailed comparison between their method and LEAD, highlighting the key differences and potential advantages of their approach. 
This would help clarify the novelty and contribution of this work.\n\n4. The experimental results indicate that previous dynamic evolution-based methods perform poorly on the proposed benchmark. A detailed explanation, accompanied by visualizations (e.g., Figure 7), illustrating the advantages of the proposed method in modeling the evolution process compared to earlier approaches would strengthen the discussion.\n\n5. The absence of evaluation on common benchmarks (e.g., SFDA, Logme) raises concerns regarding the generalizability of the method. I recommend supplementing the results with additional evaluations on these established benchmarks to demonstrate the method's applicability.\n\n[3] LEAD: Exploring Logit Space Evolution for Model Selection." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The study addresses a significant research area: model selection. The ability to choose an appropriate pre-trained model for specific downstream tasks is crucial and has the potential to enhance performance outcomes.\n\n2. Great visualization of the proposed method, providing clear insights into the framework's structure and functionality." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents an Implicit Transferability Modeling (ITM) paradigm that employs an implicit modeling strategy to capture the intrinsic properties of pre-trained models. The ITM framework integrates a Divide-and-Conquer Adaptation (DCA) process for efficient modeling of independent subspaces, along with a Pseudo-Clustering-based Optimization (PCO) strategy to enable effective estimation without extensive fine-tuning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
In line 225, the methodology mentions \"K independent subspaces sampled from E\"; however, the process for obtaining these divisions is not adequately described. The criteria for determining K and the methodology used to derive these independent subspaces require further clarification. It would be helpful if the authors could provide specific examples of how K is determined and the practical sampling process for the subspaces.\n\n2. In line 307, the generation of pseudo-cluster centers is noted to involve various methods, including the use of random vectors from high-dimensional space. This approach appears to conflict with conventional methods (centers are determined by some clustering methods) but lacks sufficient explanation within the text.\n\n3. The evaluation of the proposed method is conducted solely on a single benchmark, omitting more widely recognized benchmarks (used in SFDA[1], Logme[2]), which limits the robustness of the findings.\n\n[1] Not All Models Are Equal: Predicting Model Transferability in a Self-challenging Fisher Space.\n\n[2] LogME: Practical Assessment of Pre-trained Models for Transfer Learning" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The questions are mostly related to the major weakness mentioned:\n\n1. 
Please run a larger set of ranking experiments, using subsampling from a (larger) model-zoo and share these results, preferably in some scatter plot (eg LogMe vs ITM) on all 10 target datasets.\n2. Please specify the final transferability scores s of the ITM method for model i and a target task.\n3. Please explain what Ê (E-hat) is in Eq 7 and how it is obtained (without full fine-tuning).\n4. Please provide the loss functions used for Lpc and Lobj and how they are used to compute s. \n5. Please explain how the validation data is used in the ITM method.\n6. Please explain for how many epochs ITM is run over the target data (and how that compares to related work)." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper starts with a reasonably strong analysis of transferability estimation, that these yield inconsistent estimates between different architectures or pre-training methods. This is interesting to see, and yields already some questions. For example, from Fig 1: I mainly observe that LogMe and ETran have a preference of a single network architecture for any target dataset, so their transferability scores seems to be a function of the pre-trained network, more than for the given target dataset. Does that hold for other TE methods as well? Do the target tasks prefer different models, or is there just a single best model for all target tasks? And from Fig 2: this plot makes me wonder what the performance of a supervised ViT model would be, it seems that it is not in the mix of pre-trained models?\n\nThe individual elements of the proposed method (estimating target embedding, using some pseudo clustering) seems to make sense." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper transferability estimation is studied, that is, rank a set of pre-trained networks for a specific target task, using a computational efficient method (the underpinning assumption is that fine tuning all pre-trained networks is too time consuming). In this work, the mapping from the original embedding space (E) of a model from the zoo, to the fine-tuned embedding space (E’) on the target data is estimated via a combination of three loss functions, and this is used to estimate the transferability estimation score. The method is validated on a set of 10 classification tasks datasets, with a model-zoo of 10 models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "# Major weaknesses\n\n1. The biggest weakness is that the method is not clearly presented, while conceptually the elements make sense, the interconnection and the final TE method (or scoring) is not directly clear from the manuscript. To make this weakness more concrete:\n - (major) The final transferability estimation score (s_i) for model i (for a particular target task) is not explicitly defined. It is related to the ideal mapping gamma, but not provided.\n - (major) The loss (eq 7) uses Ê (E-hat), which is defined above Eq 7 as the final embedding state after fine-tuning. Defining a transferability metric which works after fine-tuning is probably not the goal, so please clarify.\n - (major) How is the lambda parameter (Eq 8), controlled by the learning rate (eta) in Eq 7 as stated in L331. How is Eq 8 in general related to Eq 7? Eq 8 defines a joint loss, Eq 7 only an update of the parameters.\n - (major) What is the objective driven loss? How is it defined?\n - (major) In the implementation details it is stated that some of the target dataset is used for validation purposes in ITM. 
What parts are validated, and when in the pipeline, and how is that used?\n - The difference between mapping Gamma (L222) and mapping Phi (L224) is not clear to me. And what is the difference between psi(.,.)(L224) and psi(.) (L245)?\n - What is z, it seems only implicitly used in the (approximation) of ^Z (Z-hat)? Moreover, the decomposition of q_phi(z|E), is purely based on the independence assumption of E, I don’t think a double bayes rule is necessary to write that down. Finally, how is q used? Is the loss in Eq 4, a sum over all j in K (of E 3)?\n - What is the difference between the subsets A (indexed by j) and the subset of K (also indexed by j)? \n - What does it mean that the process is “treated as an interaction between the model’s transferability and downstream tasks, leading to more effective and adaptable estimation across a broader range of scenarios.” What interaction is used, is the model learned once for multiple downstream tasks? Does it then generalize to others?\n\n2. The second biggest weakness of the current study is the experimental evaluation, although this is a more general problem within transferability estimation literature, but the current experimental evaluation is too weak to draw any conclusions. The main problem is that a single ranking experiment is evaluated, while the outcome of this ranking really does depend on the selected individual models in the model-zoo. This means that adding or removing a single model to/from the zoo all results are likely to be significantly different. This has been studied in [AgostinelliECCV22]. Therefore, the method should be evaluated on a larger set of ranking experiments. This is not too complicated nor too computational expensive: one could draw samples from a larger model-zoo. For example one could draw all sets of 10 models out of a model-zoo of 14 models, this provides (approx) 1000 ranking experiments (14 choose 10). 
This only requires computing the scores (s) and ground truths (r) once for 4 additional models, while providing 999 more ranking tests. Without such an experiment, I think it is impossible to draw any conclusion on the success of any method.\n\n# Minor weaknesses\n- Figure 1: I’m unsure what I see (and should see) in this plot. It is remarkable that the MAE-B16/SimMIM-B16 models have different GT radar plots between (a) and (b). \n- Figure 2: The y-scales differ between plot (a) and plot (b). \n- (nitpick) In the related work the `cite` command is used, where it should be `citep`, for example: NCE Tran et al. (2019) → NCE (Tran et al., 2019).\n- (nitpick) The sub-index i is used for the number of elements in the dataset (L186) and for the number of models in the zoo (L187).\n- It is unclear how many epochs over the target task training data are performed by each method. Most transferability estimation methods assume (e.g.) 1 epoch (to get feature embeddings and target labels).\n- In the abstract and introduction ‘generalizability’ of a transfer estimation method is mentioned; what do you mean by that?\n\n# References\n- [**AgostinelliECCV22**] Agostinelli et al., How stable are Transferability Metrics evaluations?, ECCV 2022." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed.", "Yes, Other reasons (please specify below)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See the weakness." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "[1] The method tackles the problem of transferability estimation. The author conducts exhaustive methods to validate the effectiveness of the proposed methods.\n\n[2] The method seems to be reasonable." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper tackles the problem of transferability estimation, which aims at selecting the best pretrained model for specific downstream tasks. The authors propose the implicit transferability method to efficiently model the transfer process, reducing both learning complexity and computational costs. Specifically, they propose a divide and conquer adaptation process to model the transfer process. They also introduce a Pseudo-Clustering-based Optimization (PCO) strategy with static and dynamic constraints to estimate transferability without intensive retraining. Their approach outperforms current methods, showing significant improvements across ten benchmarks, making it highly effective and generalizable for model selection." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "[1] The motivation is unclear. In the introduction (L081-085), the author states two challenges, i.e., the strategy of implicit transferability modeling remains largely unexplored and the implicit modeling process requires the final embedding states of the model after fine-tuning. However, it can not fully understand the where the challenge is. The author more specifically explain the challenges.\n\n[2] The description of the method is not clear. For example. In L221-L222, the embedding space division: The author is suggested to briefly introduce the motivation of division, what to divide and how to divide. 
Currently, I cannot fully understand the motivation and implementations.\n\n[3] The pseudo-clustering accuracy is essential to transferability estimation. The authors are suggested to evaluate the sensitivity of the method to the clustering performance." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024generalizable,\ntitle={Generalizable Transferability Estimation of Foundation Vision Models via Implicit Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v8GuB74YRA},\nnote={under review}\n}" }, "abstract": { "value": "Transferability estimation aims to identify the most suitable model from a collection of pre-trained models for specific downstream tasks, playing a crucial role in the success of the pre-training and fine-tuning paradigm. However, the recent proliferation of pre-trained models with diverse architectures and training strategies poses significant challenges for transferability estimation due to discrepancies in intrinsic model characteristics, making it difficult for existing methods to accurately simulate embedding space evolution within feasible computational limits. To address these challenges, we propose an Implicit Transferability Modeling (ITM) paradigm that incorporates an implicit modeling strategy for the intrinsic properties of pre-trained models, enabling more accurate transferability estimation. ITM employs a Divide-and-Conquer Adaptation (DCA) process to efficiently model the transfer process, reducing both learning complexity and computational cost. Additionally, we introduce a Pseudo-Clustering-based Optimization (PCO) strategy that eliminates the need for extensive fine-tuning, enabling effective estimation without intensive retraining. 
Our method significantly outperforms state-of-the-art approaches, achieving notable improvements across ten widely used benchmarks and demonstrating its effectiveness and generalizability in enabling accurate and efficient model selection for downstream tasks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Transferability Estimation", "Transfer Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d826f7789e6f17d19d2af3ad326de87fba994f62.pdf" }, "presentation": null, "primary_area": { "value": "transfer learning, meta learning, and lifelong learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Generalizable Transferability Estimation of Foundation Vision Models via Implicit Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v8RDgaEtE2
Regression Conformal Prediction under Bias
main
Active
Conformal Prediction;Bias;Uncertainty Quantification
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
1;1;3;5
3;5;4;3
2;2;2;3
1;1;2;2
2;3;2;3
2.5
3.75
2.25
1.5
2.5
-0.454545
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "My main questions are in the weakness section. Below is a question regarding the proof: In the proof of Theorem 2 (line 712), I am confused since it is first stated that $Y_i - f_{\\text{hi}}(\\hat Y_i^{b^{--}}) <0$ and then \n$Y_i > f_{\\text{hi}}(\\hat Y_i^{b^{--}})$." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper studies an important question and makes an interesting \nobservation. Some theoretical attempts have also been made to understand the \nobservation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the efficiency of conformal prediction when the predicted value has a systematic bias, aiming to understand the effect of symmetric/asymmetric quantile adjustment on the corresponding \nconformal prediction length. Under a stylized model, theories are \ndeveloped to understand the behavior of symmetric/asymmetric quantile adjustment.\nThe theory is then evaluated on synthetic and real data." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper's main theories (Theorem 2 and 3) are under highly stylized models and the statement lacks rigor (please correct me \nif I misunderstood anything). For example, Theorem 2 is stated for prediction intervals in the generic form of Eq. (2), but it appears that the proof only focuses on the case of residual errors and CQR. For another example, the proof of Theorem 2 seems to use the fact that \n$f_{\\text{hi}}(\\hat Y_i^{b--}) = f_{\\text{hi}}(\\hat Y_i^{0})+b^{--}$, which is not stated as an assumption. The theorem also appears to assume that the bias is sufficiently large in magnitude.\n\n\nI am also not following the form of CQR used in this paper; in my understanding, \nCQR fits conditional quantiles of $Y$ given $X$ by minimizing the pin-ball loss, \nand the resulting fitted conditional quantile function operates $X_i$ instead of the point prediction of $\\hat Y_i$ \n(unless the quantile is obtained in a specific form centered around a point prediction).\nI am also confused by the description of CQR in lines 194-196 --- what is $n_s$, what is the set of samples $\\hat Y^b_i$ \nand how are they generated. If the prediction is deterministic, does this reduce to the residual error case? Please let me know if I misunderstood anything." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Is Romano et al. 2029 the only work where non-symmetric intervals are used and tested?\n- What happens if an adaptive scheme is used to compute the interval? E.g. if the symmetric constant intervals are replaced by reweighted intervals as in Papadopoulous 2008? In particular, how would Theorem 2 change? \n- Would it be possible to compute an upper bound of the bias using the gap between symmetric and asymmetric intervals?\n- Can the proposed approach help study the non-exchangeable situation where the bias is only present at test time? In other words, can one obtain an approximated version of Lemma 1 and Theorem 2?\n- Have you run any experiments on Bias estimation?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- As CP applies to any given model, the results may help establish a good trade-off between model flexibility and efficiency in applications where data are scarce.\n- The algorithm for estimating the bias from the obtained interval is interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The proposed method aims to improve uncertainty quantification in the presence of model bias. The authors show that non-symmetric prediction intervals may be more robust to model bias than their symmetric counterparts. The claims are supported by theoretical and empirical results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Non-symmetric CP is not new. 
The paper's contribution would be better introduced by adding an intuitive explanation of why *asymmetric adjustments have been theorized to yield longer interval lengths as a consequence of stronger guarantees (Romano et al., 2019)* but *it has also been empirically observed that asymmetric adjustments yield tighter intervals than symmetric ones.*\n- The authors focus on the setup where comparable bias is present in the calibration and test samples. They should comment on whether it would be more efficient to 1) improve the underlying point-prediction model or 2) consider more flexible conformity scores (e.g. adding and training a free shifting term to the conformity scores).\n- Asymmetric intervals can be defined by splitting the calibration samples according to the residual signs and then evaluating two sample quantiles. It would be helpful to see a theoretical comparison of the gap between the proposed method and such a naive approach as a function of the size of the calibration set." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How do the paper's results get used in practice, since any learning algorithm perfectly mitigates the bias that you present? So, in your case, the bias is not present in the training set but is in the calibration set (so that it can account for this). 
Can you provide a use case where this occurs?\n- You need to correct the table reference; you see Tab. 4.2.1, while in the caption, it is Table 1. The same is true for the references to the figures, where you use Fig. X, and the caption states Figure X. Change either the captions or the references." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- **Clear and Comprehensive:** The CP mechanisms, the derivation of the bias effects, and the comparison between symmetric and asymmetric intervals are clearly explained.\n- **Formalizing Good Message:** Recently, in the CP field/community, asymmetric adjustments or conformal prediction intervals with symmetric (the same) density are used beneath the lower bound and above the upper bound, which is appropriate for the end-user from an interpretation standpoint. Additionally, there is occasional observation of performance gains using asymmetric adjustment, as the paper rightfully mentions. Formalizing and theorizing these observations is undoubtedly beneficial for the field and practitioners." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies how a systematic homoscedastic bias is present in the point prediction. It studies how it influences the efficiency of prediction intervals produced by inductive conformal prediction (ICP) and conformalized quantile regression (CQR), explicitly comparing the effect of symmetric and asymmetric adjustments. For ICP, the impact of using the absolute residual (symmetric adjustment) and the residual (asymmetric adjustment, controlling the coverage on the left and right) is compared. For CQR, similarly. 
\n\nThey show that in the case of simple homoscedastic bias, $b$: 1) the upper bound for the size of the intervals of the biased predictions is the size of the intervals without bias plus $2|b|$; 2) the asymmetrically adjusted prediction interval is of the same size as if there was no bias, and 3) they show under which condition the asymmetric or symmetric adjustment will result in smaller prediction intervals (however, this requires knowledge of the bias)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Elementary Theoretical Findings**: While the theoretical contributions are sound, the results may be considered elementary. The core findings, such as the impact of bias on symmetric intervals and the lack of bias influence on asymmetric intervals, are straightforward and follow from the basic principles of CP. Though helpful, these insights do not introduce particularly advanced or novel theoretical complexity, which may limit the paper's appeal to a more specialized audience seeking more profound theoretical innovations.\n- **Simplified Bias Assumptions**: The assumption of bias as a constant noise term following the same global distribution across all predictions may not adequately capture more complex forms of bias present in real-world data, such as feature-specific or covariate-dependent biases. The paper also mentions on lines 089-094 different reasons for these biases. However, they result in far more complex biases.\n- **Limited Discussion on More Complex Non-Conformity Scores**: While the paper acknowledges that more complex non-conformity scores may require different approaches, it does not explore these in-depth, potentially limiting the generality of the findings for more advanced CP methods.\n- **Limited Synthetic Experiment Settings:** The paper only evaluates experiments where $n$ is large, and the data is generated from symmetrical distributions. 
However, this is problematic because the strong point of the asymmetric adjustment is that it can better leverage asymmetries (skewness) of the aleatoric uncertainty distribution (noise distribution on the label).\n- **Wrong Claims:** Your statement on line 266 is incorrect. When the number of samples is large and no bias is present, the length of prediction intervals generated by a symmetric adjustment is approximately equal to the ones generated from asymmetric adjustments. See the above point.\n- **Section 3.1 is too bloated:** Given your bias model, you could just take the mean of the calibration set's errors to retrieve it." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please have a look at the weakness section." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Practical Consideration of CP Efficiency: The authors address the critical issue of efficiency in conformal prediction (CP) sets, specifically focusing on the size of prediction intervals, which is relevant in real-world applications. 
By exploring the impact of bias on CP intervals, the authors attempt to improve the accuracy and reliability of predictions, which is essential for high-stakes tasks.\n\nExperiments: The paper presents multiple real-world experiments, such as sparse-view CT reconstruction and weather forecasting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates how bias, defined as the systematic deviation from ground truth, affects Conformal Prediction (CP) intervals in regression tasks. CP is used for uncertainty quantification in machine learning models. The authors focus on two adjustment methods for CP intervals: symmetric (where both sides of the interval are adjusted equally) and asymmetric (where adjustments can be unequal). Through theoretical and empirical analyses, they show that:\n1. Symmetrically adjusted intervals increase in length by 2|b|, where b is the bias.\n2. Asymmetrically adjusted intervals are unaffected by bias.\n3. Under certain conditions, asymmetric intervals are tighter than symmetric ones.\nTheir findings suggest that asymmetric intervals maintain their \"tightness\" even under biased predictions, unlike symmetric intervals that inflate in length. These conclusions are validated with real-world tasks in CT reconstruction and weather forecasting, highlighting the potential for more bias-robust machine learning systems." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Inconsistent Notations and Presentation: The paper suffers from unclear and inconsistent notations. For instance, in equation (1), it's not explicit which bias is being addressed --- it is not clear how the randomness in the ground truth is being dealt with. Additionally, \\hat{Y_i^b} is ambiguously defined; it is not clear whether these quantities are scalars or are allowed to be a set of values. 
For instance, in the first paragraph of Section 3 the Y_i are treated as scalars and in the second paragraph, they use Y_i = {Y_{ij}} as a vector. Besides, the score definition proposed by Romano et al. [1] is presented inaccurately, which raises concerns about the foundation of the theoretical analyses. The overall readability of the paper is not good. \n\nLimited Insight from Corollary: Theorems 1 and 2 seem a bit straightforward and are a direct consequence of the linear assumptions. The utility of Corollary 3.1 is also not clear: the RHS of eq. (6) is independent of b, while the LHS is dependent on b. It is not clear how the authors are setting L_{asym} \\leq L_{sym} in the proof. \n\nMissing Theoretical Justification for the Algorithm: The authors do not provide a theorem guaranteeing that the proposed algorithm retains the validity of the coverage guarantee for CP intervals. This omission is significant as it leaves a gap in understanding whether the method meets one of CP's fundamental requirements.\n\nLack of Coverage Probability in Simulations: The simulations fail to report the coverage probability, which is a critical metric for evaluating the reliability of the proposed algorithm. This weakens the experimental validation of the proposed methods.\n\nComparison with related work: There is some related work which also aims to reduce the length of the conformal intervals [2,3]. It would be nice to compare the contribution of this work with these two related works. \n\n[1.] Romano, Yaniv, Evan Patterson, and Emmanuel Candes. \"Conformalized quantile regression.\" Advances in Neural Information Processing Systems 32 (2019).\n\n[2.] Xie, Ran, Rina Foygel Barber, and Emmanuel J. Candès. \"Boosted Conformal Prediction Intervals.\" arXiv preprint arXiv:2406.07449 (2024).\n\n[3.] Liang, Ruiting, Wanrong Zhu, and Rina Foygel Barber. \"Conformal prediction after efficiency-oriented model selection.\" arXiv preprint arXiv:2408.07066 (2024)." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "Conformal prediction intervals computed using asymmetric adjustments remain tight and valid when predictions are biased, while conventional symmetric adjustments inflate with increasing bias." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024regression,\ntitle={Regression Conformal Prediction under Bias},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v8RDgaEtE2},\nnote={under review}\n}" }, "abstract": { "value": "Uncertainty quantification is crucial to account for the imperfect predictions of machine learning algorithms for high-impact applications. Conformal prediction (CP) is a powerful framework for uncertainty quantification that generates calibrated prediction intervals with valid coverage. \nIn this work, we study how CP intervals are affected by \\emph{bias} -- the systematic deviation of a prediction from ground truth values -- a phenomenon prevalent in many real-world applications.\nWe investigate the influence of bias on interval lengths of two different types of adjustments -- symmetric adjustments, the conventional method where both sides of the interval are adjusted equally, and asymmetric adjustments, a more flexible method where the interval can be adjusted unequally in positive or negative directions.\nWe present theoretical and empirical analyses characterizing how symmetric and asymmetric adjustments impact the \"tightness\" of CP intervals for regression tasks. 
\nSpecifically for absolute residual and quantile-based non-conformity scores, we prove: 1) the upper bound of symmetrically adjusted interval lengths increases by $2|b|$ where $b$ is a globally applied scalar value representing bias, 2) asymmetrically adjusted interval lengths are not affected by bias, and 3) conditions when asymmetrically adjusted interval lengths are guaranteed to be smaller than symmetric ones.\nOur analyses suggest that even if predictions exhibit significant drift from ground truth values, asymmetrically adjusted intervals are still able to maintain the same tightness and validity of intervals as if the drift had never happened, while symmetric ones significantly inflate the lengths. \nWe demonstrate our theoretical results with two real-world prediction tasks: sparse-view computed tomography (CT) reconstruction and time-series weather forecasting. Our work paves the way for more bias-robust machine learning systems." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Conformal Prediction", "Bias", "Uncertainty Quantification" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/71f4af246309326189286577b71558fb73aca9da.pdf" }, "presentation": null, "primary_area": { "value": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Regression Conformal Prediction under Bias" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v8qABSeeKO
MMKE-Bench: A Multimodal Editing Benchmark for Diverse Visual Knowledge
main
Active
Multimodal knowledge editing; Large multimodal model; Benchmark
datasets and benchmarks
5;5;6;6
4;4;3;3
2;2;2;3
2;2;2;3
2;3;2;3
5.5
3.5
2.25
2.25
2.5
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In line 776, “counterfactual editing” is mentioned. According to the description in lines 776-781, modifying certain facts constitutes counterfactual editing. Is this definition accurate?\n2. The case study in Section 5.3 could include more analysis. Figure 5 and Figure 6 only present a relatively simple entity editing example (person-related information) and do not showcase complex cases of visual semantic editing or user-specific knowledge editing. It’s recommended to select representative and challenging cases or specific types of cases in visual semantic editing or user-specific knowledge editing. \n3. The superiority of IKE has been demonstrated in many previous works. What new perspective does your benchmark provide? Section 5.3 only states that IKE performs better than FT-LLM without explaining why IKE has stronger generalization and transferability. It would be helpful to explain the reasons for the performance differences across methods based on the proposed benchmark. It is suggested to provide a detailed analysis of how IKE's performance on this new benchmark differs from its performance on previous benchmarks, and what these differences reveal about the nature of multimodal knowledge editing tasks.\n4. This benchmark is designed to uncover issues that other benchmarks cannot detect. 
Could you provide some examples or data to demonstrate that your benchmark is indeed more challenging and more valuable for improving models compared to other benchmarks?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The benchmark proposed in this paper uses knowledge represented in free-form natural language, making it more applicable to real-world scenarios. In addition to traditional visual entity editing, the benchmark incorporates visual semantic editing and user-specific editing, allowing for a more comprehensive evaluation of model editing capabilities.\n2. The paper provides a detailed description of the dataset construction process, offering valuable insights and methodologies for data collection and structuring.\n3. The experiments conducted on the benchmark cover a wide range of model editing methods across different types of knowledge editing tasks, resulting in an extensive and insightful evaluation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a benchmark for knowledge editing in multimodal large models, specifically targeting knowledge represented in free-form natural language. The benchmark focuses on tasks such as visual entity editing, visual semantic editing, and user-specific editing. The paper provides a detailed description of the data collection and construction process for this benchmark. Various model editing methods were evaluated on this benchmark using models like BLIP-2, MiniGPT-4, and LLaVA 1.5, revealing limitations in existing approaches." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. I’m not sure if the workload is sufficient. If this were solely a Dataset & Benchmark Track submission, the workload would be appropriate. 
However, as a long paper submission to ICLR, it may require additional technical contributions. For instance, adding theoretical analysis to explain why existing methods perform poorly in multimodal knowledge editing could provide new perspectives for improving this area. Therefore, it’s recommended to include an in-depth analysis on why current methods underperform in certain aspects relevant to the proposed benchmark. It is suggested to provide related theoretical analysis or technical contributions that you believe would enhance the paper's suitability for ICLR. \n\n2. I didn’t find evidence in the experimental or case study sections demonstrating the necessity of this benchmark. This is not to question the value of your work, but providing examples would strengthen the case. Additionally, clarifying which scenarios would particularly benefit from using this benchmark for evaluation would be helpful. It’s recommended to add more evidence highlighting the ‘differences’ and ‘necessity’ of this benchmark. I suggest you provide comparative analyses with existing benchmarks or specific real-world scenarios where this benchmark would be particularly valuable." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "(1) Would it be more beneficial to construct knowledge editing samples using incremental, real-world updates? 
Although challenging, creating samples based on real-world updates, such as player transfers or recent match results sourced from news articles, could enhance dataset relevance. Using real, evolving information would help models learn to handle dynamic knowledge updates and might improve their applicability in practical knowledge editing scenarios.\n\n(2) What impact did human verification have on the LLM-generated descriptions? An analysis or examples of descriptions before and after human verification would help demonstrate how much human intervention improved the dataset's consistency and accuracy.\n\n(3) Is there a comparison of Visual Entity Editing data quality between MMKE-Bench and prior datasets (e.g., MC-MKE, VLKEB)? While Table 1 highlights MMKE-Bench’s increased task diversity with the addition of Visual Semantic Editing and User-Specific Editing, it would be useful to understand how MMKE-Bench compares to previous datasets in terms of data quality for Visual Entity Editing. Such a comparison could clarify whether MMKE-Bench offers improvements in data quality alongside task diversity.\n\n(4) For additional questions, please refer to the Weaknesses section." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "(1) Diverse Task Setup: MMKE-Bench covers three distinct knowledge editing tasks, from entity-level editing to more complex user-specific knowledge editing. This provides a comprehensive tool for evaluating multimodal models' ability to update knowledge and handle personalized information.\n\n(2) Free-Form Natural Language Descriptions: Unlike traditional triple-based representations, this benchmark uses natural language descriptions to represent knowledge items, enabling models to engage in editing tasks in a more realistic scenario. 
The free-form descriptions combined with image data make the tasks closer to the complexity of real-world semantics.\n\n(3) Extensive Evaluation Using State-of-the-Art Multimodal Models: The paper evaluates MMKE-Bench on prominent multimodal models such as BLIP-2, MiniGPT-4, and LLaVA-1.5. These models represent the cutting edge in the multimodal field, making the experimental results widely relevant and reflective of current model capabilities in knowledge editing tasks.\n\n(4) Systematic Experimental Analysis: The paper provides a thorough evaluation of various existing knowledge editing methods, including FT-LLM, KE, MEND, SERAC, and IKE, offering a comprehensive performance baseline for each task type. This analysis provides valuable insights into how different methods perform across the tasks presented in MMKE-Bench." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces MMKE-Bench, a multimodal knowledge editing benchmark dataset aimed at addressing the current gap in resources for evaluating large multimodal models (LMMs) in knowledge editing tasks. MMKE-Bench is largely constructed using counterfactual samples to enhance the robustness of model evaluation and to examine their ability to perform knowledge editing across varied and challenging scenarios. The dataset includes three types of tasks: visual entity editing, visual semantic editing, and user-specific knowledge editing. Each task is represented through free-form natural language descriptions combined with images, generated by large language models (LLMs) and verified through human annotation to ensure consistency. The paper evaluates several state-of-the-art multimodal models, including BLIP-2, MiniGPT-4, and LLaVA-1.5, providing insights into the strengths and limitations of current models across different knowledge editing tasks." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(1) Figures 1 and 2 Could Benefit from Improved Clarity and Accessibility: Figures 1 and 2 could be refined to enhance clarity and accessibility for a broader audience. In Figure 1, the example used is soccer-related, which may not be immediately understandable to readers unfamiliar with the sport. A more universally recognizable example, such as common objects or activities, could make the data construction process clearer. For Figure 2, the visual design could better distinguish the four dataset construction steps; currently, it’s difficult to determine which modules belong to each step. Using distinct colors, numbering, or borders to separate steps would help readers follow the process more intuitively.\n\n(2) Limitations of Counterfactual Data for Real-World Applications: The counterfactual approach used to construct MMKE-Bench may lead to a distribution that differs from real-world data. While counterfactual samples aid in testing robustness, they represent hypothetical rather than naturally occurring scenarios. As a result, models fine-tuned on this dataset might learn patterns specific to these counterfactual cases, potentially reducing their effectiveness in real-world knowledge editing tasks. A comparison of model performance on both counterfactual and real-world data would provide valuable insights.\n\n(3) Lack of Empirical Analysis on the Impact of Human Verification: The paper mentions that human verification was conducted to improve the consistency of LLM-generated descriptions, but it does not provide examples or quantitative comparisons to illustrate the impact of this verification process. Including a before-and-after analysis would strengthen the case for the added value of human verification." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "As mentioned in the weaknesses, the similarity of editing visual entity and semantic editing, the user-specific editing scenario and T-Loc test warrant further exploration.\n\nGiven the modest amount of training data, is the dataset volume adequate for methods that require training, like serac or mend? Additionally, how is the IKE method adapted to LMM editing? Which examples are used in context? Is target knowledge incorporated within the context, and do you include image-based examples in this context?\n\nIn Section 5.2.1, there is a claim that “5) Modern LMMs excel in producing and applying edited knowledge. For reliability, generalization, and portability evaluations, LLaVA-1.5 outperforms BLIP-2 and MiniGPT-4. This improved performance can be attributed to its larger model size and better instruction-following capability.” This raises two questions: First, is LLaVA-1.5 indeed larger than MiniGPT-4, as both use a 7B LLM, and MiniGPT-4's vision encoder appears larger? 
Second, this statement is not directly related to the benchmark’s core focus, which is to compare editing methods rather than models.\n\nIn Section 5.2.2, further clarification is needed regarding the meaning of “user number” and “allowed maximum items.” Additionally, what is the precise gap between editing and testing in user-specific editing, and why is the gap in the two visual editing tasks similar (1, 3, 6, 10) without a larger gap?\n\nTypo:\nFigure 2, bottom-right: \"G-Rel.\"\nTable 2: \"Entity Entity Editing\"\nTable 6: SERAC GAP 3 T-Rel, missing decimal point" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The use of free-form natural language as input for knowledge editing tasks is a notable strength, enhancing flexibility and making the approach adaptable. The clarity of the writing aids comprehension, and the experimental setup is well-documented. Additionally, the benchmark spans diverse data sources and entity types, allowing for broad applicability across different tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work focuses on knowledge editing in large multimodal models and introduces a new benchmark. It presents three distinct types of editing tasks: visual entity editing, visual semantic editing, and user-specific editing, using free-form natural language as input data for these edits. The benchmark also diverges from previous versions by removing the T-Gen test and adding a T-Rel test. The evaluation includes five editing methods across three large multimodal models (LMMs), effectively validating the benchmark's dataset." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There is some overlap between visual entity editing and visual semantic editing, as both tasks involve understanding image content, which could blur the distinction between these editing types. Additionally, the user-specific editing scenario may lack practicality. In real-world applications, database or memory-based search might be more effective than training user-specific information for each user to achieve personalization in LLMs or LMMs.\nRegarding the T-Loc test, there’s room for improvement. The results are near 100, suggest that the randomly sampled T-Loc questions are relatively easy for methods such as SERAC, FT-Alignment, and MEND. Introducing more challenging cases could enhance the evaluation’s robustness. For instance, collect similar but harder test cases, either through web crawling or using LLM-generated content (e.g., from GPT) and verify them. This could improve the test’s effectiveness." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No Ethics Concerns" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "As mentioned in the weaknesses.\n\nFrom figure 1, there is no T-Gen test in MMKE-Bench, and you add a T-Rel test. 
Why do you make this change?\n\nAdditionally, how do you adapt each method to LMM editing?\n\nFigure 4: why is I-Gen not evaluated for MMEdit?\n\nSection 5.2.2 requires additional clarification on the terms “user number” and “allowed maximum items.” Further, what about the MEND method in your sequential editing setting? And are the gaps substantial enough for the test? In LLM knowledge editing studies and the referenced VLKEB work, this gap can be larger, reaching 100 or more.\n\nThe analysis in line 425-426 only explains a possible reason for SERAC, leaving out MEND.\n\nLine 450, “visual knowledge” can be ambiguous, and the reliability seems to be not lower as stated.\n\nLine 466, how do you draw the conclusion “parameter-based methods are better at applying edited knowledge to new contexts”?\n\n\nWriting issue:\n\ndouble quotes in line 324-327 and elsewhere: please pay attention to how to type a correct left quotation mark in LaTeX.\n\nFigure 1: what is the difference between the original and editing knowledge of user-specific editing? They are the same, which leads to confusion; this could be improved for better clarity.\n\nFigure 2: Should the \"G-Rel.\" change to “T-Rel”? And the two questions in the last box (bottom right) are the same. And placing “hint” there can be confusing, as if the hint itself is part of the input.\n\nLine 355: you use “MLLMs” here, but in other places, you use LMMs\n\nTable 2: \"Entity Entity Editing\"\n\nTable 6: missing decimal point in SERAC results, GAP 3 T-Rel column
The data collection process is robust and comprehensive, and the benchmark supports both individual and sequential editing tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This research investigates knowledge editing within large multimodal models, introducing a novel benchmark for evaluation. It defines three specific editing tasks: modifying visual entities, altering visual semantics, and implementing user-specific adjustments, all using natural language as the input format. The benchmark is validated through an assessment of five different editing techniques across three large multimodal models, effectively confirming the dataset's applicability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The training dataset size seems to be small, which might not be enough for mend/ serac that need training. This could potentially lead to lower performance of such methods.\n\nThere is potential to enhance the T-Loc test. The current results, which are close to 100, indicate that the randomly selected T-Loc questions may be relatively simple for existing methods. Strengthening the evaluation could involve introducing more challenging cases. For instance, given that GPT was used in data collection, generating analogous but more difficult test cases and validating them could increase the test's effectiveness and robustness.\n\nThe division of the three tasks lacks clarity. Both visual entity editing and visual semantic editing require image comprehension and retention of the target knowledge provided in the text. While they may involve different data sources, they essentially fall under the same task, visual question answering. Similarly, user-specific editing could be seen as an application scenario, but it still aligns with the core task of VQA, making no real difference in “task type.” \n\nAdditionally, I question about if this user-specific editing approach is practical. 
In real-world applications, creating personalized databases for each user might be more efficient and effective than modifying the model or embedded knowledge directly.\n\nAnd the task generalization might not be persuasive using only one case. If you want to prove this, more comprehensive experiments and evaluations should be conducted." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose MMKE-Bench, a challenging benchmark for evaluating diverse semantic editing in real-world scenarios." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024mmkebench,\ntitle={{MMKE}-Bench: A Multimodal Editing Benchmark for Diverse Visual Knowledge},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v8qABSeeKO},\nnote={under review}\n}" }, "abstract": { "value": "Knowledge editing techniques have emerged as essential tools for updating the factual knowledge of large language models (LLMs) and multimodal models (LMMs), allowing them to correct outdated or inaccurate information without retraining from scratch. However, existing benchmarks for multimodal knowledge editing primarily focus on entity-level knowledge represented as simple triplets, which fail to capture the complexity of real-world multimodal information. To address this issue, we introduce MMKE-Bench, a comprehensive **M**ulti**M**odal **K**nowledge **E**diting Benchmark, designed to evaluate the ability of LMMs to edit diverse visual knowledge in real-world scenarios. MMKE-Bench addresses these limitations by incorporating three types of editing tasks: visual entity editing, visual semantic editing, and user-specific editing. Besides, MMKE-Bench uses free-form natural language to represent and edit knowledge, offering a more flexible and effective format. 
The benchmark consists of 2,940 pieces of knowledge and 7,229 images across 110 fine-grained types, with evaluation questions automatically generated and human-verified. We assess five state-of-the-art knowledge editing methods on three prominent LMMs, revealing that no method excels across all criteria, and that visual and user-specific edits are particularly challenging. MMKE-Bench sets a new standard for evaluating the robustness of multimodal knowledge editing techniques, driving progress in this rapidly evolving field." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multimodal knowledge editing; Large multimodal model; Benchmark" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/ff6859b76de179de9c130119f93d6f3dc9f2f10c.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "MMKE-Bench: A Multimodal Editing Benchmark for Diverse Visual Knowledge" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v9CDpLpjiE
Visual-O1: Understanding Ambiguous Instructions via Multi-modal Multi-turn Chain-of-thoughts Reasoning
main
Active
Understanding ambiguous instructions;large multimodal model;chain-of-thoughts;multimodal
applications to computer vision, audio, language, and other modalities
5;5;6
3;4;4
3;2;3
2;3;3
2;3;3
5.333333
3.666667
2.666667
2.666667
2.666667
0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- For stronger VLMs (e.g., GPT-4o), I don't quite get the motivation for authors to adopt the \"instantial experience\" approach instead of the \"empirical experience\" approach. Intuitively, allowing VLMs to reason and explicitly output the clear instructions with ambiguities removed should almost always improve their performance, regardless of the ability of the VLMs themselves. I also didn't find an ablation study in the paper to support why we need the \"instantial experience\" approach for stronger VLMs.\n- Additionally, per my comments in the review, I'd encourage authors design better prompts such that the generated instructions in their \"empirical experience\" approach do not defeat the purpose of an actual \"instruction\"." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Context and instruction ambiguity is very prevalent in language and vision-language applications. Handling these cases is an important research problem. The proposed approach presents a promising way to improve the accuracy of VLM's \"best guess reasoning\" under ambiguous instructions.\n- The paper is overall well-written, with clear methodology and extensive experiments." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes Visual-O1, an approach that improves VLM's reasoning performance given ambiguous language instructions. For VLMs with stronger reasoning ability, the paper proposes an \"instantial experience\" approach where multi-turn multi-modal CoTs are generated to reason about the instruction, and the model subsequently directly outputs final answer conditioned on such reasonings. For VLMs with weaker reasoning ability, the paper proposes an \"empirical experience\" approach where after multi-modal CoTs are generated, explicit and clear instructions are generated before the model outputs the final answer. The authors demonstrate the effectiveness of their approach in referring image segmentation and visual question answering benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Table 6 indicates that the effectiveness of the proposed approach and the results of the paper may be vulnerable to noise, and the improvements could lack statistical significance. Specifically, as the budget limit increases from 1 to 5, the success rates are \"55.87, 54.16, 57.38, 55.27, 55.56,\" showing no clear trend of performance improvement with respect to budget limit. For instance, the fact that \"budget limit = 3\" outperforms \"budget limit = 2\" by 3% and \"budget limit = 4\" by 2% could simply reflect evaluation noise.\n- The generated \"clear instruction\" in Figure 3 (which correspond to authors' \"empirical experience\" approach) appears to directly answer the initial question, rather than serving as an actual \"instruction\" that enables the LLM to explore various objects, select the relevant one, and then provide the final answer. 
Thus, the authors' approach seems to undermine the purpose of an \"instruction.\" For instance, for the ambiguous prompt `white bear turned slightly`, a more suitable instruction with ambiguity removed would be, `identify the bear that is predominantly white with subtle color variations from white`." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Could you provide some further information to explain the difficulty of the ambiguous dataset? For example, how humans (might) perform on this dataset?\n2. How are the Chain-of-Thought and FuDD baselines implemented?\n3. An example of $\mathcal{A}_\text{ins}$ is provided in the appendix. If I understand the method correctly, it should be composed of the alternating appearance of reasoning and reflection. Are the sentences in gray given by reflection? If so, is there an example of the full trajectory?\n4. Analysis of Visual-O1's failure modes will help a lot, as discussed in the weaknesses section." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper proposes a prompting method that seamlessly applies to both high-intelligence and general-intelligence models within the same framework. 
This adaptability indicates that Visual-O1 is not limited to a specific model type and can scale across different levels of model intelligence, which enhances its utility for a wider range of applications and users.\n2. The authors presented extensive experimental results that include a variety of ablation studies and model comparisons. These results strengthen the credibility of their claims and provide thorough evidence of the framework's effectiveness. Notably, Visual-O1 not only improves performance on ambiguous instructions datasets but also enhances results on general datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes Visual-O1, a multimodal multi-turn Chain-of-Thought reasoning framework to deal with ambiguous natural language instructions in two typical vision-language tasks: referring image segmentation and visual question answering. Visual-O1 builds\ninstance-specific experience for high-intelligence models or creates general experience for any ambiguous instructions for general-intelligence models. Experiments show that Visual-O1 improves the performance on both the ambiguous datasets and the general datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Although Visual-O1 shows clear improvements over baselines, the absolute success rates and IoU scores (on the ambiguity dataset) are still not entirely satisfactory. This gap suggests that there are underlying limitations to the model’s ability to handle certain types of ambiguous instructions. The paper could be strengthened if the authors provided a more detailed examination of failure cases (e.g., identifying specific patterns in instructions that remain challenging to disambiguate). Without this analysis, Visual-O1's readiness for real-world applications remains in question.\n2. 
The paper tends to overemphasize its successes by frequently using terms like \"significant,\" even when sometimes the improvements are marginal (e.g., success rate gains of less than 3%). This could be misleading, especially considering that the ambiguous instructions dataset is not large enough to robustly support such claims. Furthermore, the results do not report averages over different random seeds, which raises questions about the stability and generalizability of the improvements. \n3. Minor issues or typos: The term $x_\\text{rsn}$ should be $x_\\text{rfl}$ in Eq. (6). The word \"botlle\" should be \"bottle\" in Figure 3." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Can you write the exact GPU settings for Table M?\n2. How does the author interpret that the performance saturates quickly with the budget?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper's proposed direction of Visual-O1 towards instructional QA is an interesting direction.\n2. By integrating the chain of thought into the pipeline, the model can improve the segmentation result of referring image segmentation, and the visual QA result. 
The performance gain on visual QA is larger than on RIS, and the performance gain on ambiguous cases is higher than on general cases, which indicates that multi-round, multimodal inference matters more for harder cases.\n3. The additional overhead of the visual chain of thought is limited for both training and evaluation stages." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes Visual-O1, a method similar to the language o1 model that increases the computation budget at test time. The work is proposed to highlight the potential of artificial intelligence to work like humans in the real world, especially when the instruction is ambiguous for the instructional model. This is achieved by multi-turn reasoning with chain of thought. \n\nThey have separated the multi-round reasoning process for high-intelligence models and general-intelligence models, where more rounds of interaction are used when the model has a higher intelligence level.\n\nThey have evaluated their results on visual QA and RIS in comparison with state-of-the-art models and show reasonable improvements on both segmentation and visual QA." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. To be completely honest, although the performance gain is statistically significant, the improvement is marginal, which suggests that the chain-of-thought helps only to a limited extent.\n2. The paper lacks examples of when we actually need visual CoT (like the cipher example in o1).\n3. As shown in Table 6, the budget curve is confusing: the performance fluctuates for Acc and BLEU and saturates with a budget of 3, which indicates that the designed algorithm is not sophisticated enough for scaling up.\n4. They propose an empirical solution for less-intelligent models." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose Visual-O1, a multi-modal multi-turn reasoning framework that enhances high-intelligent and general-intelligent models' understanding of ambiguous instructions in multi-modal tasks by simulating human reasoning." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024visualo,\ntitle={Visual-O1: Understanding Ambiguous Instructions via Multi-modal Multi-turn Chain-of-thoughts Reasoning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v9CDpLpjiE},\nnote={under review}\n}" }, "abstract": { "value": "As large-scale models evolve, language instructions are increasingly utilized in multi-modal tasks. Due to human language habits, these instructions often contain ambiguities in real-world scenarios, necessitating the integration of visual context or common sense for accurate interpretation. However, even highly intelligent large models exhibit significant performance limitations on ambiguous instructions, where weak reasoning abilities of disambiguation can lead to catastrophic errors. To address this issue, this paper proposes Visual-O1, a multi-modal multi-turn chain-of-thought reasoning framework. It simulates human multi-modal multi-turn reasoning, providing instantial experience for highly intelligent models or empirical experience for generally intelligent models to understand ambiguous instructions. Unlike traditional methods that require models to possess high intelligence to understand long texts or perform lengthy complex reasoning, our framework does not significantly increase computational overhead and is more general and effective, even for generally intelligent models. 
Experiments show that our method not only significantly enhances the performance of models of different intelligence levels on ambiguous instructions but also improves their performance on general datasets. Our work highlights the potential of artificial intelligence to work like humans in real-world scenarios with uncertainty and ambiguity. We will release our data and code." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Understanding ambiguous instructions", "large multimodal model", "chain-of-thoughts", "multimodal" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2c9e0c18094ed8c13dc367d37d60e112911e9aa5.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Visual-O1: Understanding Ambiguous Instructions via Multi-modal Multi-turn Chain-of-thoughts Reasoning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v9EjwMM55Y
UniMatch: Universal Matching from Atom to Task for Few-Shot Drug Discovery
main
Active
Few-shot molecular representation learning;maching learning
applications to physical sciences (physics, chemistry, biology, etc.)
5;6;8;8
2;4;3;3
2;3;4;3
2;3;3;3
3;4;2;4
6.75
3
3
2.75
3.25
0.272166
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. Do AUCPR and AUPRC refer to the same metric? If so, I suggest using a consistent abbreviation throughout the paper." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. UniMatch is the first method that integrates multiple levels of molecular structures, demonstrating outstanding performance across various tasks and significantly contributing to the field of drug discovery.\n2. This paper is clearly written and supported by well-designed figures, well done!" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose UniMatch, a dual matching framework that integrates hierarchical molecular matching with meta-learning, which significantly improves the performance and generalization in molecular property prediction tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. I suggest further investigating the interpretability of the model, particularly with respect to the multi-level representations. Some gradient-based methods (e.g. DeepLIFT) could be used to reveal the importance of these features." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "see weaknesses" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The concept of modeling molecular structures across multiple levels is promising.\n2. The manuscript is well-written and easy to read.\n3. Extensive experiments across diverse datasets demonstrate that the model consistently outperforms baselines." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents UniMatch, a framework designed to address data scarcity in few-shot drug discovery. It uses a dual matching approach: explicit hierarchical molecular matching, which captures information from the atomic level to full molecular structures, and implicit task-level matching via meta-learning to enable effective generalization across tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The related work section is somewhat lengthy. It might be more effective to move it to the end of the manuscript or condense it.\n2. While the rationale for modeling hierarchical structures is solid, I have concerns about the use of mean pooling. 
This approach may lead to a predominantly molecule-level representation, potentially losing crucial atomic and substructural details. What do the authors think?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. When evaluating against MoleculeNet tasks, is there a reason why the QM tasks were not considered and only the biophysical properties are included?\n2. Do you have any ideas as to whether more explicitly introducing an inductive bias to the different levels of attention could be helpful?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. The paper tackles a valuable domain-specific problem, with high societal value, with a method that can be reasonably adapted to different domains.\n2. The evaluation is extensive and clearly demonstrates the advantages of the method.\n3. The ablation study is complete and clearly demonstrates the contribution of every component in the method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This study introduces a new framework for molecular representation specifically aimed at multi-task few-shot learning. 
In this context, for each task there is a small support subset of molecules with known labels ($S_T$) and a query subset ($Q_T$) with unknown labels.\n\nThe contributions of the study can be broken down into the following components:\n\n1. Matching at different levels of the molecular graph. This is achieved by an attention mechanism where the keys are the molecular features of the $S_T$, the molecular representations of $Q_T$ are used as query, and as value, the labels of the $S_T$. This matching attention mechanism is not novel. The novelty resides in repeating this matching with the representations at each layer of the GNN (thus effectively capturing information at different structural levels of any given molecule) and concatenating the resulting values that are then passed through an MLP to predict the final target value.\n2. Meta-learning. An additional implicit matching mechanism is used to learn the similarity between different tasks and how to leverage the multi-scale representations from one task to another. This is achieved by an inner loop where the task-specific adaptation parameters are updated using the matrix that captures the similarity between tasks and an outer loop where this similarity matrix is updated.\n\nThe results obtained on benchmark datasets clearly show the benefits of this approach when compared to current state-of-the-art alternatives. The ablation study also clearly supports the contribution of all the components of the method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Major weaknesses\n---\n1. The technical description of the dual matching mechanism is hard to follow and it is a disservice to an otherwise solid work.\n\n\nMinor weaknesses\n---\n_(These are minor points that do not have a direct bearing on my evaluation of the paper, but I think would improve its quality)_\n\n1. 
In line 075-077 and Figure 1, one of the examples provided of the importance of multi-level representation of molecules, from atoms to the whole molecule, serves its purpose as an illustration, but it is not chemically correct. Hydrogen ions do not exist within the molecule; they would be referred to as hydrogen atoms. Though it is true that hydrogen ions in a solution determine its acidity, in a molecule the hydrogen atoms are not the determinants of acidity, but rather the atoms they are attached to. So in the first examples, the acidity of those molecules is not determined by the hydrogen atoms but by the chlorine and oxygen atoms, and their electronegativity (or ability to hold onto the hydrogen). Still the illustration is accurate, but the atom that should be highlighted is not the hydrogen. In the case of the right example, it is arguable that the acidity is not only caused by the oxygen, but rather the whole substructure of H-O-S.\n2. In line 409, when describing the FS-Mol dataset, it is described as \"[a benchmark] for macromolecules (i.e., proteins)\". This is not accurate: the benchmark measures the ability of small molecules (the ones that serve as input for this system) to bind to proteins. In other words, the tasks are whether the molecules can bind to specific proteins. This is an important distinction because, due to their size, modelling proteins with the approach presented in this paper would be computationally infeasible." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. **Generalization to different molecular structures**:\nIt would be beneficial to understand how well the method generalizes to diverse molecular structures, as this is crucial for practical applications. For example, testing under a scaffold split for support and query sets could provide insights. The paper mentions the method's failure on MUV. Is this due to a lack of structure generalization? The phrase \"severe distribution imbalance\" is not clearly explained. What does it refer to specifically? And why are other baselines able to overcome this issue?\n\n2. In Figure 2, the query molecule appears identical to one in the support set. In your implementation, can the query molecule overlap with the support set? \n\n3. The numbers in parentheses following the datasets in Table 1, which appear to represent the number of tasks, do not seem to be explained in the main text." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The writing is clear, with well-crafted figures that enhance understanding. It also offers a discussion of limitations and detailed supplementary explanations.\n\n2. The experiments are quite extensive. The research examines the universality of the method across different network structures. It also conducts evaluations on three benchmarks. The results are convincing." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work presents UniMatch for molecular property prediction in the few-shot learning scenario. It combines explicit hierarchical molecular matching with implicit task-level matching via meta-learning. 
The effectiveness of the few-shot learning and multi-task generalization is validated on the MoleculeNet, FS-Mol, and Meta-MolNet benchmarks. An ablation study and visualizations are provided to demonstrate the importance of hierarchical molecular matching and task-level matching." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Timeliness**:\n - The baselines and related work employed in the paper lack more recent works, especially those from the last one or two years. \n\n- The pretraining model, Pre-GNN (Hu et al., 2020), is relatively old in the context of molecular pretraining. Recent years have seen the emergence of far more powerful models. I encourage the authors to consider integrating more up-to-date pretraining models to showcase the significance and practical usage of their approach. \n\nA simple search turns up more recent related works on hierarchical molecular representation learning, such as \"UniCorn: A Unified Contrastive Learning Approach for Multi-view Molecular Representation Learning\" (ICML 2024) and \"Adapting Differential Molecular Representation with Hierarchical Prompts for Multi-label Property Prediction\" (Briefings in Bioinformatics, 2024).\n\n2. **Novelty**:\n\nThe hierarchical molecular representation extraction concept is commonly seen in the literature. Additionally, the paper employs standard attention and meta-learning methods. I'm concerned that this may not meet the novelty standards of top conferences." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce HierMatch, which performs matching across multiple levels, from atoms to tasks, to enhance molecular property predic- tions in few-shot learning scenarios" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024unimatch,\ntitle={UniMatch: Universal Matching from Atom to Task for Few-Shot Drug Discovery},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v9EjwMM55Y},\nnote={under review}\n}" }, "abstract": { "value": "Drug discovery is crucial for identifying candidate drugs for various diseases. However, its low success rate often results in a scarcity of annotations, posing a few-shot learning problem. Existing methods primarily focus on single-scale features, overlooking the hierarchical molecular structures that determine different molecular properties. To address these issues, we introduce Universal Matching Networks (UniMatch), a dual matching framework that integrates explicit hierarchical molecular matching with implicit task-level matching via meta-\nlearning, bridging multi-level molecular representations and task-level generalization. Specifically, our approach explicitly captures structural features across multiple levels—atoms, substructures, and molecules—via hierarchical pooling and matching, facilitating precise molecular representation and comparison. Additionally, we employ a meta-learning strategy for implicit task-level matching, allowing the model to capture shared patterns across tasks and quickly adapt to new ones. This unified matching framework ensures effective molecular alignment while leveraging shared meta-knowledge for fast adaptation. Our experimental results demonstrate that UniMatch outperforms state-of-the-art methods on the MoleculeNet and FS-Mol benchmarks, achieving improvements of 2.87% in AUROC and 6.52% in ∆AUPRC. 
UniMatch also shows excellent generalization ability on the Meta-MolNet benchmark" }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Few-shot molecular representation learning", "maching learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b31ca06b6fa52acecbb097324b12664fc7b1f717.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "UniMatch: Universal Matching from Atom to Task for Few-Shot Drug Discovery" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v9GwGQoOG5
Beyond Markov Assumption: Improving Sample Efficiency in MDPs by Historical Augmentation
main
Active
Deep reinforcement learning;Sample efficiency;State representation;Historical augmentation;Markov decision processes
reinforcement learning
3;5;5;6
4;4;3;3
2;2;3;3
2;2;3;3
3;3;2;3
4.75
3.5
2.5
2.5
2.75
-0.688247
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Beyond the simple examples, are there theoretical or numerical analyses showing sample complexity benefits with history augmentation?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Having an appropriate representation is important for RL agents. For problems with the Markov property, it is typical for an RL agent to consider only the current state, as the state is known to be sufficient for making optimal decisions. The proposed idea of augmenting with history to help the agent improve its representation learning is a very interesting idea and sounds promising based on the simple examples discussed.\n\n- The proposed method shows decent performance in numerical experiments, and the effectiveness of history augmentation is numerically illustrated in the ablation study." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "For deep reinforcement learning, the paper proposes to augment the state with compressed historical information to improve sample efficiency and performance. Some theoretical analysis provides optimality and convergence properties for the proposed state augmentation when certain conditions are satisfied. 
Numerical experiments show decent performance for the proposed method compared to some existing methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Although the paper provides some analysis of the optimality and convergence properties of the proposed method, these properties do not provide any insight into why the history augmentation helps representation learning. And there is no analysis on potential sample complexity reduction. Since it is assumed that the state is kept completely uncompressed in the encoder output, most of the results are expected. It is likely that simpler arguments may be available by arguing that the original state-dependent optimal policy is also a feasible policy with the augmentation.\n\n- In both the analysis of Section 4.1 and the algorithm design in Section 4.2, it is not clear whether non-trivial augmentation is needed. For example, the analysis seems to completely go through when $f(s_{k, t}) = s_t$ in Section 4.1, and nothing seems to prevent the HA3C algorithm from ignoring the history and ending up with $z^{s^{k, t}_\\alpha} = s_t$.\n\n- Although the ablation study shows better performance with the proposed history augmentation, the improvement does not seem significant given those largely overlapping confidence areas. Additional experiments, such as showing how performance varies with the length of the history augmentation, may provide some trends that could be more convincing.\n\n- Compared to existing methods, the proposed method seems to perform decently in the numerical experiments, but the improvement is not that significant, especially compared with TD7. Without additional analysis on the quality of the learned representation, it is not clear if the performance benefits indeed come from the proposed history augmentation." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- I do not quite understand the significance of the point demonstrated in Figure 5. Do I understand correctly? HA3C has more points in the red circle. This indicates that HA3C can reach the high-rewarding states more often (or more robustly). This information seems a little duplicative of the training curves demonstrated in Figure 6 or Table 1.\n- In L171, should the formula depend on $s_t$ but not $t$, since we are talking about predicting $s_{t+1}$ from $s_t$ but not $t$?\n- The citation format is incorrect." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The overall idea is novel yet sound. The proposed method is reasonable from an intuitive perspective." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper provides an interesting viewpoint on learning a policy that depends not only on the current state but also on the history in Markov decision processes. The key motivation is that, by conditioning on the history, the underlying pattern may be simpler than when conditioning only on the current state. The Fibonacci example nicely demonstrates this point. 
Later, the authors identified two challenges when we want to learn a policy depending on the history: 1) How to ensure we learn a simple pattern based on the history? 2) How to avoid overfitting to the high-dimensional historical data? The core solution proposed by this paper is to learn two encoders: one is used to compress the history into a low-dimensional embedding and the other serves as a latent world model to predict the embedding of the next state based on the action." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While the Fibonacci example is persuasive, I do not quite understand the example provided in Figure 1. Why is the causal function in Figure 1 (b) simpler than the causal function in Figure 1 (a)? Or is this just an illustrative figure for historical augmentation rather than a solid example? If this is the case, I would like to see a less artificial example to demonstrate this point (that depending on the history can lead to a simpler causal relationship).\n- After going through the algorithmic design of HA3C, I feel this algorithm is closer to “representation learning for RL” methods such as Dreamer. (By the way, Dreamer lacks a citation in L128.) HA3C essentially learns encoders that can compress the state (plus history) into a latent space and learns an additional latent world model (g) that can predict the dynamics in this latent space. From this perspective, HA3C should also be compared with other “representation learning for RL” baselines.\n- On the experimental results, the improvement of HA3C is marginal over the baseline algorithms on MuJoCo control tasks. From my point of view, this does not indicate that HA3C is ineffective. I think the benefit of HA3C relies on the structure of the problem: the causal relationship based on only the current state is complex, but the causal relationship based on the history can be simple. MuJoCo tasks may not be good environments for HA3C. 
I strongly suggest the authors find other tasks (or even artificial tasks) to demonstrate the effectiveness of HA3C." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. While the proposed historically augmented states can theoretically improve actor-critic methods, could you provide more evidence that demonstrates their applicability to other existing RL methods, aside from the currently used TD3?\n2. Are the inputs to HA3C images, given that the authors frequently mention high-dimensional historical trajectories?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The examples given in Section 3 and Appendix B help readers understand the motivation for using historical information.\n2. The paper is generally well-organized, making it easy for readers to follow.\n3. The proposed method is shown to have strong empirical performance on Mujoco and DMC." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a new algorithm to improve sample efficiency in reinforcement learning by integrating historically augmented states, and presents a series of experiments conducted to validate the effectiveness of this algorithm." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. To improve reproducibility, it would be beneficial to provide implementation details about the inputs and the parameters of the networks, such as the CNNs used in the encoder.\n2. I believe the paper would read more easily after reorganizing Appendix A and Appendix D, as abbreviations like SkD and MkD may be confusing for those unfamiliar with them.\n3. An in-depth analysis of the parameters k and N in the ablation study would greatly enhance readers' understanding of the algorithm. Additionally, I believe more analysis of the running time or the complexity would be helpful, for example, the impact of the parameters k and N on the running time.\n\nMinor comments\n1. Is there a mismatch between Figure 2 and the corresponding description “the dimensionality reduction is only performed on $s_{k−1,t−1}$”?\n2. Is there a typo in the results for TD7 on HalfCheetah shown in Table 1 (the reward of 156325)? Additionally, the reward of 45074 for TD3+OFE on Walker2d should also be checked.\n3. Formatting: \n a) When the authors or the publication are not part of the sentence, the citation should be in parentheses using ‘\\citep{}’.\n b) The format for referencing figures should be consistent in Section 3." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See above" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The perspective of causal relationships is interesting to understand why one should use historical information as additional inputs.\n2. The theoretical formulation is precise and comprehensive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper posits that, even when operating under the Markov assumption, it is beneficial for policy formulation to take into account not only current states but also historical information. This is based on the assumption that single-step state transitions might have complex causal relationships. Introducing historical data could potentially simplify these causal relationships, making them easier for neural networks to learn. On this basis, a novel Reinforcement Learning (RL) algorithm named HA3C is proposed, which has demonstrated superior performance over other advanced algorithms, such as TD3 and TD7, in five MuJoCo control tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The improvement of HA3C over baselines on the five Mujoco tasks appears to be subtle rather than significant.\n2. It would be beneficial to devise a demonstrative environment and characterize the causal relationships, thereby facilitating a clear comparison between the two options." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper investigates if augmenting the states with their historical information can simplify the complex causal relationships in MDPs and thus improve the sample efficiency of DRL." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024beyond,\ntitle={Beyond Markov Assumption: Improving Sample Efficiency in {MDP}s by Historical Augmentation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v9GwGQoOG5},\nnote={under review}\n}" }, "abstract": { "value": "Under the Markov assumption of Markov Decision Processes (MDPs), an optimal stationary policy does not need to consider history and is no worse than any non-stationary or history-dependent policy. Therefore, existing Deep Reinforcement Learning (DRL) algorithms usually model sequential decision-making as an MDP and then try to optimize a stationary policy by single-step state transitions. However, such optimization is often faced with sample inefficiency when the causal relationships of state transitions are complex. To address the above problem, this paper investigates if augmenting the states with their historical information can simplify the complex causal relationships in MDPs and thus improve the sample efficiency of DRL. First, we demonstrate that a complex causal relationship of single-step state transitions may be inferred by a simple causal function of the historically augmented states. Then, we propose a convolutional neural network architecture to learn the representation of the current state and its historical trajectory. The main idea of this representation learning is to compress the high-dimensional historical trajectories into a low-dimensional space. In this way, we can extract the simple causal relationships from historical information and avoid the overfitting caused by high-dimensional data. 
Finally, we formulate Historical Augmentation Aided Actor-Critic (HA3C) algorithm by adding the learned representations to the actor-critic method. The experiment on standard MDP tasks demonstrates that HA3C outperforms current state-of-the-art methods in terms of both sample efficiency and performance." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Deep reinforcement learning", "Sample efficiency", "State representation", "Historical augmentation", "Markov decision processes" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/24d4ad5d94749e668240e1baf265743293c8880d.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/5fe097330a967d43198f8c0b820f829281eb6344.zip" }, "title": { "value": "Beyond Markov Assumption: Improving Sample Efficiency in MDPs by Historical Augmentation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v9LjNopQ6W
Do Not Mimic My Voice: Teacher-Guided Unlearning for Zero-Shot Text-to-Speech
main
Active
zero-shot tts;machine unlearning;voice privacy
alignment, fairness, safety, privacy, and societal considerations
3;3;5;8
4;4;3;4
2;4;3;3
2;1;2;3
2;2;2;3
4.75
3.75
3
2
2.25
-0.070535
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See the above weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is rooted in a meaningful motivation. The advancement of zero-shot TTS systems raises ethical and privacy concerns and highlights potential misuse for fraudulent activities. This relevance to real-world issues underscores the importance of the research.\n2. The introduction of the novel metric spk-ZRF for evaluating speaker randomness in 'forget prompts' is a commendable aspect of the paper. This new metric contributes to the field by providing a quantifiable measure to assess the efficacy of the proposed unlearning mechanism.\n3. The clarity and quality of the figures presented in the document significantly aid in understanding the complex concepts and methodologies described, making the results and processes more accessible and comprehensible to the audience." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces TGU (Teacher-Guided Unlearning), a novel mechanism that enhances zero-shot text-to-speech (TTS) systems to address ethical and privacy concerns. 
To effectively assess the speaker randomness of the 'forget prompts' used within the system, the authors have developed a new metric named spk-ZRF. The experimental findings presented in the study validate the effectiveness of the proposed TGU framework, showcasing its potential to significantly improve the safety and privacy aspects of TTS technologies." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The \"fine-tune\" process of TGU is based on given forgotten prompts, which naturally raises a question regarding how this framework performs with other prompts from the same forgotten speaker. Moreover, it is unclear how the framework handles prompts from other retained speakers who have similar timbres to a forgotten one. The paper currently lacks a discussion on these aspects, which are crucial for comprehensively evaluating the robustness and versatility of the proposed TGU framework.\n2. The TGU mechanism resembles a distillation process, suggesting another potential baseline worth exploring. The approach could involve using the zero-shot TTS model to generate audio with the retained speaker style using text from the forgotten set, then employing the generated audio prompt to potentially address the issues mentioned in L234-236 about speaker confusion and privacy constraints.\n3. There are inaccuracies in the bold results presented in Table 1; the WER-F and WER-R recorded for the TGU approach are higher than those of the Exact Unlearning and Fine Tuning methods."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "There is no information about the demographics, compensation, or criteria for hiring human subjects for the subjective evaluation. Please add this information to the appendix. Also, please indicate whether you have obtained any IRB approval for these evaluations." }, "flag_for_ethics_review": { "value": [ "Yes, Responsible research practice (e.g., human subjects, data release)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. What is the intended use case of this unlearning scheme? \n\n2. How much data is needed to recover the unlearned speech in both the non-zero-shot setting (where the forget speaker is in the training set) and the zero-shot setting (where the forget speaker is not in the training set)?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "* **Originality**: This work is the first in the field to define voice unlearning for zero-shot TTS (ZS-TTS) and propose a simple solution with synthetic data to address it. It has also proposed a new metric, spk-ZRF, to examine the degree of reconstructability of the target speaker that is supposed to be unlearned. \n\n* **Quality**: The paper has compared several baselines with various metrics and demonstrated the effectiveness of this proposed method. 
It also has a nice visualization to showcase the effects of various methods for achieving this goal. \n\n* **Clarity**: The presentation of the paper is fairly clear, with all necessary symbols defined, which made it not difficult to follow.\n\n* **Significance**: It is the first paper in ZS-TTS to address the voice unlearning problem with a new metric to account for the reconstructability of the target speaker that can have a significant influence on future works in this field." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new problem to address for the zero-shot TTS model, which is to unlearn an existing voice. The authors provide a simple solution, which is to fine-tune the model on the original training set along with a newly defined target generated by the original teacher model without the target speaker as the reference. As a result, the fine-tuned model keeps the original performance on other speakers while generating random speaker identities for the selected speakers whose voices are supposed to be removed." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The major weakness of this work is its use case is unclear. The problem being solved is not well-motivated, especially in a non-zero-shot setting. Under what condition would the user (in this case, the model trainer or machine learning engineer) re-train the model with all the training data to remove some voices? What benefits does this method provide? It is not obvious to come up with a practical use case for the proposed method where the entire training data is needed to fine-tuned the model for 145k steps (more than a quarter of the 500k steps of the base model training), and the specific speaker that needs to be forgotten has to come with at least 5 minutes of their training audio. 
In fact, some recent work [1] in image generation has provided more interesting methods for zero-shot unlearning, where only a single image is needed for the model to stop generating the target facial identity. \n\nSince this method requires the entire training data of the original model, 5 minutes of audio for the forget speaker, and more than a quarter of the training iterations of the original model, the actual significance of this work is rather limited. For example, in the case where voice unlearning is to be used by a cloud service provider: if a zero-shot TTS service provider wants to prevent the model from cloning certain voices, they can easily use a speaker verification model to check whether the provided prompt speaker collides with a database of speaker embeddings whose voices are not supposed to be used and stop providing the service if the provided voice is in the forbidden speaker database. On the other hand, if it is for an open-source model, it is also possible to fine-tune the model on some other dataset for the model to regain the ability to clone the forgotten speaker's voice. From the paper, it is unclear how much data is needed to recover the forgotten speaker's voice, as the paper does not show it. The significance of this work could be higher if the proposed method required an enormous amount of data for the unlearned model to regain its ability to reproduce the voice. However, since no such experiment has been conducted, it is unknown whether this work would benefit the open-source community either. \n\nDue to these reasons, the significance of this paper is limited in its current state. It is suggested that the authors provide more motivation and practical use cases for the proposed method, and for the initial problem of unlearning certain voices to begin with, as it is a new problem proposed by this paper, and so far, the problem does not seem to be very meaningful, and the proposed solution makes it even less effective in practice. \n\n\n[1] Seo, J., Lee, S.
H., Lee, T. Y., Moon, S., & Park, G. M. (2024). Generative Unlearning for Any Identity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 9151-9161)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Perform a computational analysis detailing computational costs, training time requirements, comparison of computational overhead with\nbaseline approaches, and inference time and resource requirements." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The key strengths of the paper include:\nPrivacy Concerns: The rapid development of ZS-TTS raises ethical issues, particularly the unauthorized replication of individuals' voices, necessitating effective machine unlearning techniques to protect voice privacy.\nTGU Framework: Proposed TGU is the first machine unlearning framework specifically designed for ZS-TTS. 
It utilizes a pre-trained teacher model to guide the generation of speaker-randomized outputs, effectively helping the model to forget specific speaker identities while maintaining performance for others.\nRandomness in Outputs: Unlike traditional unlearning methods, TGU incorporates randomness in voice styles when the model encounters prompts related to forgotten speakers, which helps neutralize the model's responses to these prompts.\nEvaluation Metrics: The paper introduces a new evaluation metric, speaker-Zero Retrain Forgetting (spk-ZRF), to measure the effectiveness of the unlearning process. The results indicate that TGU not only limits the replication of forgotten voices but also preserves the quality of speech generation for remaining speakers." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a novel Teacher-Guided Unlearning (TGU) framework, which allows models to forget specific speaker identities while retaining the ability to synthesize speech for other speakers. This is particularly relevant given the potential misuse of ZS-TTS systems that can replicate voices without consent. The proposed method is built on top of VoiceBox (Le et al. 2024, from Meta) which has reached the SOTA as a ZS-TTS model." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Evaluation Metrics: The introduction of spk-ZRF is a valuable contribution, as it provides a quantitative measure of the unlearning effectiveness. However, the paper could benefit from a more detailed explanation of how this metric compares to existing metrics in the literature.\n\nRandomness Implementation: The paper emphasizes the importance of randomness in unlearning, yet it does not sufficiently address potential trade-offs between randomness and speech quality. 
The balance between generating random outputs for forget speakers and maintaining high fidelity for others needs further exploration.\n\nComplexity of Implementation: The introduction of randomness may complicate the training process and could lead to inconsistent performance across different applications. A clearer discussion on how to balance randomness with quality would be beneficial.\n\nLimited Scope of Forgetting: The focus on only preventing replication of specific voices may overlook broader implications, such as how unlearning affects overall model performance or its ability to generalize across different tasks. A more holistic approach could provide deeper insights into the trade-offs involved.\n\nDataset size: The dataset used is relatively small and may not be representative of practical scenarios." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Have the authors tried their approach on open-source models?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "It seems the authors are the first to work on preventing voice cloning using machine unlearning."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an unlearning framework for zero-shot text-to-speech (TTS) to prevent the model from replicating the speech characteristics of specific speakers.\n1. The authors appear to misunderstand the concept of zero-shot TTS. In Figure 1, it seems that the model functions when the speaker is included in the training set. However, in true zero-shot TTS, speakers should not be present in the training data. This discrepancy undermines the authors’ claim that the framework is suitable for zero-shot TTS.\n2. For practical relevance, it would be beneficial to demonstrate the results across multiple models, preferably open-source ones. However, the authors only tested their framework on Voicebox, which is not open-source.\n3. The authors acknowledge that model performance degrades significantly as the number of “forgotten” speakers increases, which raises concerns about the practicality of this approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The models do not work well as the number of forgotten speakers increases. 2. The only model they used is not open-source." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Enhancing voice privacy through machine unlearning in zero-shot text-to-speech" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024do,\ntitle={Do Not Mimic My Voice: Teacher-Guided Unlearning for Zero-Shot Text-to-Speech},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v9LjNopQ6W},\nnote={under review}\n}" }, "abstract": { "value": "The rapid advancement of Zero-Shot Text-to-Speech (ZS-TTS) technology has enabled high-fidelity voice synthesis from minimal audio cues, raising significant privacy and ethical concerns.
In particular, the ability to replicate an individual’s voice without consent poses risks, highlighting the need for machine unlearning techniques to protect voice privacy. In this paper, we introduce the first machine unlearning framework for ZS-TTS, Teacher-Guided Unlearning (TGU), designed to ensure that the model forgets designated speaker identities while retaining its ability to generate accurate speech for other speakers. Unlike conventional unlearning methods, TGU leverages randomness to prevent consistent replication of forget speakers' voices, ensuring unlearned identities remain untraceable. Additionally, we propose a new evaluation metric, speaker-Zero Retrain Forgetting (spk-ZRF), which measures the model’s effectiveness in preventing the reproduction of forgotten voices. The experiments conducted on the state-of-the-art model demonstrate that TGU prevents the model from replicating forget speakers' voices while maintaining high quality for other speakers. The demo is available at https://speechunlearn.github.io/" }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "zero-shot tts", "machine unlearning", "voice privacy" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/3e066dba7c75d6123d781cf147a036991d5b1d05.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/3cd7137d96e08a1c979eac0960ad44a09ee0adc2.zip" }, "title": { "value": "Do Not Mimic My Voice: Teacher-Guided Unlearning for Zero-Shot Text-to-Speech" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
v9fQfQ85oG
Multi-objective Multi-agent Reinforcement Learning with Pareto-stationary Convergence
main
Active
Multi-objective;multi-agent reinforcement learning;Pareto-stationary convergence
reinforcement learning
3;5;5;6
4;2;3;2
2;3;2;3
2;3;3;2
3;2;1;3
4.75
2.75
2.5
2.5
2.25
-0.899229
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "See in weakness." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. By using a graph-truncated Q-function that only relies on local state-action information, the algorithm avoids the exponential growth of the global state-action space.\n2. The algorithm is mathematically proven to converge to a Pareto-stationary solution at a rate of O(1/T)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel algorithm for multi-objective multi-agent reinforcement learning (MOMARL) via graph-truncated Q-function approximation method, which only requires local state-action information from each agent's neighborhood rather than global data. Additionally, they introduce the concept of an action-averaged Q-function, reducing the dimensionality further to local states and actions, and establish an equivalence between the graph-truncated Q-function and action-averaged Q-function for policy gradient approximation. They develop a distributed, scalable algorithm with linear function approximation and prove it converges to a Pareto-stationary solution at a rate of O(1/T)." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The writing is rushed and poor.\n2. The main conclusions and theoretical results are introduced relatively late, with the first major lemma only appearing on page 4 in a 10-page paper. This delay may reduce the immediate impact and engagement for readers, as foundational results that frame the work are postponed.\n3. Many equations are labeled even though they are referenced only once. This creates unnecessary clutter and can impede readability. Reducing labels to only frequently referenced equations would improve flow and make the reading experience smoother.\n4. The expression of the goal in multi-objective multi-agent reinforcement learning (MOMARL) is unclear, particularly in maximizing a vector with potentially correlated values. \n5. Definition 2 (ε-Pareto-stationarity) lacks citations to relevant articles. Providing references to foundational works on Pareto-stationarity, along with an explanation, would help readers connect the definition to established literature.\n6. Assumptions 1 and 2 are verbose, which affects conciseness and clarity.\n7. Lemma 1 lacks a proof or reference to an appendix section, as well as citations of relevant works. The same absence applies to Lemma 3.\n8. The start of Section 3 does not provide sufficient motivation for addressing the algorithm’s reliance on the global state-action space." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Is Assumption 1 connected to the ergodicity assumption typically applied in the policy gradient-type analysis?\n\n2. Where is the critic approximation error formally defined in the main text of the paper?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Good presentation and write-up." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a linear actor-critic-based algorithm for multi-objective multi-agent reinforcement learning. The authors claim to achieve a $1/T$-Pareto stationarity by allowing the agents to collect their neighbors' information in $T$ iterations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Minor Comments**\n\n1. The notations $\\mathcal{S}\\_{-\\mathcal{N}}$ and $\\mathcal{A}\\_{-\\mathcal{N}}$ are not properly defined/introduced anywhere.\n\n2. The concept of exponential decay property is quite old (at least in single-objective MARL setups). The authors should give proper citations to it.\n\n**Major Comments**\n\n3. Although the authors claim that the agents need to communicate only with their neighbors, I see that some parts of the algorithms are centralized, e.g., the updates of the policy parameter $\\boldsymbol{\\theta}$ and the Pareto parameter $\\boldsymbol{\\lambda}$. This should be highlighted in the introduction/contribution part of the paper.\n\n4. How is the optimization $(25)$ solved that computes $\\hat{\\boldsymbol{\\lambda}}$? 
Is it solved by a single agent in a centralized manner or is it done in a decentralized fashion? This should be clarified in the paper. Since nothing is mentioned, I will assume this has to be done in a centralized fashion. \n\n5. If $(25)$ is indeed solved in a centralized fashion, does it not violate the main motivation of the paper i.e., the agents only need to know their neighbours' information and not everyone else's?\n\n6. Theorem 2 shows that the gradient error bound is $\\mathcal{O}\\left(\\frac{1}{T}+\\frac{N}{B}+\\gamma^{2H}\\right)$ where $T$, $B$, $H$ are explained in the paper. However, both in the introduction and in the abstract, the authors present the error as $\\mathcal{O}\\left(\\frac{1}{T}\\right)$ and ignore other factors. Why so? In a sample-based learning setup, the sample complexity (determined by $T, B, H$) is more important than the iteration complexity (determined solely by $T$). \n\n7. A proper communication complexity analysis is missing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Can you explain more about the reward formulation mentioned in Section 5? I understand that the vector size for the reward is shaped by the number of objectives, but how is the reward value itself selected? \n- In Figure 3, how many runs were used to generate the reward curve (b) and policy gradient curve (c)? 
Is it possible to show the standard deviation across runs?\n- Can you elaborate more on the simulation experiment you used? The Zhou et al. 2023 paper mentions the simulation they use in the appendix, but not a detailed description of the simulator itself. \n- Why did you select 3-3-2 vs 5-5-5-3 graph structures? How does 5-5-5-3 challenge the algorithm beyond an increase in computational complexity?\n- How does this method extend to sparse graphs with lower K values? Are there any assumptions on the level of connectivity for the graph network?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The authors were thorough with their proofs for each algorithm component. The timing tests across algorithms effectively demonstrated the algorithm's computational efficiency. The contribution of this work is clearly stated." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on the problem of multi-agent multi-objective optimization through multi-agent reinforcement learning. Specifically, the authors seek to find a scalable methodology to find a set of multi-agent Pareto-stationary solutions (e.g. solutions where no objective can be unilaterally improved without sacrificing another). The authors propose a graph-truncated Q-function approximator and an action-averaged Q-function for policy gradient approximation. They use a linear approximator for the action-averaged Q-function, thereby reducing the dimensionality of the state. Through proofs and experiments, they demonstrate the convergence properties of their algorithm and improved computational efficiency." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "In the robot path planning experiment section, the experimental section could be strengthened with additional comparisons against other algorithms in the literature. For example, MO-MIX [1], MOMAPPO [2], and PACDCG [3] could be interesting points to compare. Similarly, there could have been more references to existing MOMARL work in the introduction beyond existing work in single agent MORL. It also would be helpful to add further context about the environment simulator and the associated objective function parameters selected, and reasoning behind the graph structure.\n\n[1] MO-MIX: Multi-Objective Multi-Agent Cooperative Decision-Making with Deep Reinforcement Learning, Hu et al. 2023\n[2] MOMALand: A Set of Benchmarks for Multi-Objective Multi-Agent Reinforcement Learning, Felten et al. 2024. \n[3] Pareto Actor-Critic for Equilibrium Selection in Multi-Agent Reinforcement Learning, Christianos et al. 2024" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "-\tSince I am not an expert in the MOMARL literature, I am wondering whether there exist other approaches to MOMARL problems (apart from the one mentioned in lines 62 to 66)?\n-\tHow crucial is the assumption of softmax policies for the paper? 
How would other policies work?\n-\tDefinition 2, Lemma 1: Are there any assumptions on $J$ or $r$ to ensure that the gradient exists?\n-\tWhy is the MORL algorithm of (Zhou et al., 2024) just used for the larger network? Wouldn’t it make sense to also include it as a third option in the first network?\n-\tThe MORL algorithm of (Zhou et al., 2024) seems to perform worse than the initialization in Figure 3 (b) and therefore appears to learn nothing useful. Is this behavior justifiable by the “approximation of the global Q-function” (line 526)?\n-\tHow well does the proposed approach scale with the number of agents $N$ and the size of the network?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "-\tThe introduction is well-written. It provides intuitive examples, gives an overview of the existing literature and states open problems. Finally, the authors clearly pose their research question and their contributions.\n-\tThe paper is structured convincingly. Although the mathematical notations are quite extensive, readers can follow the basic ideas because the authors frequently explain their next steps and theoretical findings.\n-\tThe algorithmic findings are complemented by extensive theoretical results. Furthermore, the proposed approach seems to outperform an existing method on the given robot path planning example." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new solution concept to the challenging class of multi-objective multi-agent problems. The authors combine graph-truncated Q-functions and action-averaged Q-functions with a linear function approximation to obtain their learning algorithm. The derivations are based on theoretical results. 
The paper concludes with an illustrative robot path planning problem to demonstrate the capabilities of the proposed approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-\tThe paper focuses on finding Pareto-stationary solutions, as the authors state in line 161, instead of Pareto-optimal solutions. While I understand the authors’ reasoning and their arguments for focusing on stationarity, it still is a rather severe limitation. Is there any possibility to extend the method to Pareto optimality (other than assuming a convex problem)?\n-\tLemma 2 appears to be almost identical to Lemma 3 in (Qu et al., 2020a) and therefore provides no new significant insights. Also, the appendix section title “A.1 The Detailed Proof of Lemma 2” suggests a detailed proof but the proof essentially only refers to the existing result of (Qu et al., 2020a) which makes the section title misleading in my opinion.\n-\tAs the authors themselves state, Lemma 3 is also similar to results in (Qu et al., 2020a). This raises the question whether there are any substantial novelties in Subsection 3.1. The authors should explain more precisely how their results relate to those in (Qu et al., 2020a).\n-\tThe experiment section is somewhat limited because there is just one robot example on two rather small networks with few agents. It would be helpful if the authors could include further examples and elaborate on the scalability of their algorithm with respect to the number of agents and the size of the network.\n\nMinor comments:\n\n-\tMaybe remove the mathematical expressions from the abstract, e.g., just write “state-action” and skip “(s, a)” in the abstract\n-\tLine 74: introduce the neighborhood state-action notation $(s_{\\mathcal N}, a_{\\mathcal N})$ before using the mathematical expression. 
I would suggest to just move the mathematical terms like $(s_{\\mathcal N}, a_{\\mathcal N})$ to Section 2.\n-\tPage 2, footnote: Personally, I would remove the footnote and include the information either in the main text or defer it to the appendix.\n-\tLine 98 and following: Are there any restrictions or assumptions on the local state and action space. For example, are they finite or continuous?\n-\tLine 116: If I understand correctly, $s_0$ refers to the initial state of the whole system and not just the state of one agent. Could you emphasize this by adding something like $s_0 \\in \\mathbb{S}$?\n-\tLine 131: Maybe I missed it, but are $S_i$ and $A_i$ assumed to be finite?\n-\tFigure 1: The font size is very small which makes it hard to read the figure. This is especially unfortunate since the figure seems to visualize many of the key ideas in the paper and provides an important overview.\n-\tIn general, the authors could mention the assumptions underlying each theoretical result, for example: “Lemma 2: Under Assumption 2, the MOMARL problem satisfies …”\n-\tTo my understanding, Theorem 1 is immediately obtained by just plugging equality (18) into inequality (15). I am not sure if this requires a detailed proof in Appendix A.4.\n-\tThere are some typos in the paper, such as “peoposed” (line 247) and “shwn” (line 483)\n-\tLines 378-379: Isn’t the sentence “In order to analyze the Pareto-stationary convergence of Algorithm 1.” missing a second sentence part?" 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024multiobjective,\ntitle={Multi-objective Multi-agent Reinforcement Learning with Pareto-stationary Convergence},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=v9fQfQ85oG},\nnote={under review}\n}" }, "abstract": { "value": "Multi-objective multi-agent reinforcement learning (MOMARL) problems frequently arise in real world applications (e.g., path planning for swarm robots) or have not been explored well. To find Pareto-optimum is NP-hard, and thus some multi-objective algorithms have emerged recently to provide Pareto-stationary solution centrally, managed by a single agent. Yet, they cannot deal with MOMARL problem, as the dimension of global state-action $(\\boldsymbol{s},\\boldsymbol{a})$ grows exponentially with the number of spatially distributed agents. To tackle this issue, we design a novel graph-truncated $Q$-function approximation method for each agent $i$, which does not require the global state-action $(\\boldsymbol{s},\\boldsymbol{a})$ but only the neighborhood state-action $(s\\_{\\mathcal{N}^{\\kappa}\\_{i}},a\\_{\\mathcal{N}^{\\kappa}\\_{i}})$ of its $\\kappa$-hop neighbors. To further reduce the dimension to state-action $(s\\_{\\mathcal{N}^{\\kappa}\\_{i}},a\\_{i})$ with only local action, we further develop a concept of action-averaged $Q$-function and establish the equivalence between using graph-truncated $Q$-function and action-averaged $Q$-function for policy gradient approximation. Accordingly, we develop a distributed scalable algorithm with linear function approximation and we prove that it successfully converges Pareto-stationary solution at rate $\\mathcal{O}(1/T)$ that is inversely proportional to time domain $T$. 
Finally, we run simulations in a robot path planning environment and show our algorithm converges to greater multi-objective values as compared to the latest MORL algorithm, and performs close to the central optimum with much shorter running time." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multi-objective", "multi-agent reinforcement learning", "Pareto-stationary convergence" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/497f2e0445646bf1bb05a3ff7043ddfb256bb03b.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/9f58e2224b10ed5d6840d01a435ec737ae40ab8c.zip" }, "title": { "value": "Multi-objective Multi-agent Reinforcement Learning with Pareto-stationary Convergence" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vAoyZWyDEc
Approximating Optima of Nonconvex Functions
main
Withdraw
Computablity of Approximate Optima;Non-convex functions
optimization
K Lakshmanan
~K_Lakshmanan1
1;3;3;3
3;4;2;5
3;3;2;1
1;1;2;1
1;1;1;1
2.5
3.5
2.25
1.25
1
0.258199
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please could the authors address the points in the Weaknesses section?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The paper attempts to study lower bounds for nonconvex optimization. It claims that its contributions are to give negative answers to the following questions about any continuous nonconvex function $f$. \n\n(1) Can a Turing machine with zero-order function access to $f$ compute its global optimum? \n\n(2) Can a Turing machine with ZO access to $f$ compute $x^\\star$ (the global minimizer of $f$)? \n\n(3) Can a Turing machine with ZO access to $f$ compute an $\\varepsilon$-approximation to $f(x^\\star)$? \n\nThe paper claims that these contributions are new." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper attempts to study lower bounds for nonconvex optimization. It claims that its contributions are to give negative answers to the following questions about any continuous nonconvex function $f$. \n\n(1) Can a Turing machine with zero-order function access to $f$ compute its global optimum? \n\n(2) Can a Turing machine with ZO access to $f$ compute $x^\\star$ (the global minimizer of $f$)? \n\n(3) Can a Turing machine with ZO access to $f$ compute an $\\varepsilon$-approximation to $f(x^\\star)$? \n\nThe paper claims that these contributions are new." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "From my understanding of the paper, there seems to be a lack of soundness in its claims, as I elaborate below. \n\n---- \nThe authors aim to establish that approximating the global minimum of a nonconvex function is non-computable, building this conclusion through Lemmas 2.3, 2.4, and 2.5, which feed into Theorem 2.6. 
\n\nHowever, the problem to me appears to be in **Problem 1.8**, which asks whether a Turing machine with oracle access to a continuous, nonconvex function $f$ can compute the exact global minimum. This question is already known to be infeasible (see Nemirovskii and Yudin, 1983). Thus, the answer to **Problem 1.8** is immediately “no,” without requiring additional undecidability arguments.\n\nThe authors attempt to support their claim through a sequence of undecidability reductions, but it seems to me that those also have a logical flaw:\n\n- **Lemma 2.3** asserts that deciding if a function is identically zero is undecidable. This is fine, because such a decision would require querying *every* possible point in the function’s domain.\n \n- **Lemma 2.4** leverages Lemma 2.3 for a *special* case: it shows that if a positive function $f$ has a global minimum of zero, then determining whether an $\\varepsilon$-approximation to this minimum is possible is equivalent to determining if $f$ itself is everywhere zero --- an undecidable problem by Lemma 2.3. This logic holds *within the specific setup*.\n\nThe authors then state **Theorem 2.6**, claiming that no algorithm can compute an $\\varepsilon$-approximation of the global minimum for any continuous nonconvex function given oracle access to $f$. Here, they make a **critical leap in logic**: they extrapolate, without justification (and, in fact, incorrectly) the undecidability result from Lemma 2.4’s special case (a positive function with a minimum of zero) to all nonconvex functions. \n\nThis generalization is the misstep—there’s no basis for assuming that all nonconvex functions would exhibit the same undecidability. Many nonconvex functions do allow for $\\varepsilon$-approximations of global minima. \n\nImportantly, these reductions seem, to me, unnecessary. 
Since **Problem 1.8** is already infeasible, the undecidability arguments are redundant (and, further, seem to have logical flaws).\n\n----\n\nThe authors also miss all the literature on lower bounds around nonconvex problems. I strongly recommend starting with the paper of Carmon, Duchi, Hinder, and Sidford (which by no means is the start of the study of this question, but is, in my view, a nicely written paper with good pointers to other literature that provides some major recent results and techniques)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "N.A." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "I cannot judge the strengths of the paper; it needs to be more precise and prepared with more care before it can be reviewed carefully." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper seeks to find properties of functions such that their optima and optimizers are not computable in the sense the authors describe." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are lots of typos:\n- line 022 (two dots to end the sentence).\n- \"This is much stronger than saying they it is intractable.\"\n- \"Then we give a simple algorithm which converges to the global optima when this is known\" -> what is \"this\"?\n- \"We give an example of global optima property-basin of attraction. And if this is known,\" -> what is \"this\"?\n- etc...\n\nThe authors should not give the reviewers the impression that the paper was written in haste; otherwise, we will not take time to read and assess it.\n\nAdvice: Put as much context as possible. For instance:\n- \"We also see that if the Lipschitz constant is not known, the (approximate) optima is not computable.\" The \"see\" is not an argument; the authors should refer to proof or a paper proving that.\n- Different from \"computable real-functions setting\" -> The fact that it is different is irrelevant; we would like to know why this new setting would be more relevant or better quantify a set of functions encountered in practice, etc... \n- \"We now start with the definition of the standard Turing machine here.\" -> Add some context: Why does it matter, how is this indeed structural enough to define the notion of computable, why did the authors choose this notion of computable, and how does this notion translate to something useful for the ML/AI community?\n- etc.." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "I would like the authors' opinion on the following:\n\nWhy is computability an important notion to study in the first place, rather than restricting ourselves to the question of tractability? I understand why Yurii Nesterov discusses algorithms that work using huge grids in his textbook, but I'm not sure that the optimization community nowadays should necessarily care about problems that can be solved with such expensive algorithms. On the other hand, tractability is much closer to which optimization problems can be solved in practice." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper deals with computability issues in optimizing non-convex functions. This problem can be relevant to many real-world applications of optimization and is certainly of interest to the ICLR community." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents non-computability results for non-convex functions in the context of Turing machines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Presentation: The presentation is problematic. There are various linguistic issues and whole phrases and explanations that are totally unreadable to me. The paper looks like it was finished in a rush and is certainly far below the ICLR publication standards presentation-wise. The discussion of related literature is poor. 
From a quick look at the cited paper by Lee et al., I can see much related previous work, which is not discussed at all in the current work.\n\nExamples of linguistic issues: \n\n1. All sentences starting with \"And\".\n2. \"We note that we consider an oracle setting, where the function values are given by an oracle. This is\ndifferent from the computablity of optima computable real functions studied for example in (Pour-El\n& Richards, 1989)\".\n3. \"We show more in this paper,\nthat this set S is not computable. This is much stronger than saying they it is intractable.\"\n4. \"We show that Lipschitz constant is an example of\na property a function must satisfy if its approximate optima is computable.\"\n5. \"We give an example of global optima property-basin of attraction. And if this is known, we\ngive an algorithm which converges to the global optima.\"\n\nThese are just a small sample from the first page. The paper certainly needs a lot of work to become of publication quality.\n\nContribution: I do not actually grasp the novelty of the contribution of this work. It seems to me that the main result is trivial. More precisely, Lemma 2.4 states that there is no algorithm to decide whether a point is an $\\epsilon$-approximation of the global optimum of a non-convex function if no other information except an oracle that computes the function values is given. Why is this surprising? Lemma 2.5 is essentially the same result as Lemma 2.4. Then, the main result (Theorem 2.5) states that there is no algorithm to compute an approximation of the global optimum. This looks trivial to me as the most expensive algorithm one could sketch using only access to function values, is to create a huge grid which will result in some point sufficiently close to the global optimum and then evaluate the function at all these points. This is discussed in the first chapter of the cited textbook of Yurii Nesterov. 
However, without additional information to relate function values and distances of points to the optimum (like Lipschitz continuity with known Lipschitz constant as the authors mention, btw this is Theorem 1.1.1 and not 1.1.2), there is no way to show that the point with the smallest value is indeed close enough to the optimum. I don't see what is groundbreaking here." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "-" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The results seem correct, and the proofs are pretty easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the computability of global minima of continuous nonconvex functions.\n\nIt is shown that global minima are not computable. Afterwards, a certain condition is suggested under which they are." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper is very clearly unsuitable for acceptance in my opinion, for several reasons:\n\n## Writing quality\n\nThe first thing that strikes the reader is that this paper is poorly written. For example, the opening paragraph already has several typos (\". 
.\") and grammatical errors (\"Global minima is...\" - minima is plural, \"by extreme value theorem\" without \"the\" etc.) which appear everywhere in the paper.\n\nYet I do not mean this just in terms of English and typos. The literature on nonconvex optimization and computability aspects thereof is very rich, and nearly no references and comparisons are given. Even the few references that are given are treated very oddly. For example, in the second paragraph it is written that the considered setting \"is different from... (and) \"more general than...\" - why? how so? what do the other papers even consider or study? This is never explained.\n\nThe section \"Real-worlds applications\" just goes over some of the most generic problems in which optimization is even applied, such as \"supervised learning\". It is not clear how this relates at all to the results in this paper.\n\nAnd so on...\n\n## Novelty?\n\nRelated to the fact that prior work is not really discussed, I would argue that the main results are nearly trivial, and are \"folklore\".\n\nThe main lower bound is proved by showing that a nonconvex function can hide function jumps in arbitrarily small neighborhoods, since no Lipschitz bound is assumed, which is trivial in my opinion. I strongly believe optimization experts are well aware of this, and multiple variations along these lines are stated throughout the literature.\n\n\nTo sum up, this paper is clearly subpar for acceptance to ICLR in my opinion." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "studies the computability of approximating optima for non-convex functions" }, "_bibtex": { "value": "@misc{\nlakshmanan2024approximating,\ntitle={Approximating Optima of Nonconvex Functions},\nauthor={K Lakshmanan},\nyear={2024},\nurl={https://openreview.net/forum?id=vAoyZWyDEc}\n}" }, "abstract": { "value": "We study the computability of approximating optima of non-convex functions. 
We give a simple proof to show that the problem of finding the optimal value (and optimal point) or its approximation is not even computable in the oracle setting. We also give a property a function has to satisfy if its global optima can be approximated. Next we give an example of such a global property we call basin of attraction. Then we give a simple algorithm which converges to the global optima when this is known. Finally, we give some numerical results." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~K_Lakshmanan1" ] }, "authors": { "value": [ "K Lakshmanan" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Computablity of Approximate Optima", "Non-convex functions" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": { "value": "lakshmanan|approximating_optima_of_nonconvex_functions" }, "pdf": { "value": "/pdf/8cc3563954280850a1244c30086102e6161a8c2e.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Approximating Optima of Nonconvex Functions" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vAuodZOQEZ
Physics-Informed Neural Predictor
main
Active
Fluid dynamics;Spatiotemporal prediction;Physics-informed learning
applications to physical sciences (physics, chemistry, biology, etc.)
3;3;5;8
4;4;5;3
2;2;3;4
2;2;2;3
3;1;2;3
4.75
4
2.75
2.25
2.25
-0.518321
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- For the sole real dataset, SEVIR, raises several concerns:\n - It is unclear why the MSE metric for the proposed method underperforms compared to most other baselines. A detailed analysis and discussion of this discrepancy would be beneficial.\n - Given the potential phase transition of water in the SEVIR dataset, it is questionable whether the fluid satisfies the incompressible property. The authors are encouraged to discuss the applicability of the equation loss function to real datasets with such characteristics.\n - I highly recommend the authors compare with the traditional numerical method pySTEPS[1], which predicts future fluid fields by estimating potential velocity fields and extrapolating optical flow. This method has the ability to accurately estimate extreme values.\n- The paper does not specify the fluid's Reynolds number, which is crucial for understanding the flow characteristics. The fluid in the Fluid 2D dataset appears to represent simple laminar flows. The authors are recommended to provide experiments with more turbulent datasets to enhance the paper's practical value.\n- The paper lacks information on how the baselines were trained, and some baselines exhibit abnormal flickering. 
The authors should recheck the training process for all baselines or explain these anomalies.\n- The visualization of velocity in the paper is not as intuitive as it could be. Utilizing tools such as *pyplot.quiver* in *matplotlib* to depict the velocity field based on flow field observations as supplementary material is suggested.\n- The grammar and expression throughout the paper should be improved. A thorough review and enhancement are advised.\n\n\n[1] https://pysteps.github.io/" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper combines equation loss with operator learning through the Navier-Stokes equation, integrating physics-driven and data-driven approaches by learning unobserved physical quantities.\n- An innovative point of the proposed method is incorporating the equation loss commonly used in PINN, including incompressible Navier-Stokes equations, into predicting potential physical quantities of the future flow field.\n- The figures provided in the paper can accurately reflect the characteristics of the model.\n- The ablation study accurately explains the role played by each module." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes the PINP method for fluid prediction. The method predicts future fluid fields by learning velocity and pressure simultaneously from partial observations. The authors employ a physical inference neural network to predict several physical quantities of the flow field at a particular moment. For the next timestep, they utilize a discrete PDE predictor and a correction network to generate the flow field. The training of the model is refined through the application of MSE loss, equation loss, and a temporal constraint loss. 
The proposed method shows advantages compared to several baselines on both synthetic and real-world data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper has some weaknesses, including:\n- Some datasets are too simple or may theoretically not match the methods to some extent.\n - The fluid motion in the Flow 2D dataset is relatively slow, and the dynamics are not as complex as those encountered in more advanced fluid dynamics scenarios.\n - The fluids in real datasets may not strictly adhere to the incompressible Navier-Stokes equations. Consequently, the physical constraints proposed in this paper might encounter limitations when applied to more diverse or complex fluid systems.\n- The improvements observed in specific datasets, such as Smoke3D, are modest. For example, the MAE and MSE metrics of Smoke 3D only demonstrate a marginal enhancement.\n- The paper's grammar and expression could be improved. In some instances, the clarity of the writing detracts from the overall quality of the paper, potentially hindering the reader's understanding of the research.\n\nSpecific issues and possible improvements will be discussed in the next section." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please check the weaknesses." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The proposed method enhances the simulation capacity of PDEs, especially for long-term prediction.\n2. The proposed method is tested on multiple benchmarks across different scenarios, especially on the real-world measured dataset.\n3. The authors vividly demonstrated the simulation process through videos." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work presents a new physics-informed learning approach that enables the prediction of coupled physical states, under a partially observed data environment. It applies the discretization of physical equations, integrating them into the model architecture and loss function. The superior performance is shown on four benchmarks, including real-world data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The reviewer believes that there are significant issues with the introduction of the method. The method is complex and lacks an overview of the proposed method. The reviewer understands that the proposed method first outputs $p(t')$, $u(t')$, and $c(t')$ through a physical inference network, where these three outputs are constrained by physical and temporal conditions. Then $\\hat{c}'(t+1)$ is computed through discretized PDEs, after which $\\hat{c}'(t+1)$ and $c(t)$ are fed into another network for prediction, while simultaneously incorporating a data loss with the label. If this understanding is correct, the reviewer questions the novelty of this work, as it merely sandwiches numerical FDM calculations between neural networks. Moreover, the motivation for this approach remains unclear.\n2. In line 039, the statement is inaccurate and needs a reference to the literature. 
The reviewer points out that velocity fields can be observed through techniques such as PIV and PTV.\n3. In line 069, what does the past observable data mean? The authors should introduce more about this setting.\n4. In line 102, \"often difficult to obtain in practical applications\". The reviewer considers the statement inappropriate, as initial conditions are typically obtainable when solving PDEs.\n5. In Table 1, the reviewer appears to have misinterpreted the meaning of the three categories in this table. If 'velocity' refers to velocity fields, then this table is not appropriate, as FNO (Fourier Neural Operator) is equally capable of predicting both velocity and pressure fields.\n6. In Eqn. 3, why does this equation still integrate from t to t+1? A more detailed derivation process is needed to help readers understand. This is crucial for comprehending the motivation behind the problem. What is the meaning of $\\Delta t$?\n7. In Sec. 3.4, the introduction is oversimplified, merely stating which networks are used. This raises two concerns for the reviewer: first, why was U-Net chosen over more advanced transformer architectures, and second, too many network structural details are omitted, forcing readers to consult the appendix for understanding.\n8. In Sec. 3.5, the authors should carefully introduce the training process, as there are many networks and parameters. Are they trained in an end-to-end manner? This raises a question about how the physics inference network can simultaneously learn Pe and output flow fields. These two components might interfere with each other, potentially making the network untrainable. Has the use of stop-gradient operations, as in VQVAE, been considered?\n9. What is the PDE for the real-world data? Is it explicitly known? Real data often comes with noise - has this method considered noise effects, or are there any approaches proposed to address the influence of noise?\n10. In Sec. 5, especially in Fig. 
9(a), the authors need to specify the number of experimental trials conducted and report the confidence levels, as it appears that the two constraints overlap for an extended period of time.\n11. What are the detailed settings of the Fluid 2D data, including $\\nu$, $dx$, $dt$, and boundary conditions?\n12. The reviewer could not find a link to the code and dataset in the paper. Code and data are important criteria for verifying the rationality of results. Will the authors make them open-source?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- The gradient discretisation used here is a second-order central difference approximation. Could you elaborate on whether there were specific architectural reasons for choosing this method over other discretisation schemes? Additionally, it would be helpful to understand if you considered alternative discretisation approaches.\nYou compare two different sets of baselines: one for nowcasting and one for Navier-Stokes simulation. Why is your model capable of doing both, as other neural operator-based models are not? That seems like a significant advantage that has yet to be developed." 
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- The paper is very well written and explains complex problems clearly.\n- While the idea of incorporating PDE-based constraints in the network architecture and loss function is not new, the author presents a set of methods, tricks, and ideas that make the technique work better than previous literature." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper's core idea is to combine a data-centric deep learning approach with physics by incorporating the discretised Navier-Stokes equations into the neural network architecture and constraining the loss function. By explicitly incorporating the governing equations and the associated physical quantities, the authors try to model the system and help with consistency, interpretability, and extrapolation capabilities. The extensive proposed experiments show good performances and generalisation to unseen domains." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Interpretability depends on the other physical quantities' models. While there are theoretical reasons to believe the quantities are interpretable, there is little experimental evidence.\n- The pertinence of benchmarking nowcasting and the advantage of this method over other neural operator-based methods for this task is unclear." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\tAbout implementation of baselines.\n\nIn NowCastNet, ensuring eidetic prediction results is one significant contribution of this paper. However, as shown in Figure 8, its prediction is quite blurry. I am wondering how the authors experimented with this baseline.\n\nBesides, in the supplementary materials, the prediction results of LSM and FNO appear strange periodic shakes. Actually, I think a well-trained deep model will not make such weird predictions. Did the authors carefully tune these two baselines?\n\n2.\tAbout spatial generalization.\n\nWhy PINP can achieve spatial generalization? Can the authors provide some intuitive explanations?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper is overall well-written.\n\nThe idea of incorporating physics loss into fluid prediction is reasonable.\n\nThe authors have provided comprehensive experiments to verify the effectiveness of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new physics-informed fluid predictor, named PINP. PINP firstly estimates the underlying pressure and velocity filed from observed fluid, which is constrained by a discretized physics loss. Then it employs an interpolation formalization of integral for future prediction, where an additional correction network is presented to reduce the error of discretized PDE predictor. Experimentally, PINP performs well in 2D and 3D flows and weather prediction tasks." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThe title is kind of overclaimed. \n\n\nSince this paper is tailored to fluid prediction, I think “physics-informed fluid predictor” is more suitable. Otherwise, it is a little bit overclaimed, as there are extensive prediction tasks that PINP cannot solve, such as rigid body movement (controlled by classical physics) or magnetic fields (governed by electromagnetism).\n\n2.\tA series of technical designs are underexplored or not well supported.\n\n(1)\tPINP adopts the discretized PDE loss for physics constraint, which may bring serious approximation error. The current design is based on the assumption that the differential operator can be approximated by spatial or temporal differences, which cannot be satisfied, especially in low-resolution data. Note that I am not saying that being physics-informed is a bad idea. The canonical physics-informed neural networks employ automatic differentiation for approximation, which is much more precise than the discretization in PINP.\n\n(2)\tI cannot figure out why additionally predicting the pressure field can boost the performance. As shown in Figure 2 (b), the predicted pressure field is only used in the physical constraint loss, which cannot affect the future prediction process. This means that predicting the pressure field is just to fit the physical loss, which brings a new meaningless task. According to my experience, I think this design can only bring extra load to the model instead of benefiting the prediction. Besides, as shown in Figure 9(a), removing physical constraints will not bring a serious decrease. Further, how about keeping the second equation in Eq.(12) but removing the pressure-related one? I believe that the benefit of the physics loss is mainly brought by the incompressible term loss. \n\n(3)\tThe design of the correction network is also weird. 
As formalized in Eq.(10), the inputs and outputs of the correction network are both expected to be close to the ground truth. Under this constraint, why is the correction network necessary? (Minor: Eq.(10) may have a typo, where the comma should be “-”).\n\n(4)\tAbout the temporal loss. I am curious about how likely this loss function is to work. Some statistical results on how many times this loss is non-zero are expected.\n\nGoing further from (2), I suspect that the prediction of the pressure field is useless in the current design, even though it is listed as one of the main contributions w.r.t. other papers. I think that, compared with Helmfluid, the advantage of PINP lies in the physical loss, which can provide a more direct and explicit constraint on the velocity field.\n\nIn summary, I think there are many unsupported designs in the proposed method, which may affect the claim of the main contribution of this paper.\n\n3.\tAbout the efficiency. \n\nI am curious about the training overhead, since the calculation of the loss in Eq.(16) may also cause extra computation costs compared with other baselines." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We integrated PDEs into the network and loss function, enabling the prediction of observable physical quantities and the inference of future latent physical quantities as interpretation." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024physicsinformed,\ntitle={Physics-Informed Neural Predictor},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vAuodZOQEZ},\nnote={under review}\n}" }, "abstract": { "value": "Accurately predicting fluid dynamics and evolution has been a long-standing challenge in physical sciences. 
Conventional deep learning methods often rely on the nonlinear modeling capabilities of neural networks to establish mappings between past and future states, overlooking the fluid dynamics, or only modeling the velocity field, neglecting the coupling of multiple physical quantities. In this paper, we propose a new physics-informed learning approach that enables the prediction of coupled physical states, under a partially observed data environment. Central to our method lies in the discretization of physical equations, which are directly integrated into the model architecture and loss function. This integration enables the model to predict future states of observable data while simultaneously inferring and predicting hidden physical quantities (e.g., velocity and pressure) purely from visual observations. By incorporating physical equations, our model demonstrates temporal extrapolation and spatial generalization capabilities. Experimental results show that our approach achieves the state-of-the-art performance in spatiotemporal prediction across both numerical simulations and real-world extreme-precipitation nowcasting benchmarks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Fluid dynamics", "Spatiotemporal prediction", "Physics-informed learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/74e55b137789276f1dd050ba3203de56c50a59e3.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/ee2ca64357c197584a2a61f2b98352eae9e6061b.zip" }, "title": { "value": "Physics-Informed Neural Predictor" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vC7AlY1ytz
OccProphet: Pushing the Efficiency Frontier of Camera-Only 4D Occupancy Forecasting with an Observer-Forecaster-Refiner Framework
main
Active
camera-only occupancy forecasting;efficiency;effectiveness;autonomous driving
applications to robotics, autonomy, planning
6;6;6;6
3;4;4;5
3;3;3;4
3;3;2;4
3;3;3;3
6
4
3.25
3
3
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What part of the Cam4DOcc model contributes to its high computational cost? How much memory does Cam4DOcc use during training and testing? Approximately how many GPU days are needed for full training?\n\n2. In multi-frame 3D occupancy results, how can environmental dynamics be effectively captured? The proposed Observer-Forecaster-Refiner pipeline seems to learn the dynamics of objects in the latent space without strict physical constraints. If the real world presents scenarios that are rare or unseen in the dataset, could 4D predictions fail? Are there any examples of such failures?\n\n3. 4D prediction is challenging, especially when occlusions are frequent, leading to potential voxel loss. Have the labels in the 4D dataset been completed to account for these occlusions? If they have, can the authors' method handle situations where any frame from the historical RGB images loses an object?\n\n\n4. Are the 6-DoF camera parameters in the Observer used to align historical 3D features with the current frame? In lines 188-191, why does F change to F_{motion}, resulting in an extra matrix dimension of 6×X×Y×Z? Is this converting the 6-DoF pose into a matrix?\n\n5. In lines 206-207, how does C+6 become C? Is this done through a 1×1×1 3D convolution?\n\n6. What is the difference between the E4A module shown in Figure 4 and the UNet structure? 
It looks like a 4D version of UNet. Why do the FLOPs increase significantly while the number of parameters in E4A decreases? Is this due to the upsampling process?\n\n7. In TAF, after applying 3D, 2D, and 1D global average pooling, temporal attention is performed on features of the same scale. Would cross-attention between different scales yield better results?\n\n8. In the Forecaster, is the condition for prediction learned from past voxels?\n\n9. What does the colon symbol in line 313 mean?\n\n10. The comparisons could be more thorough. For example, it would be helpful to follow the output of a 4D BEV method, like [1], with a 2D to 3D detection head for comparison.\n[1] Bevdet4d: Exploit temporal cues in multi-camera 3D object detection" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The article is well-written and easy to follow.\n2. Figure 2 is helpful for understanding the effect of OccProphet.\n3. The ablation study is helpful in demonstrating the benefits of each module in OccProphet.\n4. Extensive experimental results on several benchmarks back up the effectiveness of OccProphet." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces OccProphet, a novel camera-only framework for occupancy forecasting. It features an Observer-Forecaster-Refiner pipeline optimized for efficient training and inference, utilizing 4D aggregation and tripling-attention fusion on reduced-resolution features. Experimental results show that OccProphet outperforms existing methods in both accuracy and efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Some symbols and design details of the model are unclear.\n2. The analysis of failure scenarios is lacking, but this is not a major concern.\n3. 
See Questions." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The current baselines lack sufficient comparison methods and evaluation metrics, which are inadequate to demonstrate the superiority of the proposed approach." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "a). OccProphet addresses the computational limitations of previous methods, a crucial improvement for deploying autonomous vehicles on edge devices.\n\nb). The authors’ writing style is clear and concise, effectively conveying the ideas and concepts of 4D occupancy prediction." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "OccProphet is an efficient occupancy forecasting framework for autonomous driving, reducing computational costs by 58%–78% and achieving a 2.6× speedup over existing methods while maintaining high accuracy. Through its Observer, Forecaster, and Refiner modules, OccProphet predicts future 3D occupancy from 2D images, making it suitable for real-time edge deployment." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "a) Although the authors have distinctively named the three components as Observer, Forecaster, and Refiner to differentiate them from previous methods, the paper should more clearly highlight the distinctions from the traditional encoder-decoder architecture to better emphasize its contribution.\n\nb) The tripling-attention and reduced-resolution feature aggregation may sacrifice some granularity or detail in the forecasts, possibly affecting the precision of the model in dense scenarios.\n\nc) The paper lacks details on how well the model performs over varying forecasting time horizons, especially under extended timeframes, which are critical in autonomous driving scenarios." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1) Could the authors show experiment results on training for longer with both OCFNet (Cam4DOcc) and OCCPROPHET to see whether the performance improvement stems from early convergence.\n\n2) If not, is there any analysis or explanation about the performance improvement with respect to the information loss?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper propose OCCPROPHET to improve the efficiency of occupancy forecasting algorithm while maintaining and even improving the performance. The key idea is to embed a coarse-level 3D voxel features at the very beginning of the network to reduce computational cost and meanwhile utilize the temporal-spatial relationship to reduce information loss during embedding and forecasting. Finally, in order to guarantee prediction quality, the Refiner part enhance the feature quality with temporal-spatial correspondence and increase the granularity of the prediction. OCCPROPHET provides a better balance between efficiency and effectiveness in occupancy forecasting problem. The experiments are extensive and the results demonstrate both the efficiency and effectiveness of OCCPROPHET." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper propose a method called OCCPROPHET for efficient occupancy forecasting with camera-only inputs. The proposed framework is consisted of Observer, Forecaster, and Refiner. OCCPROPHET first embeds the sequence of camera images into 3D voxel features. Then Observer applies 4D feature aggregation to gradually reduce the spatial resolution of the 3D features and a tripling-attention fusion strategy on the lower-resolution 3D features to reduce information loss. The Forecaster component forecast the state of the environment based on the feature output from Observer. Finally, Refiner utilize temporal relationship to enhance the quality of the 3D voxel features to generate the final prediction of future occupancy. The key idea is reducing spatial resolution at the very beginning of the network to reduce computational cost during embedding and forecasting. 
The tripling-attention part takes spatiotemporal interactions into account to reduce the information loss and the Refiner part makes use of temporal correspondence to increase the granularity of the prediction. The proposed method makes a good trade-off between efficiency and effectiveness. The extensive experimental results show that OCCPROPHET largely reduces computational cost while achieving slightly better performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The reviewer is concerned about the effectiveness part of OCCPROPHET. Since OCCPROPHET reduces spatial resolution at the beginning of the network, there should be information loss no matter what strategy OCCPROPHET uses to compensate for it. At the same time, temporal-spatial interaction should also be utilized in OCFNet (Cam4DOcc). The reviewer is a bit confused about why OCCPROPHET performs better than OCFNet (Cam4DOcc). The reviewer is concerned about whether the performance improvement stems from the fact that OCFNet (Cam4DOcc) is underfit and OCCPROPHET converges faster." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "What is the actual frames-per-second of the model? And is it good enough for deployment?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper introduces a new framework, OccProphet, which is designed to be efficient and effective for camera-only 4D occupancy forecasting, a critical capability for autonomous driving.\n2. The framework significantly lowers computational requirements by 58% to 78% compared to the state-of-the-art Cam4DOcc, making it more feasible for deployment on edge agents like autonomous vehicles.\n3. OccProphet achieves a relative increase in forecasting accuracy of 4% to 18% over existing methods, which is a substantial improvement in the field of autonomous driving perception." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a novel framework named OccProphet, designed to predict the 3D occupied status of driving environments based on historical 2D images, with a focus on improving efficiency and reducing computational demands during training and inference stages. This is particularly aimed at enhancing the feasibility of deploying such technology on edge agents like autonomous vehicles.The OccProphet framework consists of three lightweight components: the Observer, Forecaster, and Refiner. The Observer extracts spatio-temporal features from 3D data using an Efficient 4D Aggregation with Tripling-Attention Fusion method. The Forecaster and Refiner work together to conditionally predict and refine future occupancy inferences. The paper claims that OccProphet is both training- and inference-friendly, reducing computational costs by 58% to 78% with a 2.6x speedup compared to the state-of-the-art Cam4DOcc method, while achieving 4% to 18% higher forecasting accuracy. The experimental results are demonstrated using the nuScenes, Lyft-Level5, and nuScenes-Occupancy datasets." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper does not give detailed explanations for the design of the three proposed modules." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper proposes OccProphet, a camera-only framework for efficient and effective occupancy forecasting, in a lightweight Observer-Forecaster-Refiner pipeline, performing better and 2.6 times faster than Cam4DOcc reducing computational costs." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024occprophet,\ntitle={OccProphet: Pushing the Efficiency Frontier of Camera-Only 4D Occupancy Forecasting with an Observer-Forecaster-Refiner Framework},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vC7AlY1ytz},\nnote={under review}\n}" }, "abstract": { "value": "Predicting variations in complex traffic environments is crucial for the safety of autonomous driving. Recent advancements in occupancy forecasting have enabled forecasting future 3D occupied status in driving environments by observing historical 2D images. However, high computational demands make occupancy forecasting less efficient during training and inference stages, hindering its feasibility for deployment on edge agents. In this paper, we propose a novel framework, \\textit{i.e.}, OccProphet, to efficiently and effectively learn occupancy forecasting with significantly lower computational requirements while maintaining forecasting accuracy. OccProphet comprises three lightweight components: Observer, Forecaster, and Refiner. The Observer extracts spatio-temporal features from 3D using the proposed Efficient 4D Aggregation with Tripling-Attention Fusion, while the Forecaster and Refiner conditionally predict and refine future occupancy inferences. 
Experimental results on nuScenes, Lyft-Level5, and nuScenes-Occupancy datasets demonstrate that OccProphet is both training- and inference-friendly. OccProphet reduces 58\\%$\\sim$78\\% of the computational cost with a 2.6$\\times$ speedup compared with the state-of-the-art Cam4DOcc. Moreover, it achieves 4\\%$\\sim$18\\% relatively higher forecasting accuracy. The code will be publicly available." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "camera-only occupancy forecasting", "efficiency", "effectiveness", "autonomous driving" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/fd985a2f4257751ef5999207495b6b1d1147ee80.pdf" }, "presentation": null, "primary_area": { "value": "applications to robotics, autonomy, planning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "OccProphet: Pushing the Efficiency Frontier of Camera-Only 4D Occupancy Forecasting with an Observer-Forecaster-Refiner Framework" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vDecbmWf6w
Zero-Shot Offline Imitation Learning via Optimal Transport
main
Active
Imitation Learning;Deep Reinforcement Learning;Optimal Transport
reinforcement learning
3;6;6;6
3;2;2;3
2;3;3;3
2;4;3;3
1;4;2;4
5.25
2.5
2.75
3
2.75
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. What is the additional computational cost introduced by solving OT problems compared to running just the planners?\n2. How does the method scale when there are more than 1 demonstration? Does the method have a problem with multimodal behavior in the bigger data collection?\n3. Can the authors apply the method to cross-domain / cross-embodiment settings?\n4. What if the expert policy is different from the resulting policy in TD-MPC? I.E can we observe a distributional shift if our expert policy comes from an environment with slightly different dynamics, for example, cross-embodiment (x-magical https://github.com/kevinzakka/x-magical)?\n5. Can the current method be altered to remove the model-based approach and replace it with the offline dataset of video collection? I.e why can't we learn value functions from just video data without rewards and actions and then use it in OT?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The method shows example of non-myopic behaviour compared to the baselines. The proposed approach does not depened on availability of actions in the expert trajectory. The proposed approach works as zero-shot \"agent correction\" policy. 
The authors evaluated their method on a variety of different environments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce a new method for zero-shot imitation learning based on optimal transport. \nThe approach involves combining a modified goal-conditioned TD-MPC2 algorithm with optimal transport. The authors begin by training goal-conditioned value functions V and W, as well as a dynamics model obtained from TD-MPC2. Then they run the OT process using the Sinkhorn algorithm, given V, W, P, and the expert trajectory, to compute the cost matrix and transport matrix." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The proposed method requires both the learned world model and expert demonstrations. \nIn practice, training the world model requires a lot of resources and either access to a simulator or a large collection of datasets.\n\nThough the authors position their paper as \"zero-shot IL\", it is still far from \"fair\" zero-shot since it requires some conditions for the method to work:\n1. Access to the offline dataset \n2. A trained transition model\n3. An expert trajectory." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Instead of conditioning on an intermediate goal, can we condition on a sequence of goals to mitigate myopic behavior?\n- Instead of optimal transport to minimize the distance between distributions, how does it compare to diffusion-based methods?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Identifies an existing problem of goal-conditioned policy learning and proposes a solution based on optimal transport.\n- Demonstrate success of ZILOT on a range of locomotion and manipulation tasks and show better alignment with demonstrations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper identifies an existing problem with goal-conditioned imitation learning: an agent that optimized to achieve individual goals can undermine long-term objectives. Instead of learning a goal-conditioned policy, ZILOT learns a dynamics model with a goal-conditioned value function. It uses optimal transport to minimize the divergence between the rollout policy and expert demonstration.\n\nExperiments show that ZILOT imitates demonstration more closely compared with MPC and goal-conditioned policy." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Lack of explanation and analysis of the myopic behavior in goal-conditioned imitation learning and why ZILOT bypass those limitations.\n- Baselines are relatively simple. How does it compare with some diffusion-based methods, e.g. Diffusion Policy?" 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. How \"in-distribution\" are the inference tasks compared to the trajectories used to train TD-MPC? How does the quality of learned policy change we try to imitate more \"OOD\" expert behaviors?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper studies a very important problem: the ability to quickly learn new skills and policies from few expert demonstrations is a much-desired property of policies. The method is conceptually simple and well-motivated theoretically.\n\nI found the paper to be easy to read, and the background for all relevant concepts was motivated and well-introduced (within Sec 2-4)\n\nFigure 4 clearly demonstrates the main improvement over the sub-goal approach, where the MPC + CIs overshoot the puck when going to the first subgoal.\n\nThe experiments are relatively thorough (both in the main paper, and the Appendix), indicating improvement over other model-based zero-shot imitation methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an approach for zero-shot imitation -- use a single expert trajectory to extract a policy to mimic this task. 
The general approach is to solve an optimal transport problem (equipped with a goal-reaching cost function) with a model that can be used to estimate the policy's stationary distribution. Across a number of tasks (fetch, halfcheetah, pointmaze), the proposed approach can learn useful skills." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I had no major objections to the paper; \n\nHowever, it would have been nice to see comparisons to other approaches to zero-shot imitation (e.g. along the lines of FB representations). Another axis that could improve the paper would be to see a wider range of downstream tasks (requiring the synthesis of more diverse motions), perhaps those from (Frans et al)\n\nMinor nit: There is a lot of content in the Appendix, but it is not clearly linked within the main paper. I would encourage the authors to link and discuss this content within the main paper, so that it is not missed by a reader." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. The paper is motivated by the fact that methods using goal-conditioned policies for zero-shot IL fail (Proposition 1 is used to support this argument). However, ZILOT plans using all the goal states in the demonstration. What if the goal-conditioned policy is given the expert demonstration as context? 
This way the goal-conditioned policy might discard the bad states to achieve a goal (shown in Proposition 1). Moreover, this would lead to learning a Markovian policy.\n2. The goodness of the proposed method depends on the planning horizon, and the paper discusses that ZILOT can be myopic too without a long planning horizon. This limits the applicability of the method in many real-world tasks when planning for a long horizon. Is there a way to mitigate this by estimating the visitations using some form of bootstrapping? \n3. Since the method uses optimal transport and cites OTR [2] in the Related work, I feel it should be added as another baseline that uses the expert states and the sub-optimal dataset to recover a reward function and train a policy over it. \n4. Since sub-optimal data is used to learn the value functions, can optimal V, W be recovered? If not, it is not highlighted how this gap will affect the final performance. \n5. The abstract talks about zero-shot IL with existing practical methods viewing demonstrations as a sequence of goals. I feel this is not true as FB-IL [1] does zero-shot IL from a single demonstration. Moreover, I feel FB-IL should be a baseline. Although ZILOT deals with demonstrations with a subset of states, I feel using Eq 8 in [1] should recover the policy with partial demonstrations.\n\n### References\n[1] Pirotta et al., Fast Imitation via Behavior Foundation Models. ICLR'24\\\n[2] Luo et al., Optimal transport for offline imitation learning, ICLR'23" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The idea of recovering expert behaviors from partial trajectories with subgoals and without action labels is interesting and challenging. 
\n- The drawback of prior methods that used goal-conditioned policies for this task is discussed well to motivate the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a method for zero-shot imitation learning with a single demonstration (without action labels or partial trajectories consisting of a sequence of goals). A method called ZILOT that learns to plan and act according to the goals is proposed. Firstly, ZILOT learns a dynamics model (similar to TD-MPC2) using a dataset of transitions which can be sub-optimal. For planning using the demonstration, a non-Markovian method is employed to match the occupancy of the partial demonstration and the state-visitation distribution of the policy. The discrepancy between occupancies is computed using Optimal Transport which requires value functions: V to get reachability of a goal state and W to get the steps taken between goals. The non-Markovian planner uses this discrepancy to select the best action. Experiments conducted on multiple benchmarks show that ZILOT is better at W1 distance between expert and policy occupancies and better at generating trajectories that follow the order of goals in the demonstration." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Many design choices of the proposed algorithm are not clear. Why does ZILOT use a non-Markovian policy as I believe the expert policies are Markovian? How is the function $\\phi$ that maps states to goals defined / learned?\n- The experiments are not convincing as the baselines like FB-IL [1] and OTR [2] are missing. It is not clear why the task success rate or the average returns are not used for comparing methods. \n- The limitations of the methods are not discussed. The only limitation presented is in Sec 7 that describes the dependence on a learned dynamics model." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "A non-myopic method for zero-shot imitation from arbitrary offline data." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024zeroshot,\ntitle={Zero-Shot Offline Imitation Learning via Optimal Transport},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vDecbmWf6w},\nnote={under review}\n}" }, "abstract": { "value": "Zero-shot imitation learning algorithms hold the promise of reproducing unseen behavior from as little as a single demonstration at test time.\nExisting practical approaches view the expert demonstration as a sequence of goals, enabling imitation with a high-level goal selector, and a low-level goal-conditioned policy. \nHowever, this framework can suffer from myopic behavior: the agent's immediate actions towards achieving individual goals may undermine long-term objectives.\nWe introduce a novel method that mitigates this issue by directly optimizing the occupancy matching objective that is intrinsic to imitation learning. \nWe propose to lift a goal-conditioned value function to a distance between occupancies, which are in turn approximated via a learned world model.\nThe resulting method can learn from offline, suboptimal data, and is capable of non-myopic, zero-shot imitation, as we demonstrate in complex, continuous benchmarks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Imitation Learning", "Deep Reinforcement Learning", "Optimal Transport" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a8415102d2b25392bff42306036d9592048fb50e.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Zero-Shot Offline Imitation Learning via Optimal Transport" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vDp6StrKIq
Beyond Canonicalization: How Tensorial Messages Improve Equivariant Message Passing
main
Active
equivariance;message passing;tensor representation;local frames;geometric deep learning
learning on graphs and other geometries & topologies
5;5;6
3;5;4
3;3;4
2;2;3
3;3;3
5.333333
4
3.333333
2.333333
3
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please address the problems in the weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-written and technically sound.\n2. The paper provides comprehensive theoretical analysis.\n3. The experiment results are promising." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on the equivariant message passing and proposes a formalism which together with local canonicalization enables consistent communication of geometric features between different nodes. This method solves the problem of communicating geometric information between local patches with different coordinate frames and can be combined with other point cloud methods, achieving state-of-the-art results in the experiments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proposed method relies on point normals to establish local reference frames. However, estimating accurate normals is difficult for *real-world* point clouds due to severe noise. So I expect to see results on real-world tasks rather than only synthetic datasets.\n2. 
An important application of invariance and equivariance is point cloud registration. I expect to see the effectiveness of the proposed method on real-world point cloud registration tasks, such as 3DMatch and 3DLoMatch.\n3. In Tab.3, the tensor messages surprisingly outperform the model with scalar messages under random local frames. The random local frames affect the performance of the model in the form of noise, but this noise helps tensor messages perform better. I am very curious about the reason.\n4. In Tab.2, I notice refining frames brings marginal improvements, which may indicate that this step fails to obtain better normals. For comparison, I expect to see the results with (1) PCA-based normals and (2) ground-truth normals." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How does the proposed method handle cases where the predicted vectors $v_{1},v_{2}$ are zero or close to zero? Additionally, how sensitive is the frame selection mechanism when different levels of noise are added to the input point clouds? Does this sensitivity change in cases of more symmetric objects?\n- Adding a discussion of gauge equivariant neural networks would benefit the completeness of the related work section of this paper." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- The authors describe the proposed framework in detail, providing clear intuition about the specific problems each part of the framework addresses.\n- The simplicity of the proposed framework allows it to be easily applied to widely used non-equivariant message-passing architectures with minimal modifications.\n- The experimental results demonstrate how the proposed local canonicalization benefits the performance of the baseline model when it is used in tasks with various types of tensorial outputs (e.g. normal regression or point cloud segmentation)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes an extension of unconstrained message-passing architectures that makes them equivariant by canonicalizing the messages received by each node to its local frame. It enables local canonicalization of arbitrary types of tensorial messages, which extends previous works that restrict the allowed tensor type of messages (e.g. only allowing for scalars or vectors). In addition to the local canonicalization methodology, the authors introduce a mechanism for learning the local frame of each node, which is then refined in the later layers of the network. In the experimental section, the authors evaluate their proposed method on various rotational equivariant point-cloud tasks and provide ablation studies that showcase how the individual parts of the proposed framework affect its performance and generalization." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The proposed canonicalization procedure assumes that the inferred output vectors $v_{1},v_{2}$ are non-zero. 
While the authors describe how they resolve ambiguities when $v_{1},v_{2}$ are parallel, they do not explain how they handle the case where the vectors are close to zero, which makes frame selection highly sensitive to small perturbations due to noise.\n- While in Section 2 the authors mention previous works on local-canonicalization during message passing, they do not discuss work on gauge equivariant neural networks, such as the work: \n [1] Pim De Haan, Maurice Weiler, Taco Cohen, Max Welling, \"Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric graphs\" ICLR (2021) \nwhich also transforms geometric features from one local frame to another during the message passing, performed in their case during the mesh convolution." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The main concern is that the experiments are not convincing for the main claim, some more challenging cases (e.g. 
multi-body objects) should be included to show the effectiveness of the local geometric features" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper has a smooth and easy-to-follow presentation of equivariance.\n- Maintaining local geometric features may be useful in more general settings (but not the examples shown in the paper, see weakness below), which in the long run may benefit the equivariance community." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The main claim of the paper is that maintaining local equivariant tensorial geometric features in the graph network is better than first canonicalizing the features in a local frame and doing invariant message passing. It also proposes a way to pass tensorial features in a graph network. The proposed message passing is evaluated on toy shape datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The main concern is that all the experiments are conducted on rigid objects. However, the reviewer believes that the main advantage of preserving local geometric features throughout the network is to deal with non-rigid, multi-body, or deformable objects. Indeed there is no strict equivariance under deformation, but that is where the local features should make a difference. Just as shown in Fig.1 in the paper, the geometric features should help recognize the pattern of a sub-part when it deforms or moves. However, the main experiment is conducted on the rigid objects of ModelNet, where we know the performance is quite saturated, and the reviewer believes that a robust global PCA plus any modern large point network will outperform an equivariant network in such an easy setting.\n- Again, the comparison does not capture the full set of equivariant network baselines.
We know that there are many more equivariant point networks evaluated on the same benchmark, but they are not included in the table.\n- A clearer discussion of the differences between the proposed message passing and previous ones like TFN or VNN should be highlighted in the paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024beyond,\ntitle={Beyond Canonicalization: How Tensorial Messages Improve Equivariant Message Passing},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vDp6StrKIq},\nnote={under review}\n}" }, "abstract": { "value": "In numerous applications of geometric deep learning, the studied systems exhibit spatial symmetries and it is desirable to enforce these. For the symmetry of global rotations and reflections, this means that the model should be equivariant with respect to the transformations that form the group of $\\mathrm O(d)$.\nWhile many approaches for equivariant message passing require specialized architectures, including non-standard normalization layers or non-linearities, we here present a framework based on local reference frames (\"local canonicalization\") which can be integrated with any architecture without restrictions.\nWe enhance equivariant message passing based on local canonicalization by introducing tensorial messages to communicate geometric information consistently between different local coordinate frames.\nOur framework applies to message passing on geometric data in Euclidean spaces of arbitrary dimension.\nWe explicitly show how our approach can be adapted to make a popular existing point cloud architecture equivariant. We demonstrate the superiority of tensorial messages and achieve state-of-the-art results on normal vector regression and competitive results on other standard 3D point cloud tasks."
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "equivariance", "message passing", "tensor representation", "local frames", "geometric deep learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/fa9215f79cfdbc8b6bbac1e7cd45a445c6108db0.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Beyond Canonicalization: How Tensorial Messages Improve Equivariant Message Passing" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vErsELb7Qg
LoRA Recycle: Towards Fine-Tuning-Free Visual Foundation Model via Double-Efficient Data-Free Meta-Learning
main
Active
data-free meta-learning;few-shot classification;synthetic data
transfer learning, meta learning, and lifelong learning
3;5;5;5
2;4;4;4
2;2;3;2
2;2;2;3
3;3;3;3
4.5
3.5
2.25
2.25
3
1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the weakness section; my major concerns are about the main contribution. The proposed techniques can also be widely found in other computer vision or LoRA architecture-design papers. The authors should clearly claim why these proposed ideas contribute to this community. \n\nThe other major concern is about the relationship between this setting and cross-domain generalization. I wonder how domain generalization methods perform on this task. It seems these methods could also focus on meta-learning techniques.\n\nBesides, the usage of synthetic datasets could also show a clear upper bound. Thus the authors should discuss this and the relationship with using sufficient private datasets. Or, in extreme cases, what would happen if only a few few-shot samples were available? Is there a trade-off in these application scenarios?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Outstanding work on accelerating the meta-training process.\n2. Efficient data synthesis with token pruning and meta-training with sparse tokens do a great job helping the generation and meta-learning processes."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes to reuse pre-tuned LoRAs without access to the private training data. The proposed method aims to improve the few-shot adaptability of VFMs without further fine-tuning and proposes a new data-free meta-learning framework. The experimental results on 8 datasets show the proposed method exceeds the existing literature." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Given the existing LoRA market, this paper does not contribute enough to pre-tuned LoRA reuse. Meta-learning is widely used in generalization problems, including zero-shot or few-shot learning tasks. The major contribution is thus not that interesting to me.\n2. Sparse tokens may break the potential correlation between foreground objects and background; the paper cannot simply endorse this method without ruling out this potential adverse effect.\n3. The token pruning in the data-efficient mechanism can also be found in other lightweight designs. Besides, I hope the authors highlight why this method is distinctive, especially when we only have generated data but not customized private data. In other words, what is the key relationship between them?\n4. Several passages are unclear or contain typos, e.g., Line 074, LoRs should be LoRAs." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Patch Masking vs. Token Reduction: Why is masking patches chosen over reducing the number of tokens in synthetic data generation? An explanation of the design choice here could clarify its benefits and relevance to the overall model.\n\n2. Typo: Line 090L has a typo: \"re-quiring\" should be corrected to \"requiring.\"" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This work introduces an interesting task to explore the potential of reusing diverse pre-tuned LoRAs, expanding the utility of these modules beyond traditional task-specific applications. \n2. The paper is well written and easy to follow.\n3. The proposed method performs well on several datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the challenge of reusing existing LoRAs for adapting a new VFMs to few-shot tasks without the need of original data or tuning. To achieve this, the authors propose data-free meta-learning framework. By distilling a meta-LoRA using synthetic data from LoRA Inversion, the framework enables VFMs to perform few-shot tasks in a single pass, similar to in-context learning in LLMs. Additionally, a double-efficient mechanism accelerates meta-training by focusing on foreground patches, enhancing both speed and performance. Extensive evaluations across several datasets demonstrate the framework’s effectiveness in both in-domain and cross-domain scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
Related Work: The paper lacks a thorough discussion of data-free knowledge distillation. \n\n2. Limited Novelty: While the paper attempts to tackle a novel and interesting problem, the techniques employed to address it appear somewhat basic and lack innovation. The authors suggest inverting LoRA to obtain synthetic data, a standard approach commonly used in the data-free KD literature. Additionally, the model training relies on basic meta-learning methods combined with ProtoNet, a technique widely applied in few-shot learning research. There do not appear to be any unique techniques specifically proposed for LoRA recycling. Furthermore, it seems plausible that this approach could be generalized to recycle various models, not just LoRA, without significant modification to the methodology. This raises questions about the uniqueness and specificity of the proposed solution. The authors could refer to the following paper for a similar method: https://arxiv.org/pdf/2110.04545.\n\n3. Limited Evaluation: The evaluation uses relatively simple, toy datasets, which may not fully showcase the robustness or generalizability of the proposed approach. To strengthen the evaluation, I recommend including more challenging datasets, such as WILDS or DomainNet, which could better test the model's performance in diverse, real-world scenarios.\n\n4. Ablation Study: The necessity of meta-learning is unclear. An ablation study focusing on the role of meta-learning would provide valuable insights into its contribution and justify its inclusion in the model." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weakness" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The idea of distilling knowledge from various pre-finetuned LoRAs to achieve generalized understanding without requiring access to the original datasets is intriguing.\n2. The authors provide clear explanations of their methods, making the paper easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the challenge of extra costs and limited resources when adapting large-scale visual models to different domains, focusing on classification. The proposed method employs meta-learning to develop a meta-LoRA capable of performing classification in a single forward pass. The authors validate their approach through experiments on several datasets in a few-shot setting." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The generalizability of the proposed method is questionable, as the experiments were conducted on only eight small datasets. While out-of-domain experiments were performed, the results on the ISIC and CHESTX-RAY datasets were unsatisfactory, possibly due to limited category diversity.\n2. Although the motivation for the proposed method is compelling, the authors did not utilize a wide range of pre-finetuned LoRAs from the community. Instead, they constructed datasets from existing ones, which is not entirely convincing.\n3. More comparative methods should be included, such as CooP, CoCOOP, and PromptSRC, to provide a more comprehensive evaluation." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please address all the weakness mentioned above" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.\tPaper is written in easy to understand manner. \n\n2.\tThe 5-way 1-shot accuracy improvement is impressive, thus proving the proposed methods utility. \n\n3.\tVisualization provided makes understanding synthetic dataset easy. Figure 2 and Figure 3 is really well made, makes understanding paper easy. \n\n4.\tMasking images as a means of computation efficiency is an interesting idea. As well as using the self-attention weights for pruning tokens is interesting too." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Vision foundation models often require fine-tuning with large data to perform well on a few shot tasks. Develop a real-time few-shot system with minimal data in real-time. Existing LoRA techniques require fine-tuning, which makes them unsuitable for real-time response, and a large training dataset causes instability at a small scale.\nThe work proposes: \"data free\" recycling existing LoRA modules to achieve impressive few-shot performance." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\t**Mentioning terms without definition**, [LINE 023] “meta-LoRA” [Line 024] “LoRA Inversion”. Maybe make them italics to show emphasis as a standard procedure. \n\n2. When comparing with existing methods, **missing work** includes \na.\t*fine-tuning “Visual prompt tuning”* (Jia, Menglin, et al. \"Visual prompt tuning.\" European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022). \nb.\t*LoRA “The Balanced-Pairwise-Affinities Feature Transform”* (Shalam, Daniel, and Simon Korman. \"The Balanced-Pairwise-Affinities Feature Transform.\" Forty-first International Conference on Machine Learning (2024).) \nc. *Efficient techniques like Test-time prompt tuning* (Shu, Manli, et al. \"Test-time prompt tuning for zero-shot generalization in vision-language models.\" Advances in Neural Information Processing Systems 35 (2022): 14274-14289) \n\n\n3. **Results look unconvincing**. Take the baselines “LoRAHub”, “MOLE”, and “LoRAs Avg + Linear”: their “5-way 5-shot” performance is similar to “LoRA Recycle” (inferior by 1%). These baselines are far more computationally efficient (no synthetic dataset generation and no distillation), yet give comparable performance. While LoRA Recycle performs well in the “5-way 1-shot” setting, the method doesn’t seem to highlight any special technique/method that helps in this particular result. It appears to be an unintentional benefit of the proposed method. This is more prominent in cross-domain results (Table 3). \n\n4. **Key Motivations are missing**: \n(a)\tWhy are the authors using synthetic data (“Data-free” & “avoids additional data collection”)? What’s the motivation behind it?
What happens if the model uses any standard dataset like “MiniImageNet” on which these LoRA(s) are already pre-trained (in-domain) \n(b) **Line [045] “leads to significant time overheads and increased memory usage.”** Generating a synthetic dataset has a significant computation / time overhead as well. How is using a synthetic dataset a better alternative than using a large-scale dataset like Laion-2b as used in CAML? If I were to assume synthetic images are noisy, then making them sparse (removing tokens) would reduce the noise and improve performance, as observed in the ablation. \n(c) What's the motivation behind using LoRAs? Other methods like prompt tuning, test-time augmentation, etc. are not considered. The technique doesn’t compare against these methodologies, and determining the utility of LoRAs in isolation is difficult and not well-motivated. \n(d) [Line 084] “parameter-lightweight” and “computation-efficient” [Line 085]? This approach is not lightweight, as it needs to account for the “trainable pixels” that need to be trained during LoRA inversion for generating a synthetic dataset, thereby giving it a data-free status. \n(e)\t[Line 086] “architecture agnostic, enabling to recycle LoRAs with heterogeneous architectures like different ranks, as a distinct advantage over existing methods.” Is it? The synthetic dataset is generated based on a gradient from VFMs. The proposed solution is based on the chosen VFM. If this is still considered architecture agnostic, most existing fine-tuning techniques like adapters and prompts are architecture agnostic.\n\n5. **Key solutions are not solving the motivation**: The solution is to propose a real-time fine-tuning module for few-shot learning. \n(a) *Generating synthetic* data solves the data-free problem?
Ablation needs to show what happens if the standard dataset like Laion-2b is used to motivate the use of synthetic dataset (and answer the data-free problem) \n(b) *Training in retrieval-based technique* (Line[228] Synthetic few-shot task construction): Are authors claiming retrieval-based techniques help in Line[036] \"few-shot tasks without the necessity for fine-tuning,\"" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Is it feasible to reuse diverse pre-tuned LoRAs without accessing their private training data, to enhance the few-shot adaptability of Vision Foundation Models without requiring further fine-tuning?" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024lora,\ntitle={Lo{RA} Recycle: Towards Fine-Tuning-Free Visual Foundation Model via Double-Efficient Data-Free Meta-Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vErsELb7Qg},\nnote={under review}\n}" }, "abstract": { "value": "Large Language Models (LLMs) such as ChatGPT can efficiently adapt to few-shot tasks without fine-tuning, making them ideal for data-limited applications requiring real-time responses. However, this adaptability has not yet been replicated in current Visual Foundation Models (VFMs), which require explicit fine-tuning with sufficient tuning data. Low-Rank Adaptation (LoRA), an effective fine-tuning approach, adapts VFMs to specific tasks by updating extra lightweight modules. Thanks to its modularity, users can upload locally tuned LoRAs to public repositories without exposing private training data. In this paper, we explore the potential of reusing diverse pre-tuned LoRAs without accessing their private training data, to improve the few-shot adaptability of VFMs without requiring further fine-tuning. 
To achieve this, we propose a data-free meta-learning framework named LoRA Recycle, which distills a meta-LoRA from diverse pre-tuned LoRAs using synthetic data generated via LoRA Inversion. The VFM, once equipped with the meta-LoRA, is empowered to solve new few-shot tasks in a single forward pass without further fine-tuning, akin to the in-context learning of LLMs. To further enhance efficiency, we propose a double-efficient mechanism that uses only the foreground patches and prunes background patches in the synthetic data, significantly accelerating the meta-training process while maintaining or even improving performance. Comprehensive experiments across eight datasets within both in- and cross-domain scenarios verify the superiority of our framework." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "data-free meta-learning", "few-shot classification", "synthetic data" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/7208499b9a76a3c6f9615b9522cf7774fc5813ff.pdf" }, "presentation": null, "primary_area": { "value": "transfer learning, meta learning, and lifelong learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/6ed4115bc1586c3662a841897bed5c9aa9a69224.zip" }, "title": { "value": "LoRA Recycle: Towards Fine-Tuning-Free Visual Foundation Model via Double-Efficient Data-Free Meta-Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vEtDApqkNR
MambaTS: Improved Selective State Space Models for Long-term Time Series Forecasting
main
Active
Time Series Forcasting; State Space Model
applications to computer vision, audio, language, and other modalities
3;5;5;6;8
4;4;2;3;3
1;2;2;3;3
2;2;2;3;3
3;1;2;2;3
5.4
3.2
2.2
2.4
2.2
-0.394771
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I don't see why a random permutation is equivalent to a random walk. Line 324 says \"K − 1 transition tuples ${(v_1, v_2),(v_2, v_3), · · ·(v_{K−1}, v_K)}$ are derived\". I wonder what prevents the authors from deriving K(K-1)/2 tuples, so that each $(v_i, v_j), i<j$ is included?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper proposes a strategy to apply Mamba to multivariate time series forecasting and achieves empirical results comparable to SOTA." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces MambaTS, a new time series forecasting model based on selective state space models. In order to tackle multivariate forecasting, the time series patches of each variable are unrolled in a certain order to form a single sequence. One key innovation of the paper is a method for estimating the causal relationships between variables during training via random walks without return." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proof in Proposition 2 does not make sense to me.
I am not sure the whole concept of a random walk on a causal graph with a certain cost is well defined in the paper.\n2. The proposed method claims to leverage the causal dependency between the variables and thus is more suitable in the multivariate setting. However, it does not seem to have a large advantage over channel-independent PatchTST, which is a univariate forecasting method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The paper offers a well-reasoned and innovative approach to time series forecasting, with theoretically sound propositions and a practical methodology that balances computational efficiency with modeling accuracy. While the heuristic reliance on VAST and scalability issues in dense graphs present limitations, the model’s strengths in efficiency, adaptability, and architectural design make it a valuable contribution. MambaTS is especially promising for high-dimensional and complex time series data, though further work is recommended to address heuristic dependency and enhance robustness in varied causal structures. Overall, it's a solid and innovative work on time-series forecasting, effectively incorporating causality in a computationally efficient and scalable manner."
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- MambaTS reduces computational complexity from quadratic O(K^2) to linear O(K) by leveraging a topologically ordered linear scan, making it suitable for high-dimensional time series data.\n- VAST enhances adaptability by inferring causal relationships in the absence of explicit causal graphs, using random walks to approximate dependencies and mitigate the need for exhaustive pairwise calculations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces MambaTS, an architecture for long-term time series forecasting that models global dependencies efficiently with a linear scan, avoiding the computational challenges of self-attention. The Variable-Aware Scan along Time (VAST) mechanism dynamically infers causal relationships among variables using random walks and determines an optimal scanning order through heuristic path decoding. This design achieves scalability and adaptability, particularly for complex, high-dimensional datasets with unknown causal structures." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Reliance on heuristic optimization for scanning order yields sub-optimality:\n- The variable-aware scan along time (VAST) employs the asymmetric traveling salesman problem (ATSP) to determine the optimal scanning order, relying on heuristics like simulated annealing to address its NP-hard nature. 
Although heuristics provide feasible solutions, this dependency introduces inconsistency, as different approximations may affect the accuracy of variable ordering (in the case of complex, dense inter-variable connections).\n- Extra experiments on alternative heuristic approaches such as genetic algorithms (which are powerful in navigating NP-hard problems) could reveal a more stable and efficient approach. In the same vein, additional experiments could measure how different heuristic methods affect the resulting scanning order and, subsequently, forecasting accuracy. This can help users determine if any heuristic consistently produces a favorable scanning order.\n\nConvergence guarantees or confidence intervals are not covered in causal estimation, which limits usability:\n- Proposition 2 lacks formal guarantees for convergence speed, raising questions about the robustness of causality inference in finite settings. Without clear bounds on the number of walks required, the approach may yield only approximate estimates, especially when practical constraints limit the number of walks. This limitation affects the consistency and reproducibility of causal estimation results, as reliance on empirical averaging may not ensure reliable causal inference across varied dataset structures. \n- It might be helpful to introduce a stopping rule based on convergence metrics (e.g., average change in transition costs), or to provide confidence intervals on causality estimates to give users insight into the stability of causal inferences under finite computational budgets, though both suggestions seem to be beyond the scope of this study. I hope the authors consider usability in future work." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How is the patch length chosen, and does it vary across datasets?\n2. Could the authors clarify the random walk process and the number of epochs used in variable scanning?\n3. Why were benchmarks ETTh1 and ETTm1 not included?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. LTSF presents a compelling and complex challenge.\n2. The experiments are thorough but still lack some essential details." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes MambaTS, a selective state-space model for long-term time series forecasting (LTSF) that addresses the computational limitations of Transformers by leveraging causal relationships across variables and time with a single linear scan." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Lack of Experimental Details:** Important implementation details are missing, such as patch length, the value of beta in Equation 7, and whether the random walk on variables is conducted K-1 times per epoch (meaning K-1 times the training cost of one epoch).\n2. 
**Efficiency Concerns:** Theoretical complexity analysis in Table 5 lacks practical runtime comparisons. Given that MambaTS requires K-1 iterations to estimate causal relationships, its efficiency is questionable.\n3. **Incomplete Ablation Studies:** The paper introduces the TMB (with dropout replacing the original convolution), but no ablation study compares TMB and the original Mamba block, leaving its impact on performance unclear.\n4. **Limited Explanation in Variable-Aware Scanning:** Section 5.2 does not clearly explain whether K-1 transitions are sufficient to estimate all variable orders, or if consistency (e.g., v1 always preceding vk) is assumed.\n5. **Limited Benchmarking:** Two commonly used datasets (ETTh1, ETTm1) are missing, which reduces the generalizability of the results.\n6. **Code Availability:** No code is provided, limiting reproducibility.\n7. **Unpersuasive SOTA Claims:** Results in Table 2 are questionable. For example, our reimplementation of PatchTST (using official configurations) achieved better results than the reported MambaTS performance on ETTm2 (input length 720). Specifically:\n - ETTm2_720_96.log: 0.1632, 0.2555\n - ETTm2_720_192.log: 0.2167, 0.2942\n - ETTm2_720_336.log: 0.2679, 0.3282\n - ETTm2_720_720.log: 0.3521, 0.3798\n\n These results suggest MambaTS may not definitively outperform all baselines, especially as no code is available for direct comparison.\n8. **Notation Issue:** The meaning of \\( I \\) in Equation 7 is unclear.\n9. **Inference Process Detail:** Section 5.2 lacks details on the inference process for Variable-Aware Scan Along Time." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weakness." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The introduction of VAST and the use of causal graphs for modeling dependencies offers a unique solution to efficiently process long-term dependencies in time series data with linear complexity.\n2. The model is tested across various public datasets, demonstrating superior performance compared to existing state-of-the-art models. This not only validates the efficacy of MambaTS but also showcases its versatility in handling different types of time series data.\n3. MambaTS significantly reduces the computational cost traditionally associated with long-range forecasting models like Transformers by avoiding the quadratic complexity of the self-attention mechanism." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces MambaTS, an improved selective state space model for long-term time series forecasting. The model leverages a novel method for variable-aware scanning along time (VAST) to model global dependencies in a time series with variable missing rates and different intervals. 
By utilizing a combination of causal graphs and shortest path solutions, MambaTS addresses the limitations of previous Transformer-based models, which often struggle with high computational costs and inefficient handling of long-range dependencies." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The effectiveness of the model heavily depends on the accuracy of the causal graphs. Incorrect or incomplete causal relationships can lead to suboptimal forecasting results, which the paper does not extensively address in terms of robustness against poor graph structures.\n2. While the model shows high efficiency and effectiveness, the paper lacks a thorough discussion on scalability, especially in scenarios with exceedingly large datasets or highly complex variable relationships.\n3. There is a need for a comparison of the model’s performance with other SOTA methods, such as OneFitsAll, Time-LLM, etc." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. In Proposition 1, the assumption is that the causal graph exists. What if it doesn't exist? And is there any support that the random walk without return is a promising approach to estimate causal links? \n2. What is the definition of cost C and how can we get/set it empirically (e.g., the cost from node i to node j)?\n3. 
Proposition 2 indicates that, theoretically, the causal relationships can be estimated with finitely many random walks without return. I would like to ask how many walks are required empirically, and also the time spent. \n4. In Eq (6), how can we get p^{(0)}? \n5. In my view, another major difference between MambaTS and iTransformer is that MambaTS models the dependencies on both time and variables, while iTransformer focuses mainly on variables. I would like to know how this contributes to eventual performance, because UniTST [1] also models the dependencies on both the time and variable dimensions, but with a Transformer architecture. How does MambaTS compare with UniTST? \n\nReference:\n\n[1] UniTST: Effectively Modeling Inter-Series and Intra-Series Dependencies for Multivariate Time Series Forecasting." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. It is interesting to see another new work on Mamba for time series forecasting. In my view, some properties of Mamba are well suited to time series, and it's an interesting direction to explore further. \n2. The authors propose several designs to tailor Mamba for time series applications, which have their merits." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents MambaTS, an LTSF model addressing Transformers' self-attention complexity and bias by using causal relationships for global dependency modeling. The authors design variable-aware scan along time to obtain variable causal relationships and also the Temporal Mamba Block to avoid causal convolution. The experimental results show that MambaTS outperforms several state-of-the-art models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The clarity of the paper needs to be improved. 
Some parts I cannot fully understand, such as the cost of a random walk without return. Also, what is the cost from node i to node j, and how can we get it in the first iteration?\n2. Some claims have no support/evidence. For example, the authors mention that the random walk without return is a promising approach to estimate causal links. I would like to know the reason, e.g., any citations/proofs. \n3. The experiments do not seem comprehensive. The authors only compare MambaTS with 7 baselines. There are a few more after iTransformer which are worth comparing against, e.g., ModernTCN [1], UniTST [2], TSLANet [3]. \n\n\nReferences:\n\n[1] ModernTCN: A Modern Pure Convolution Structure for General Time Series Analysis. \n\n[2] UniTST: Effectively Modeling Inter-Series and Intra-Series Dependencies for Multivariate Time Series Forecasting. \n\n[3] TSLANet: Rethinking Transformers for Time Series Representation Learning" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper questions the necessity of self-attention in long-term sequence forecasting and introduces MambaTS, which models global dependencies across time and variables by leveraging causal relationships through a single linear scan." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024mambats,\ntitle={Mamba{TS}: Improved Selective State Space Models for Long-term Time Series Forecasting},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vEtDApqkNR},\nnote={under review}\n}" }, "abstract": { "value": "In recent years, Transformers have become the de-facto architecture for long-term sequence forecasting (LTSF), yet they face challenges associated with the self-attention mechanism, including quadratic complexity and permutation invariant bias. 
This raises an important question: \\emph{do we truly need the self-attention mechanism to establish long-range dependencies in LTSF?} Recognizing the significance of causal relationships in multivariate LTSF, we propose MambaTS, which leverages causal relationships to model global dependencies across time and variables through a single linear scan. However, causal graphs are often unknown. To address this, we introduce variable-aware scan along time (VAST), which dynamically discovers variable relationships during training and decodes the optimal variable scan order by solving the shortest path visiting all nodes problem during inference. MambaTS employs the latest Mamba model as its backbone. We suggest that the causal convolution in Mamba is unnecessary due to the presence of independent variables, leading to the development of the Temporal Mamba Block (TMB). To mitigate model overfitting, we further incorporate a dropout mechanism for selective parameters in TMB. Extensive experiments conducted on eight public datasets demonstrate that MambaTS achieves new state-of-the-art performance." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Time Series Forcasting; State Space Model" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/5c718ced23f3e9acc279b46ef0b41adf30d7a0d4.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/21a7b841879f074f420f870e3299c8d98297a5a5.zip" }, "title": { "value": "MambaTS: Improved Selective State Space Models for Long-term Time Series Forecasting" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vF4RhEPGtb
Typography Leads Semantic Diversifying: Amplifying Adversarial Transferability across Multimodal Large Language Models
main
Active
Adversarial Transferability; Multimodal Large Language Models; Data Augmentation
applications to computer vision, audio, language, and other modalities
3;3;5;6
4;4;3;4
2;2;2;3
2;2;2;3
2;1;3;2
4.25
3.75
2.25
2.25
2
-0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please check the weaknesses" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper provides a comprehensive evaluation on various surrogate model architectures, such as BLIP2, InstructBLIP, LLaVA, and MiniGPT4. \n\n2. It provides different attack baselines, which makes the evaluation stronger. \n\n3. The paper is easy to understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates the threat of cross-MLLM adversarial transferability. The paper proposes a boosting method, TATM, leveraging the strength of information diversity involved in the adversarial generation process and editing across vision-language modality information." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Is the proposed attack method resistant to defense mechanisms? Is there any analysis or evaluation against defense baselines?\n\n2. The selected tasks are generation-based tasks. Will the attack also work on classification tasks?" 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "Yes, Unprofessional behaviors (e.g., unprofessional exchange between authors and reviewers)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Although the authors have validated the proposed method on multiple models, the experiments are limited to fixed-size models and scenarios. Expanding the diversity of experiments is recommended, such as testing larger models like Qwen2-VL-8B, CogVLM-17B, and Yi-VL-34B, to further assess the method's effectiveness.\n\n2. What is the impact of the perturbation budget on the transferability of adversarial examples in applications like \"Harmful Word Insertion\" when evaluated on larger models? Generally, larger MLLMs exhibit more robust visual understanding.\n\n3. \"On Evaluating Adversarial Robustness of Large Vision-Language Models\" is likely the first work investigating the adversarial robustness of MLLMs. Could this method be used as a baseline?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. the authors propose an innovative approach to enhance transferability by leveraging cross-modal data augmentation and semantic diversification through typography-based adversarial examples.\n2. 
The introduction of the Multi-semantic Angular Deviation Score (MADs) as a metric for quantifying information diversity also reflects the technical rigor of the study." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the transferability of adversarial examples across MLLMs. Although MLLMs excel in cross-modal interaction and comprehension, they remain vulnerable to transferable adversarial attacks, posing significant real-world risks. Leveraging two key factors—information diversity in adversarial generation and cross-modal editing—the authors propose the Typography Augment Transferability Method (TATM), which enhances adversarial transferability across MLLMs. Experiments demonstrate TATM’s effectiveness in applications such as \"Harmful Word Insertion\" and \"Important Information Protection." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There is a lack of in-depth analysis on the transferability of adversarial examples. For instance, to further understand how the targeted adversarial example influences response generation, the authors can compute the relevancy score of image patches related to the input question using GradCAM to obtain a visual explanation for both clean and adversarial images.\n2. Reference missing, such as \"On Evaluating Adversarial Robustness of Large Vision-Language Models\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "The paper does not appear to address ethical considerations, particularly given the focus on transferability, which could have real-world implications for production models." 
}, "flag_for_ethics_review": { "value": [ "Yes, Privacy, security and safety", "Yes, Potentially harmful insights, methodologies and applications" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The transferability of adversarial attacks across MLLMs is a timely and essential area of study, especially as MLLMs are increasingly integrated into commercial products.\n\n- Comprehensive experiments involving a substantial number of models have been conducted, which is critical for advancing research in transferability studies." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates vulnerabilities in Multimodal Large Language Models to transferable adversarial attacks. The authors introduce the Typography Augment Transferability Method (TATM), which uses typographic augmentation and cross-modal editing to enhance adversarial transferability across models. TATM proved highly effective in tasks like harmful word insertion and information protection." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper requires significant proofreading; several errors disrupt readability and make the reading process frustrating.\nThe Abstract contains grammatical issues, for example: \"Therefore, this paper as the first step to ...\" should likely read \"this paper serves as the ...\". 
Also, \"Furthermore, leveraging two key factors that influence transferability performance: 1) The strength of information diversity involved in the adversarial generation process; 2) Editing across vision-language modality information. We propose a boosting method ...\" lacks fluency.\nIn the Introduction, citations, particularly those listed in groups, should be enclosed in brackets for clarity.\nIn the Background section, adequate spacing is needed between method names and author names, e.g., \"Projected Gradient Descent (PGD)Madry et al. (2017)\" should read \"Projected Gradient Descent (PGD) Madry et al. (2017).\"\n\n- Numerous acronyms and newly introduced terms make the paper challenging to follow.\n\n- Transferability studies should ideally test transferability on black-box production models, such as GPT-4, Gemini, and real deployed systems.\n\n- The method’s dependence on typographic augmentation may restrict its applicability to certain scenarios or datasets. Exploring other forms of semantic augmentation could improve its generalizability and broaden potential applications." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Refer to Weakness" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The strengths of this paper include:\n- The authors' proposal of a white-box attack method to enhance the transferability of adversarial examples. They conducted experiments on various tasks, such as \"Harmful Word Insertion\" and \"Important Information Protection,\" thoroughly analyzing the performance of different methods across various MLLMs, including those with fixed vision encoders and cross-vision encoders. \n\n- The introduction of the Multi-semantic Angular Deviation Score (MADS) contributes to improving the interpretability of adversarial examples, offering a valuable tool for understanding their behavior." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores the security vulnerabilities of Multimodal Large Language Models (MLLMs), with a focus on the transferability of adversarial examples. The authors introduce the Multi-semantic Angular Deviation Score (MADS), a quantitative metric for analyzing the adversarial transferability of different image samples. Additionally, they propose the Typography Augment Transferability Method (TATM), which enhances adversarial transferability by leveraging information diversity and cross-modal editing. Through experiments, the authors demonstrate the effectiveness of TATM in improving adversarial transferability." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The weaknesses of this work include:\n- The authors introduce two real-world applications: \"Harmful Word Insertion\" and \"Important Information Protection.\" However, the paper does not provide a clear explanation of the specific setups for these applications. Both Figure 1 and the accompanying text fail to clarify these aspects. Additionally, the authors do not justify the choice of \"Suicide\" and \"Unknown\" as target outputs, leaving their rationale unclear. Moreover, the explanations for the baseline methods, including DIM, BC, SIM, SIA, and TIM, are also insufficiently detailed.\n\n- The proposed adversarial example generation method is essentially a white-box attack, where the image is initialized using typographic words. However, the advantage of this initialization is not demonstrated. The authors neglect to provide ablation studies, such as comparing adversarial training using original image for initialization or image patch for initialization.\n\n- The authors' exploration of the different word types of typographic words embedded in images lacks clear justification. Intuitively, if the target output is \"suicide,\" the typographic words used in the images should be contextually relevant words or related images. Without such relevance, the significance of testing transferability across arbitrary word types becomes questionable.\n\n- The paper also suffers from numerous typos, indicating a need for significant improvement in writing quality. For instance:\n\n1. Line 124: \"Furthermore, data-augmentation methods Data augmentation has received more attention because of the ease and efficiency of implementation.\"\n2. Line 195: Multi-semantic Angular Deviation Score (MADS), and Line 206: \"we present the Mean Absolute Deviation Scores (MADS)\"—the inconsistent use of different full forms for the same acronym is unacceptable.\n3. 
In Algorithm 1, there are multiple issues with inconsistent capitalization, symbol notation, and typographical errors, including \"TypoT.\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024typography,\ntitle={Typography Leads Semantic Diversifying: Amplifying Adversarial Transferability across Multimodal Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vF4RhEPGtb},\nnote={under review}\n}" }, "abstract": { "value": "Recently, Multimodal Large Language Models (MLLMs) achieve remarkable performance in numerous zero-shot tasks due to their outstanding cross-modal interaction and comprehension abilities. However, MLLMs are found to still be vulnerable to human-imperceptible adversarial examples. In the exploration of security vulnerabilities in real-world scenarios, transferability, which can achieve cross-model impact, is considered the greatest threat posed by adversarial examples. However, there is currently no systematic research on the threat of cross-MLLMs adversarial transferability. Therefore, this paper as the first step to provide a comprehensive evaluation of the transferability of adversarial examples generated by various MLLMs. Furthermore, leveraging two key factors that influence transferability performance: 1) The strength of information diversity involved in the adversarial generation process; 2) Editing across vision-language modality information. We propose a boosting method called Typography Augment Transferability Method (TATM) to investigate the adversarial transferability performance across MLLMs further. 
Through extensive experimental validation, our TATM demonstrates exceptional performance in real-world applications of \"Harmful Word Insertion\" and \"Important Information Protection.\"" }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Adversarial Transferability; Multimodal Large Language Models; Data Augmentation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/8fe31fd7f939dee1f471980dec7e364b73df2c53.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/526a7da29d55d496e31c0fdfb90503f9693cc58a.zip" }, "title": { "value": "Typography Leads Semantic Diversifying: Amplifying Adversarial Transferability across Multimodal Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vFVjJsy3PG
Geometric Representation Condition Improves Equivariant Molecule Generation
main
Active
molecule generation;equivariant generative models;representation;geometric deep learning;diffusion models
learning on graphs and other geometries & topologies
5;5;5;5;8
4;3;3;3;3
3;4;3;2;3
3;3;2;2;4
3;4;3;3;3
5.6
3.2
3
2.8
3.2
-0.25
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weakness" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The key strength of GeoRCG lies in its innovative use of geometric representations to condition molecular generation. By transforming the generation problem into a two-stage process—first generating a geometric representation and then generating the molecule conditioned on this representation—the paper introduces an effective way to simplify the complex task of molecular generation. This approach addresses major challenges like handling 3D geometric symmetries and provides a significant improvement over existing methods that attempt to directly learn molecular distributions.\n\nThe clear and structured presentation of the methodology, supported by well-executed empirical evaluations and visual explanations, adds to the clarity and accessibility of the paper. GeoRCG's advancements could have a significant impact on drug discovery and material design, highlighting its importance for both research and practical applications." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper, titled \"Geometric Representation Condition Improves Equivariant Molecule Generation\" (GeoRCG), presents a novel approach to improving molecular generative models by incorporating geometric representation conditions. The GeoRCG framework divides the molecule generation process into two stages: first, generating an informative geometric representation; second, generating a molecule conditioned on this representation.\n\nThe core idea is to first generate a compact geometric representation of a molecule using a pre-trained geometric encoder. This representation captures essential information about molecular structure without the complexity associated with 3D symmetries, making the generation task simpler and more effective. Leveraging this representation, the second stage uses a molecule generator to produce the final molecule. The framework employs EDM as the base generator and shows significant improvements in both unconditional and conditional molecule generation tasks. Specifically, GeoRCG achieves an average 31% performance gain over state-of-the-art methods on challenging conditional generation tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the paper emphasizes the use of geometric representations to simplify the generation task, there is insufficient analysis of how different pre-trained encoders impact the overall quality of the generated molecules. The choice of pre-trained encoders (UniMol and Frad) is central to the approach, but the authors do not explore how variations in the pre-training dataset or encoder architecture affect the representations. 
Conducting a more comprehensive analysis, such as comparing multiple pre-trained models trained on different datasets or architectures, would help clarify the impact of representation quality and improve confidence in the method’s robustness.\n\n\nThe representation generator aims to remove symmetries such as O(3) and S(N), but the impact of symmetry removal on downstream tasks is not thoroughly analyzed. Specifically, it would be beneficial to explore whether there are specific symmetries that contribute positively to certain molecular properties or whether removing all symmetries has unintended negative effects on some downstream applications. Conducting ablation studies that selectively preserve certain symmetry properties could offer insights into how symmetry affects molecule generation and provide a more nuanced understanding of its role.\n\nIn the conditional generation setting, the paper discusses training the representation generator on (molecule, property) pairs. However, this strategy is limited to simple properties like HOMO-LUMO gap, polarizability, etc., and there is no clear extension for complex properties such as molecular binding affinity or ADMET (Absorption, Distribution, Metabolism, Excretion, and Toxicity) properties. Such properties typically require more context or knowledge beyond geometric structure alone. Addressing this limitation, either by discussing possible extensions or incorporating more sophisticated conditioning mechanisms (e.g., using multi-modal data such as 3D structure and protein targets), would make GeoRCG more applicable to real-world drug discovery problems." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Could you add bond length and bond angle metric comparisons for the GEOM Drugs dataset? Providing these metrics would offer a more complete evaluation of the model’s performance on larger and more complex datasets.\n\nWould you consider comparing your model with others like JODO, EQGATDiff, or SemlaFlow, which generate the full graph including bonds? A comparison with these models, especially SemlaFlow due to its speed and reduced number of steps, could be particularly beneficial in highlighting the advantages and limitations of your approach.\n\nIn the paper, there is no reference to the GEOM paper, and there is a claim: “Crucially, many structures in GEOM-DRUG lack the equilibrium conditions necessary for pre-training methods that enable effective learning of force fields,” which could be misleading. All GEOM Drugs molecules have optimized geometries with respect to GFN2-xTB energy calculations. Could you reconsider this statement? 
Clarifying this point and accurately referencing the GEOM dataset would enhance the credibility and accuracy of your paper" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Improved Speed through Latent Diffusion: Latent diffusion has great potential to enhance the speed of 3D molecule generation. In this paper, the authors demonstrated that they were able to reduce the number of diffusion steps from the commonly used 1000 or 500 to just 100 without any performance drop. Performing diffusion in a simpler latent representation appears to be a highly effective approach.\n\nBetter Bond Length and Angle Distributions on QM9 Dataset: The proposed model generates bond length and bond angle distributions that more closely resemble those of the QM9 dataset. This is a reasonable metric for assessing the quality of generated 3D structures and indicates an improvement over previous methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a theoretical foundation and specific implementation of latent diffusion for molecular generation. Instead of directly operating with molecular graphs (e.g., continuous Euclidean coordinates and categorical atom types), the method proposes projecting molecules into a latent space and using latent representations that are O(3) and SO(3) invariant. Diffusion is performed in this simplified latent space, and then the molecules are reprojected to predict their structures. This approach reduces computational costs, resulting in a denoising neural network that is much smaller and simpler, as well as reducing the number of steps required for diffusion."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Limited 3D Metrics on Larger Datasets: The only reported 3D metric for the more realistic and larger GEOM Drugs dataset is atom stability. Since the atom stability is only 0.86 for GEOM Drugs itself, this raises questions about the reliability of the metric. A more comprehensive and accurate comparison is required to fully assess the model’s performance on larger datasets.\n\nOverlooking Models That Do Not Rely on External Software: The paper states that models such as MiDi and LDM3DG use domain knowledge through Open Babel, which gives them advantages. However, there are models like JODO, EQGATDiff, and SemlaFlow that directly predict bonds without relying on external software. Including these models in comparisons would provide a more comprehensive evaluation and highlight the strengths and weaknesses of the proposed method.\n\nQuestionable Reliance on Lookup Tables: The reliance on a lookup table for bond lengths is questionable. Depending on the molecular configuration and the specific energy calculation method used (which is GFN2-xTB for GEOM Drugs), bond lengths can vary within a 10% interval. This variability suggests that a static lookup table may not accurately capture the nuances of bond lengths across different molecules.\n\nLack of Comparison with Faster Models: While the method aims for faster molecule generation by reducing the number of diffusion steps, previous models have already utilized flow matching to reduce steps to 200 (EquiFM) or even 20 (SemlaFlow). Including these models in the comparison would strengthen the research by providing context for the improvements and demonstrating how the proposed method stands relative to existing fast-generation techniques." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Is it possible to get (during revisions) a more extensive effort to match or exceed SOTA for the DRUG dataset? This would raise my reviewer score. At the moment, I think this paper is a technical but not revolutionary improvement and may belong to a journal or another venue rather than ICLR. If the improvement over larger molecules is demonstrated, I think it may be more of interest to this community." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "This paper is well-written and easy to read. The authors present their results clearly, and the method is also well described." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces an intermediate representation approach to find better property conditioning for molecular generation. The authors show success for properties such as polarizability (\\alpha) over the QM9 data set, but other state-of-the-art models outperform the model on the more extensive dataset DRUG."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weakness of this approach is that the performance in the DRUG dataset does not beat SOTA. QM9 is a great research-level dataset for small molecules, but DRUG is industrially relevant and has more realistic molecules." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How does low-temperature sampling impact prior DRUGS baselines, as it is used in Table 1? Methods mentioned, such as Chroma, suggest that it can have a significant impact.\n- How does this same method apply to other models beyond EDM given it is a general framework?\n- How does the performance vary as a function of sampling steps for DRUGs and their respective benchmarks?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Overall GeoRCG demonstrates how leveraging powerful pretrained molecule representations to condition generative models can improve results for QM9 molecules.\n- Strong evidence shows that guidance mixed with low-temperature sampling improves QM9 results in Figure 4.\n- By conditioning the cheap representation generator on the property of interest it pushes the complexity of conditional generation to be more manageable." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce GeoRCG, a general framework to enhance conditional molecule generative models by including geometric representations. GeoRCG splits the generation process into two parts, the first being to create an informative geometric representation, followed by generating a molecule conditioned on this representation. Using GeoRCG they can improve upon base EDM for QM9 and GEOM-DRUGs datasets, reducing the number of inference steps from 1000 to 100." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- There is an overemphasis on QM9 in the benchmarks, with little attention to the more challenging GEOM DRUGS which more accurately represents drug-like molecules. Molecule stability, connectivity, and other metrics are included in MIDI but have not been reported. This is important, especially for a method that demonstrates improvement over base EDM for QM9, since for Drugs EDM obtains only 5.5% molecule stability and 40.3% after OpenBabel (numbers taken from MIDI).\n Connectivity is not reported for any method, which is important for 3D molecule generation. If the molecule is not connected, RDKit can often still parse it, but it is not a single-molecule structure.
From MIDI EDM + OpenBabel, the result is only 41.4%, which is quite low.\n\nOverall, it's hard to understand what is translatable as a general method for other biological tasks since QM9 is a toy task." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Please define EDM in your abstract.\n- The writers vaguely refer to the improved “quality” of the generated molecules as the first achievement of this method. This is too vague—what does quality mean here? \n- Again, the second bullet talks about model performance, citing a 31% increase. What is this “performance”? Please cite the metric and the benchmark here.\n- Last bullet—reduce the number of diffusion steps by what percent on average? Please be specific here at this point in the paper." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Good background, appreciate the previous work review in the appendix.\n\nAppreciate the heat maps in Figure 4, good demonstration of these parameters allowing a scientist to specify the tradeoff between properties depending on application." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel framework to improve 3D molecular generation by integrating certain geometric representation conditions into the molecular generation process, and then conditioning on these representations to generate a molecule." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are no glaring weaknesses of this paper. All choices made have been clearly communicated and motivated. Important limitations of the method are clearly outlined in the Conclusions section of the paper.\n\nSmall change, but please change the colors used in the tables for red/green colorblind readers." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024geometric,\ntitle={Geometric Representation Condition Improves Equivariant Molecule Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vFVjJsy3PG},\nnote={under review}\n}" }, "abstract": { "value": "Recent advancements in molecular generative models have demonstrated substantial potential in accelerating scientific discovery, particularly in drug design. However, these models often face challenges in generating high-quality molecules, especially in conditional scenarios where specific molecular properties must be satisfied. In this work, we introduce GeoRCG, a general framework to enhance the performance of molecular generative models by integrating geometric representation conditions. We decompose the molecule generation process into two stages: first, generating an informative geometric representation; second, generating a molecule conditioned on the representation. 
Compared to directly generating a molecule, the relatively easy-to-generate representation in the first stage guides the second-stage generation to reach a high-quality molecule in a more goal-oriented and much faster way. Leveraging EDM as the base generator, we observe significant quality improvements in unconditional molecule generation on the widely-used QM9 and GEOM-DRUG datasets. More notably, in the challenging conditional molecular generation task, our framework achieves an average 31\\% performance improvement over state-of-the-art approaches, highlighting the superiority of conditioning on semantically rich geometric representations over conditioning on individual property values as in previous approaches. Furthermore, we show that, with such representation guidance, the number of diffusion steps can be reduced to as small as 100 while maintaining generation quality superior to that achieved with 1,000 steps, thereby significantly accelerating the generation process." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "molecule generation", "equivariant generative models", "representation", "geometric deep learning", "diffusion models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review."
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/8f435d71c0a1378674f45371ce3ec141ed594f84.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/8de11f2b3ca6580ee8d2d9edc5e3e089451917bc.zip" }, "title": { "value": "Geometric Representation Condition Improves Equivariant Molecule Generation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vFanHFE4Qv
Neuron Platonic Intrinsic Representation From Dynamics Using Contrastive learning
main
Active
representation learning;biology;neuroscience;contrastive learning
applications to neuroscience & cognitive science
3;5;6;6
3;4;3;5
1;2;2;3
2;3;3;3
2;2;2;2
5
3.75
2
2.75
2
0.492366
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "Line 54 and 55: \n- Although VICReg does not explicitly use negative pairs like other contrastive methods, it still indirectly promotes the separation of different samples through the variance and covariance regularization terms. The variance regularization ensures that the representations have sufficient spread, meaning that for dissimilar samples, their feature representations are less likely to collapse into a small region of the embedding space. The covariance regularization helps ensure that features are not highly correlated, promoting more distinct and diverse representations. VICReg relies on a more subtle approach, achieving the desired separation through variance and covariance regularization.[1] \n\nFigure 1: \n- In this paper, we set the optimization objective for obtaining time-invariant intrinsic representations of neurons as follows: clips(segments) from the same neuron should have a higher average similarity than clips from different neurons. Figure 1 illustrates how this objective can be achieved using a contrastive learning approach, where different clips from the same neuron are treated as positive pairs and clips from different neurons are treated as negative pairs. This is the main idea conveyed by Figure 1.\nHowever, when selecting a specific contrastive learning method, we realized that directly optimizing for the separation of clips from different neurons may be too rigid (as different neurons do not necessarily represent dissimilarity). Therefore, we chose to implement VICReg, which optimizes using only positive pairs, but indirectly separates dissimilar samples through regularization terms. 
We have revised the caption of the figure to explain this in more detail.\n\nLine 134: \n- In this paper, we are not learning representations of neuron populations, nor are we performing dimensionality reduction on neuronal population activity. Instead, we aim to learn a time-invariant intrinsic representation for each individual neuron. Each neuron is considered separately, and the activity information from the remaining neurons in the population is treated as peripheral information (equation 1: X_st) for that specific neuron, which acts as an auxiliary variable. The peripheral information for each neuron is processed and encoded using CEBRA.\n\nLine 149: \n- If a neuron has data from multiple sessions, we will select segments of 512 time points from different sessions as positive pairs for that neuron. If a neuron only has data from a single session, we will randomly select non-overlapping segments, with lengths randomly ranging from 200 to 512 time points, as positive pairs for that neuron. During implementation, we have ensured the use of an algorithm that prevents overlap between these segments.\n\nLine 155:\n- A session refers to a single stage or individual experiment in a neuroscience study where data is collected. If you want to know more about sessions, please read this link[2]. The input dimension of the model is the same as the length of the segment. For example, if there are 300 time points, the first session's Xse would be represented as [1, for i in range(300)].
Instead, we can set a consistent output size, and the pooling operation will automatically adjust to accommodate the varying segment lengths. Adaptive average pooling is capable of handling inputs of different lengths and extracting important feature information, mapping them to a consistent dimensionality.\n\nLine 202: \n- It is set to a fixed value of 1, as we referenced in the original VICReg paper and other works that utilize VICReg, which all adopt this setting. The choice of the target value is an interesting topic worth further exploration, especially in terms of its impact on negative pairs. However, this would become the focus of another paper dedicated to contrastive learning.\n\nFigure 4: \n- We replaced the original bar charts with a table to present more detailed information, including the precision, recall, and F1 scores details. \n\n[1] Bardes, Adrien, Jean Ponce, and Yann LeCun. \"Vicreg: Variance-invariance-covariance regularization for self-supervised learning.\" arXiv preprint arXiv:2105.04906 (2021).\n[2]https://allensdk.readthedocs.io/en/latest/visual_behavior_optical_physiology.html#session-structure" }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": 
null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* Line 54 and 55: the paper is motivated to make segments of activity from the same neuron or similar neurons to converge, while dissimilar neurons diverge. However, VICReg only uses positive pairs, and adds regularization to prevent representation collapse. How does using VICReg help push dissimilar neurons apart according to the motivation? \n* Figure 1: description in the texts and accompanying caption to understand this figure are missing. The figure does illustrate negative pairs, however, VICReg does not use negative pairs as mentioned above. How are negative pairs processed by the NeurPIR model?\n* Line 134: the goal was to learn intrinsic neuronal representations on neuron population data, but it does not seem that activity of other neurons are used in the model (equations 1 to 4). How does the model use population dynamics to learn neuronal representations?\n* Line 149: what is the length of one segment? Is there a chance that two randomly selected segments overlap with each other?\n* Line 155: what is session information $X_{se}$? What are dimensions of $X_{st}$, $X_{be}$, $X_{se}$, $X_{si}$?\n* Line 161: can the author provide more details on what is adaptive average pooling?\n* Line 202: how the target value $\\mu$ was set?\n* Figure 4: how about precision, recall, and F1 scores (to be consistent with Tables 1 and 2)?\n* It would also be helpful to provide additional details on the architecture design and training process, e.g. 
hyper-parameters, training time, etc." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The method combines CEBRA and VICReg to incorporate surrounding information and learn enhanced representations with contrastive learning, which is a novel approach.\n* The paper is well motivated and tackles an important problem in neuroscience.\n* The writing is clear and logical." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduced NeurPIR, a self-supervised contrastive learning approach to learn an intrinsic representation for each neuron from population dynamics. The method leveraged CEBRA and VICReg for representation learning, and was evaluated using synthetic and two mouse datasets, showing the ability to learn representations indicative of neuronal intrinsic properties that are decodable by downstream classifiers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The usefulness of the out-of-domain evaluation on the Steinmetz dataset is questionable. It seems, as described on line 266, that the self-supervised contrastive learning is performed on all neurons of all mice, including the test mice, i.e. the self-supervised model and classifier have to be retrained every time new mice come in. Can any part of the model at least be reused during test time?\n* Model architecture is not clearly explained by figures or texts. Some details of the methods are missing (elaborated in Questions).\n* Some ablation studies are missing that would otherwise be helpful to understand the method in greater detail. For example, an ablation on which surrounding information included in the CEBRA framework has the most impact, or an ablation on different choices of contrastive learning methods besides VICReg might be helpful.
\n* Results in tables and figures do not have error bars. Adding sensitivity analyses would be helpful to quantify how significant the improvements of NeurPIR over the baselines are." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. What is the time complexity of NeuPIR compared to other baselines?\n2. How many data samples are required to learn effective embeddings?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The work is evaluated on three representative benchmarks, including one synthetic dataset and two neural datasets. It is compared against two baselines, demonstrating SOTA performance in most of the tasks.\n2. The work utilized a novel, distinct, and effective contrastive learning strategy to learn the platonic representation, compared to NeuPRINT, which learns time-invariant representations with a neuron-wise look-up table for dynamics forecasting during self-supervised learning, or LOLCAT, which uses label-guided representations with end-to-end supervised learning. \n3. The out-of-domain evaluation on unseen mice is an important question, which increases the soundness of the evaluations."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposed NeurPIR, which focus on learning a platonic representation from neural activities data to reflect the inherent properties and neuronal identity, relating to molecular information. The goal of this work is to learned representations robust to variations due to external stimuli and experiment conditions. It utilized the self-supervised multi-segment contrastive learning strategy from CEBRA, and learning representations for neurons with compare data from different segments, different behavior information, session information, and for neurons share similar functional roles to align closely in their representations. And they further aggregate the representation with adaptive average pooling to extract time-invariant representations. They further incorporate VICReg loss to enhance the prediction. The work is evaluated on three benchmarks: Izhikevich simulation model, spatial transcriptomics data with neural activities, neuron location with out-of-domain data, and compared with two existing baselines NeuPRINT and LOLCAT." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The core of the proposed approaches similar strategy as CEBRA to utilize the contrastive self-supervised approach for representation learning. The major difference from CEBRA is that it utilizes the adaptive average pooling to aggregated into time-invariant embedding, which might not necessarily guarantee the converge of the representation based on the data sampling. Further investigation could be done on how to guarantee converge based on different sampling strategy, or the requirement of amount of data to affect the predictive performance of the downstream tasks.\n2. 
The Steinmetz dataset evaluation decodes brain region from the learned intrinsic representations, which relates to analyzing the invariant properties of the neurons; however, brain region only coarsely reflects neuronal-level information. Would it be possible to evaluate on more fine-grained information such as the spatial location of individual units?\n3. Ablation studies of the VICReg loss should be performed.\n4. Sensitivity analyses (i.e. error bars) are not included in Fig 4; the effect of data shuffling, random initialization, etc. could be reported.\n5. Figure quality (i.e. font size, resolution) in the paper is limited; presentation and writing could be improved." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* The abstract says \"PRH posits that representations of different activity segments of the same neuron converge, while segments from inherently dissimilar neurons diverge\" (lines 19-21).
What representations this is referring to is not clear in context.\n* The introduction should be more specific about what the method actually is, what the contrastive objective is, etc.\n* Identifying cell type seems like a paradigmatic task where compressing neural activity into binned firing rate loses important information which may be contained in the spike train.\n* The method section should provide more information about what CEBRA is.\n* The cross-animal generalization experiment (5.3) is interesting but as predicting cell type is the more relevant problem I would be interested to see a similar generalization experiment with a cell type dataset as in 5.2." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "* Understanding the role of cell type diversity is a major challenge for neuroscience. A contrastive approach like this one, which can in principle be trained without labels, is likely to be the way forward given the difficulty of obtaining cell type information and activity simultaneously.\n* Furthermore, learning representations which preserve cell property information is a application of obvious interest to the ICLR community.\n* This paper performs the right experimental evaluations for its method, starting with a synthetic dataset, then moving to a real dataset where ground truth cell type information is available, and finally a real dataset with only brain region information, and compares against appropriate competitor methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a contrastive learning method (NeuPIR) for analyzing single-neuron activity data, with the goal of obtaining a representation which preserves property-level similarity of neurons (e.g. cell type). 
The method combines a variational autoencoder (CEBRA) with a contrastive loss (VICreg). NeuPIR is applied to a synthetic dataset and real neural datasets with cell type and brain area information, and compared against a few other methods of feature extraction (PCA, UMAP, NeuPRINT, & LOLCAT)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The framing in terms of the \"Platonic Representation Hypothesis\" [1] feels like hype chasing. What is being presented here is just a contrastive learning method which is meant to identify similarity and differences in properties of neurons. This is not actually related to the PRH in a meaningful sense, which is about how representations _of the world_ converge across models in completely different domains. I am willing to raise my score if the framing of the paper (primarily title, abstract, introduction) is substantially revised to take this into account.\n* The details of the method are not clear to me. CEBRA [2], at least as proposed, takes in a window of activity across a population of neurons (among other covariates) and maps it to a single latent point. I believe that here CEBRA is being applied to the activity of single neurons and their covariates but this is an important distinction which is not made explicit in the text.\n* I am not convinced that other methods are being fairly compared to NeuPIR. There is a lack of detail about how hyperparameter selection occurred in the experiments which makes this difficult to evaluate. For instance, LOLCAT fails to label any neurons at all as Sst in Table 2 which suggests it wasn't tuned correctly for the task.\n* The method is not significantly original as it is combining the pre-existing CEBRA architecture [2] with the VIC contrastive loss [3], making this nearly a pure applications paper. 
This is not necessarily a flaw but does put the burden of innovation on the value of its scientific findings.\n\n[1] Huh, Minyoung, et al. \"The platonic representation hypothesis.\" arXiv preprint arXiv:2405.07987 (2024).\n[2] Schneider, Steffen, Jin Hwa Lee, and Mackenzie Weygandt Mathis. \"Learnable latent embeddings for joint behavioural and neural analysis.\" Nature 617.7960 (2023): 360-368.\n[3] Bardes, Adrien, Jean Ponce, and Yann LeCun. \"Vicreg: Variance-invariance-covariance regularization for self-supervised learning.\" arXiv preprint arXiv:2105.04906 (2021)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1) How exactly are the experiments on the Steinmetz dataset supporting the claims of neuron intrinsic representation learning?\n2) For the Bugeon Dataset: why was only data of mouse A used?\n3) For the Steinmetz Dataset: why wasn't e.g. a 10-fold cross-validation used (folds along mouse identity)?\n4) Would you expect your method to also perform well if the number of Izhikevich neuron “types” was greatly increased (e.g.
10, 20, or 30 categories)?\n5) Will the exact labels used for the three experiments, in particular for Bugeon, be available for others to reproduce the experiments exactly?\n6) Would you expect different results if max-pooling was used instead of mean-pooling?\n7) Is there any more literature doing exactly this kind of single-neuron characterization based on neuron population activity (possibly on other datasets)? It seems to be challenging to find more related literature." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper tackles a relevant, yet extremely challenging task: extracting individual neuron characteristics from neuron population data.\n- The paper convincingly demonstrates the proposed method's ability to do so to some extent, in particular compared to other available methods.\n- The related literature, methods, and experiments sections are very detailed and well written, aiding interpretability of the results and reproducibility." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a novel method called NeurPIR for extracting individual neuron representations based on neuron population recordings in a self-supervised fashion. For this purpose NeurPIR essentially combines a neural-data-specific sampling method, average-pooled CEBRA embeddings, and a VICReg contrastive loss in a cohesive manner. The method is evaluated on a suite of synthetic and real neuron population activity recordings, where it shows superior performance to alternative methods."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* 1\\) The Steinmetz dataset experiment: likely doeant support the claim of being able to extract neuron internal represenations, as the method was trained to predict the location of the neurons, arguably an external neuron property?\n - Even if two exactly same neurons were integrated into two different brain regions they are likely identifiable through their activity as the differences in input to the two regions typically significantly deviates.\n - To claim exactraction of neuron intrinsic properties, the above effect would have to be significantly smaller than neuron intrinsic properly related differences, which is not validated.\n* 2\\) The Conclusion: is crucially missing a paragraph on the limitations of the proposed method and the experimental findings. The paper would significantly benefit from it.\n* 3\\) The Abstract: the first half is very confusing to read and doesn’t make it clear what the paper is actually about (for someone from a general computational neuroscience background). The paper would therefore greatly benefit from simpler and more concrete wording there.\n - Examples of terms that were not really helpful to me : “decoupling of intrinsic properties, “time-varying dynamics”, “dynamic activities”, “varying signals”, etc. What property, whos dynamics, what activity, which signal?\n - In particular “intrinsic properties” should be immediately followed by examples (later given), and the first mention of PRH feels out of place and its unclear how it relates to the sentences surrounding it. \n - Furthermore, in the context of computational neuroscience “what information is conveyed by neural activities” most often implies figuring out what neurons try to communicate to process information, which the paper is not about. \n - Finally, a statement like “NeurPIR captures the preset hyperparameters of each neuron” implies precise recovery e.g. 
$b = 0.2$. The proposed method was rather shown to capture rough categorical differences instead. More appropriate would be e.g. “NeurPIR captures the class/category/type of each neuron”.\n* 4\\) The Figures: have barely readable label sizes and legends, or missing annotation.\n - Figures 2 and 3 feature barely readable label sizes and legends\n - Figure 1 could use more labels that help relate the model description to the images in the figure. (e.g. X, H, Z, P, F, CEBRA, VICReg)\n* 5\\) The Tables (or their descriptions): would benefit from some aggregate metrics across all categories.\n* 6\\) For Reproducibility: manual labeling like in the Bugeon dataset is hard to reproduce, and it's unclear to me from the text whether these labels will be / are provided." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024neuron,\ntitle={Neuron Platonic Intrinsic Representation From Dynamics Using Contrastive learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vFanHFE4Qv},\nnote={under review}\n}" }, "abstract": { "value": "Unlocking the secrets of neuronal activity—the information conveyed by neural dynamics—remains one of the grand challenges of neuroscience. \nThe Platonic Representation Hypothesis posits that behind different modalities of data (what we sense or detect), there exists a universal, modality-independent representation of reality. Inspired by this, we treat each neuron as a system, where we can detect the neuron’s multi-segment activity data under different peripheral conditions. We believe that, similar to the Platonic idea, there exists a time-invariant representation behind the different segments of the same neuron, which reflects the intrinsic properties of the neuron’s system.
The optimization objective for obtaining the intrinsic representation of neurons should satisfy two criteria: (I) segments from the same neuron should have a higher similarity than segments from different neurons; (II) the representations should generalize well to out-of-domain data. To achieve this, we employ contrastive learning, treating different segments from the same neuron as positive pairs and segments from different neurons as negative pairs. During the implementation, we chose the VICReg, which uses only positive pairs for optimization but indirectly separates dissimilar samples via regularization terms. To validate the efficacy of our method, we first applied it to simulated neuron population dynamics data generated using the Izhikevich model. We successfully confirmed that our approach captures the intrinsic properties of each neuron as defined by preset hyperparameters. We then applied our method to two real-world neuron dynamics datasets, including spatial transcriptomics-derived neuron type annotations and the brain regions where each neuron is located. The learned representations from our model not only predict neuron type and location but also show robustness when tested on out-of-domain data (unseen animals). This demonstrates the potential of our approach in advancing the understanding of neuronal systems and offers valuable insights for future neuroscience research." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "representation learning", "biology", "neuroscience", "contrastive learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/afeea8d8d938b19217b4b84225d4867e442c9625.pdf" }, "presentation": null, "primary_area": { "value": "applications to neuroscience & cognitive science" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/fe6bd7efe951b9de4a5be2d8d06ddd310fd3d33c.zip" }, "title": { "value": "Neuron Platonic Intrinsic Representation From Dynamics Using Contrastive learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vFfVXSP24J
ECG Instruction Tuning on Multimodal LLMs for Report Generation: Benchmark and Evaluation
main
Active
ECG;Instruction Tuning;LLMs
datasets and benchmarks
3;5;5;6;6;8
5;4;4;4;3;3
2;3;2;3;3;4
2;2;3;3;3;3
3;3;2;3;3;3
5.5
3.833333
2.833333
2.666667
2.833333
-0.889297
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- My only question is about creating medical reports from the PTB-XL database. To my knowledge, this database does not include ECG reports, just tabular data. If these reports have been created using these labels, could the authors detail how the proposed method would differ from using a classification model that infers these labels and creating a medical report based on them in the same way that the GT report has been created for comparison?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- As discussed in the manuscript, the automatic generation of medical reports is a useful tool with severe real word applications.\n- The authors have conducted an extensive evaluation involving multiple LLM models.\n- The results show that MEIT is able to perform the task for which it has been designed with remarkable performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce a new approach for generating ECG reports from ECG signals. They present MEIT, a technique that fine-tunes the best-performing large language models (LLMs) through instruction tuning. 
This process uses representations from an ECG encoder, which is trained simultaneously with the fine-tuning of the LLM. Furthermore, due to the lack of relevant existing literature, the authors propose a benchmark using existing databases such as MIMIC or PTB-XL." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- In my opinion, there is a lack of context as to how the proposed method differs from the different Instruction tuning methods for computer vision, mentioned in line 82. Although I understand that the objective is different (between explaining an image and generating a medical report) I consider that the optimization method is similar in both cases and I would like to know what makes MEIT a special optimization method compared to MiniGPT-4 or the others mentioned. \n\n- Linked to the above, the authors point out that, due to the particularities of ECG signals, these Computer Vision methods are not applicable. However, the authors propose an ECG encoder that is based on Convolutional Layers, which is a commonly used architecture in Computer Vision, so it is not very clear to me what makes these methods inapplicable in this particular context. IMO, if some of these computer vision methods can be applied to the proposed framework, they should serve as baselines for MEIT evaluation\n\n- Although the authors comprehensively evaluate different LLM models, they only use one ECG encoder. I believe that using other architectures such as Transformers or S4 models, which are also used to process cardiac signals, could improve the results." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Why was the simple 1-D CNN layer architecture with average pooling selected for the ECG encoder? \n2. As mentioned in the weaknesses, what is the intuition behind the selected models for the robustness analysis and the evaluation of alignment with human expert annotations? Also, who specifically are \"human medical experts\" mentioned in section 5.2.4? \n3. What is the reason or intuition for lightweight attention-based concatenated-fusion outperforming LLaVa and Flamingo alignment methods?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The challenge of ECG Instruction Tuning compared to images for multimodal LLMs in the introduction and medical report generation, instruction tuning, and LLMs for ECG in the related works are well-written and organized.\n2. Many LLMs (~12 LLMs) ranging from GPT-2 models to more recent LLaMA-3-Instruct models are compared in the experiments section to assess the quality of report generation.\n3. The authors tackle an important area of research that has not been explored yet like the image-text multimodal domain." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes the MEIT framework, which is the first framework to enable ECG Instruction Tuning with LLM for the downstream task of ECG report generation. Within the framework, the authors also introduce the lightweight concatenated-fusion strategy to align the ECG and text modalities together. Finally, the authors also propose a benchmark that aims to assess the generated reports from the MEIT framework with various evaluation methods. The evaluation methods range from assessing the quality of the generated reports with conventional NLG metrics such as BLEU and METEOR to zero-shot generalizability, signal robustness, and comparison to human expert annotations using GPT-4o." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Of the three parts in the MEIT framework which are the ECG encoder, modality alignment, and LLM backbone, the ECG encoder only uses several 1-D CNN layers and average pooling. There are many ECG-specific architectures using SSL or other transformer-based architectures that can significantly improve the current ECG encoder. Some of these architectures should be compared and explored similar to how different methods were explored for the other two parts of the framework. \n2. For the Signal Perturbation Robustness task in the established benchmark, adding noise to ECG signals can potentially change the correct ground-truth report because ECG is very sensitive to noise, and there are many noise categories such as baseline wander and drift for ECG. Therefore, I am not sure if the robustness analysis is appropriate in this multimodal LLM ECG report generation setting.\n3. There are no specific reasons stated for conducting ablation experiments on a subset of the total number of LLMs used in report generation quality comparison. 
For example, BLOOM, OPT, LLaMA-1, Mistral were used for robustness analysis, LLaMA-2-Instruct and LLaMA-3-Instruct were utilized for evaluation of alignment with human expert annotations. However, there were no intuitive insights mentioned in the paper for these selections. \n4. There are many minor grammatical errors and typos in the paper. \nIn line 63 MELT should be MEIT\nLine 160 should be 800k \nLine 191 that require should be requires, and via directly injects should be revised in Line 194" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* Can you present the alignment scores between ground truth reports and human-written reports, and analyze the results? If the ground truth reports show high alignment with human-written reports, why do we train the model instead of directly using these reports given that the reports already can be acquired from the automatic generation algorithms from ECG machines? If not, how can we trust the model trained in a supervised manner with these \"unaligned\" reports?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The paper is well written.\n* The authors have conducted an extensive set of experiments with various LLMs combined with ECG signals, which is quite impactful since combining LLMs with physiological signals is not yet explored enough in this field." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work presents a multi-modal framework to generate ECG reports from ECG signals by instruction tuning LLMs on paired ECG ECG and reports datasets. To add a signal modality to the LLMs, the authors utilize an ECG encoder composed of 1D convolutional layers, and manipulate the LLMs to fuse the ECG embeddings in a self-attention stage. The authors evaluate 12 different LLMs with 2 different datasets (MIMIC-IV-ECG and PTB-XL) on the report generation tasks using some conventional metrics (BLEU, ROUGE, etc) to compare the generated reports with the ground truth reports." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The motivation of this work is not convincing enough. Specifically, the authors do not explain why the report generation task in the ECG domain is needed and how the proposed task can be applied to medical practice, which is very important in medical field. The paired reports in MIMIC-IV-ECG or PTB-XL used in this work are mostly composed of keyword-based statements (e.g., \"Normal ECG\", \"Sinus Rhythm\", etc.), which is automatically generated from built-in algorithms from ECG machines. 
Therefore, if models are trained in a supervised manner using these reports as ground truths, they merely approximate those algorithms, which does not seem meaningful to my understanding.\n* The authors repeatedly state their framework enables the LLMs to generate professional-grade reports, which seems like an overclaim. Although the authors show the alignment scores between LLM-generated reports and human-written reports on the 500 samples from the PTB-XL dataset, they still should show alignment scores between the ground truth reports (which have been used to train the LLMs) and human-written reports as a baseline as well." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Why is it important to handle catastrophic forgetting of general knowledge and not focus solely on the task of ECG report generation if we are training the model? Models can be specific to tasks, and that should not be a problem that needs to be addressed with additional overhead.\n\n\"ECG text data\" is vague; do you mean SCP statements in particular? SCP statements are the textual data that provide the information and technical details of ECG signals; however, some of the features noted in SCP statements are not merely language details and require reasoning and interpretation of the signal, e.g., computing the axis deviation for a given signal and reporting it.
Trusting an LLM on these values of SCP statements would be questionable without explanation and evidence of how these values are generated." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well-written and easy to understand. The authors maintain coherence and fluency throughout the paper with minimal language errors. The contribution of this work is of significant interest to the community, but the novelty is incremental. In my opinion, the authors do a great job of formulating the problem and the instructions for the text data, and of conducting a comprehensive evaluation showcasing proficiency in quality report generation, zero-shot capabilities, resilience to signal perturbation, and alignment with human expert evaluation.\nAdditionally, the proposed alignment approach aids in addressing the catastrophic forgetting of general knowledge." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents the Multimodal ECG Instruction Tuning (MEIT) framework, designed to automate ECG report generation using large language models (LLMs) and multimodal instruction tuning. The main contribution is the alignment of ECG signals with corresponding text descriptions to streamline report generation, across two popular ECG datasets (MIMIC-IV-ECG and PTB-XL). The authors present results using nine open-source LLMs. The results demonstrate the effectiveness of the proposed approach in quality report generation, zero-shot capabilities, resilience to signal perturbation, and alignment with human expert evaluation. The evaluation includes metrics like BLEU and ROUGE and aligns with human expert assessments, making it a comprehensive evaluation."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The contribution is marginal at best, with the primary contribution being the conversion of ECG-text pairs into a chatbot-style instruction format to facilitate self-attention-based learning between the ECG and text embeddings. The literature review could be better and more comprehensive. There is a decent body of work on ECG diagnosis and report generation with LLMs that has not been cited. Ex:\n1. Yu, Han, Peikun Guo, and Akane Sano. \"Zero-shot ECG diagnosis with large language models and retrieval-augmented generation.\" Machine Learning for Health (ML4H). PMLR, 2023.\n2. Yu, Han, Peikun Guo, and Akane Sano. \"ECG Semantic Integrator (ESI): A Foundation ECG Model Pretrained with LLM-Enhanced Cardiological Text.\" arXiv preprint arXiv:2405.19366 (2024).\nLine 194 is grammatically incorrect." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In the design of zero-shot experiments, the authors assume that training on MIMIC-IV and testing on PTBXL accounts for differences in population data and collection devices between databases. However, ECGs follow fixed patterns, and diagnostic criteria are globally standardized if these pattern changes are accurately captured. Moreover, there is no assurance that the training set from MIMIC-IV does not include abnormalities present in PTBXL.
Given that PTBXL covers 71 types of abnormalities, many are likely represented in the MIMIC-IV training data. This questions the validity of the approach as true zero-shot learning." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The MEIT framework innovatively aligns the ECG signal processing and report generation process through an instruction tuning approach, enabling LLMs to generate diagnostic reports. This represents a rare and significant advancement in the domain of automated ECG diagnosis.\n\nThe authors conducted comprehensive validation using well-established metrics and provided an extensive analysis of the model's stability and generalization performance, clearly demonstrating the feasibility of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a novel approach leveraging large language models (LLM) to automate the generation of diagnostic reports from ECG signals. Unlike conventional methods that focus solely on classifying signal anomalies, the proposed MEIT method aligns ECG signals with text generation through instruction tuning. The performance of the model is validated across multiple aspects, including report generation quality, zero-shot capability, noise robustness, and expert comparison, using two extensive ECG databases." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The authors focus heavily on comparisons involving instruction tuning of the language model, raising concerns about whether the chosen ECG encoder is an effective architecture. 
It would be beneficial to evaluate the use of pre-trained state-of-the-art models as the ECG encoder to ensure robustness.\n\nIn Section 5.2.4, the authors mention a comparison with 500 ground-truth reports but should clarify the source of these reports. Are they confirmed to be manually annotated by physicians? Additionally, how many types of abnormal ECG events are represented within these 500 reports? For a dataset like PTBXL, which includes 71 abnormal categories, having only 500 reports would mean at most seven data points per category, which may not be sufficient for thorough validation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Have you noticed any patterns in the test data that could provide insights into the strengths and weaknesses regarding the accuracy of the generated reports? \n2. How could the generated reports help in cases where the diagnosis is not so clear or that require further investigation? Or what is the main advantage of generating ECG reports when there exist classification models for several heart conditions?\n3. Although it would be interesting to test this framework for other signals such as EEG, what further work should be carried out to improve the current results? And does the performance of this approach match the state of the art methods for image data reporting?" 
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. The paper is original since it focuses on medical report generation of ECG signals, for which methods are not as developed as for images\n2. The experiments were carefully designed, covering multiple LLM models and reporting insightful metrics for benchmarking and quality assessment. The results are thoroughly explained.\n3. The paper is well-structured and explained, with detailed explanations and figures\n4. This work can prove to be relevant in the medical field domain, and help guide future research on the topic" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a framework to automatically analyze ECG recordings and provide medical reporting using an LLM multimodal approach. Using two different sources of ECG signals and reports (MIMIC-IV-ECG and PTX-XL), the authors benchmark multiple open-source LLMs and compare them with several metrics, evaluating the accuracy, quality and fluency, as well as testing the models ability to perform zero-shot transferability, robustness to signal noise, and comparing the generated reports with the ones from experts." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Some directions for improvement:\n1. Although the results are detailed and commented on, the paper lacks some analysis of its limitations. It would be relevant to know for which cases the models cannot provide an accurate report so that improvements could be considered. Are there medical conditions that are underrepresented in the datasets? Or particular ECG biomarkers that the model has trouble with?\n2. 
Although the model uses Gaussian noise to evaluate the robustness to signal perturbations, real-world noise present in ECG acquisitions is often more complex (e.g., baseline wander, motion artifacts, muscle artifacts...). Testing with more realistic signal perturbations could help understand the applicability to clinical settings.\n3. The paper doesn't compare the performance of the model with other AI approaches that analyze and interpret ECG signals. Since the reports often consist of simple statements related to the rhythm and ECG waveforms, which are extensively covered in the literature by classification models, the real impact and advantage of the proposed framework is not fully assessed. How could the generated reports help in cases where the diagnosis is not so clear or that require further investigation?\n4. Future directions are scarce and could be more specific. By recognizing the limitations of the approach (which are not clearly stated), there could be some pointers to study the feasibility and application in clinical environment, interpretability, improvement of its generalization ability, and integration of other types of data." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce the Multi-Modal ECG Instruction Tuning MEIT framework and benchmark, the first attempt to tackle ECG report generation with LLMs and multimodal instructions." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024ecg,\ntitle={{ECG} Instruction Tuning on Multimodal {LLM}s for Report Generation: Benchmark and Evaluation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vFfVXSP24J},\nnote={under review}\n}" }, "abstract": { "value": "Electrocardiogram (ECG) is the primary non-invasive diagnostic tool for monitoring cardiac conditions and is crucial in assisting clinicians. 
Recent studies have concentrated on classifying cardiac conditions using ECG data but have overlooked ECG report generation, which is time-consuming and requires clinical expertise. To automate ECG report generation and ensure its versatility, we propose the Multimodal ECG Instruction Tuning (MEIT) framework, the first attempt to tackle ECG report generation with LLMs and multimodal instructions. To facilitate future research, we establish a benchmark to evaluate MEIT with various LLM backbones across two large-scale ECG datasets. Our approach uniquely aligns the representations of the ECG signal and the report, and we conduct extensive experiments to benchmark MEIT with nine open-source LLMs using more than 800,000 ECG reports. MEIT's results underscore the superior performance of instruction-tuned LLMs, showcasing their proficiency in quality report generation, zero-shot capabilities, resilience to signal perturbation, and alignment with human expert evaluation. These findings emphasize the efficacy of our MEIT framework and its potential for real-world clinical application." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "ECG", "Instruction Tuning", "LLMs" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review."
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/8213c489a2fd58713b0698de0bbf9d5ad4338534.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "ECG Instruction Tuning on Multimodal LLMs for Report Generation: Benchmark and Evaluation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vFgmobsJiZ
Verbalized Machine Learning: Revisiting Machine Learning with Language Models
main
Active
Large Language Models
foundation or frontier models, including LLMs
3;3;5;5;6
4;4;4;4;5
1;2;2;2;4
2;2;3;2;4
2;3;3;4;4
4.4
4.2
2.2
2.6
3.2
0.666667
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "My main question lies in the motivation behind proposing the VML framework for classical machine learning tasks." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The idea of parameterizing a machine-learning model with natural language is niche. The new framework, Verbalized Machine Learning (VML), may be used to explain and distinguish between traditional machine learning and LLM-based learning schemas." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a framework called Verbalized Machine Learning (VML), which constrains the parameter space to human-interpretable natural language. In this framework, the input prompt of a large language model (LLM) is optimized within the discrete natural language space, and an optimizer LLM iteratively updates the parameters of the learner LLM. VML is more interpretable and adjustable than traditional machine learning as all components are characterized by human-understandable natural language." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-\tThe novelty and contribution seem marginal.
It is not surprising that LLM-based VML can address classical machine learning problems. Additionally, the core of VML relies on using two LLMs in a role-playing manner as a learner and an optimizer, updating LLMs through prompt engineering, which does not yield significant scientific insights.\n-\tI find it unclear how the millions and billions of parameters of a language model are represented by natural language. While prompting is an effective way to optimize a model’s output, I do not agree that optimizing model parameters is effectively a prompt optimization problem (Line 161). For example, in Figure 2, the prompt serves more as an additional input to the model rather than representing model parameters. In other words, the prompt adds new contextual information to the input without necessarily changing the model itself.\n-\tThis framework only functions with LLMs that are adept at following instructions, which limits its applicability to smaller or non-instruction-based models.\n-\tThe framework has been tested on regression and classification tasks, which traditional machine learning models can handle quite well. It is unclear to me why it is worth applying this LLM-based VML framework, especially considering the potential costs and computational expense." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Interpretability is claimed as a benefit of the VML framework, but I am uncertain about the actual benefit. For example, Line 515-516 \"validated by medical professional\". My confusion is: what does it take for a professional to inspect the \"model parameter\"? If the model parameter is document-long, the practitioner might be better off not using the system, right?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The illustrations and mathematical formulation are very clear." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors propose to frame conventional machine learning with LLMs as Verbalized Machine Learning: casting the \"model parameters\" to be the input language to the modeling LLM, and the \"parameter update\" as language feedback from an optimizer LLM. The major advantages of VML include (1) easy encoding of inductive bias; (2) automatic model class selection; and (3) interpretable learner updates." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* I do not understand the additional benefit (beyond iterative prompting methods, e.g., [3], or retrieval-augmented generator models) provided by this general framework. For example,\n\n1. Using LLMs to solve basic numerical computation has been shown to be hard by previous work [1]. Therefore, an LLM doesn't seem to be a fit for more advanced calculations like regression.
Also, the \"classical ML problems\" in the paper, like regression models, are very simple cases, and I do not believe such observations can be extrapolated to more advanced ML settings.\n\n2. Figure 6: couldn't a conventional decision tree achieve the same level of predictive power and interpretability? The VML framework also breaks down if the LLM fails to follow instructions faithfully at any point while reading the \"model parameter\".\n\n3. The authors claim VML brings the benefit of encoding human-interpretable priors, but the examples given are a bit simple, like \"Prior: The input is X-ray image for identifying pneumonia.\" (Figure 9). And the benefit of such a prior seems to be unstable (Figure 9a). Have the authors tried experimenting with more complicated priors, or shown a more stable benefit over \"without prior\"?\n\n* Such a framework seems to make very strong assumptions --- the optimizer LLM is essentially treated as an oracle machine, and the learner needs to be a very strong LLM that is faithful to its input. However, LLMs have been shown to be brittle at explanation [2].
If such a strong assumption is put on the backbone model, the same level of effort (to get such a strong LLM) could be spent on an alternative framework (one unifying various approaches, e.g., a better decision tree or other interpretable classifiers) to achieve the same desiderata claimed by VML; and the \"conventional ML\" models from such an alternative framework could run with much higher efficiency than serving an LLM.\n\n\n\n[1]: Faith and Fate: Limits of Transformers on Compositionality\n[2]: The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning\n[3]: Learning to Retrieve Iteratively for In-Context Learning" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "For the training dynamics plots across model scales, i.e. Fig 10 a), it'd be clearer if the authors changed the x-axis to FLOPs instead of the step count." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This paper explores the potential of using LLMs in optimization and learning. The perspective and ideas are novel.\n- Overall the paper is well written and conveys the key ideas clearly with a good amount of detail.\n- The experiments that support the claims are thorough and well designed."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new framework that uses LLMs as the backbone of machine learning tasks. The authors illustrated and verified the ideas using several simple ML tasks, and demonstrated that the approach is feasible. Furthermore, the authors also conducted ablations to show that the method is more effective than mere prompt optimization." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- To make a strong argument for the generality of this new framework, the paper would need more results to show how practical it is, such as empirical results measuring the efficiency of this approach. I'd also expect the paper to discuss the scalability of this approach, e.g. how it scales with the amount of training data and the size of the LLM being used. Given that this approach is not suitable for all or most machine learning tasks, the paper should carve out a concrete area of tasks in which the proposed approach shines. \n- Since the major differentiator from prior work, e.g. Yang et al., is the function-approximation view of the LLM, the paper should provide some theoretical analysis of what functions could and could not be approximated, which properties of the LLM architecture pose constraints, etc." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weakness" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The paper studies a general and timely problem.\n\nThe main experiments cover multiple classical ML problems. The paper also presents some pilot results on image classification problems beyond language tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Verbalized Machine Learning (VML), a framework that uses natural language (text prompts) as the parameter space for machine learning models. VML represents model parameters as text prompts, and uses an optimizer LLM to improve the prompts of the learner LLM. The paper mainly tests the proposed framework on a series of classical ML problems such as linear regression and polynomial regression, and shows the proposed framework can enable learning textual prompts for these problems." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Overall, regarding technical contributions, the paper does not sufficiently differentiate the proposed technique from existing prompt optimization approaches and prior work using LLMs as optimizers. At the same time, the paper does not provide enough comparison against these baseline approaches. Specifically:\n\nThe proposed approach is essentially optimizing textual prompts for a learner LLM using an optimizer LLM (as acknowledged in line 160 of the paper). This has been explored in previous prompt optimization work (Pryzant et al., 2023; Yang et al., 2023; Yuksekgonul et al., 2024; Zhang et al., 2024).
The paper states the difference as \"prompt optimization seeks a generic instruction tuning without changing the original meaning.\" (line 233).\nI am not sure I can buy this claim; modern prompt optimization techniques do make more specific edits to the prompts (e.g., see the examples in the appendices of Yang et al., 23 and Zhang et al., 2024).\n\nMoreover, the paper lacks experimental comparisons against these related approaches. The evaluation mainly focuses on regression tasks. The only comparison with prior prompt optimization work is in Section 4.6, which provides a qualitative comparison against APE on text classification. (It's also worth noting that APE is primarily a search-based approach without LLM optimizers, making it a less relevant baseline.) In order to clearly show the difference of VML from prior approaches, it is necessary to compare to up-to-date prompt optimization approaches (Pryzant et al., 2023; Yang et al., 2023; Yuksekgonul et al., 2024; Zhang et al., 2024) across a broader range of tasks under a controlled computational budget setting.\n\n\nLarge Language Models as Optimizers (Yang et al., 23)\n\nAutomatic Prompt Optimization with “Gradient Descent” and Beam Search (Pryzant et al., 23)\n\nTextGrad: Automatic \"Differentiation\" via Text (Yuksekgonul et al., 24)\n\nIn-Context Principle Learning from Mistakes (Zhang et al., 24)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Please see weakness" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "* Good presentation\n* VML is interesting and provides a new perspective on machine learning.\n* This paper investigates two important problems in machine learning: classification and regression." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Verbalized Machine Learning (VML), a framework that leverages large language models (LLMs) to address machine learning problems by constraining the parameter space to human-interpretable natural language. VML treats the LLM's text prompt as model parameters, enabling the model to be optimized over a discrete, sequential, and interpretable space. The framework offers advantages such as easy encoding of inductive bias, automatic model class selection, and interpretable learner updates. The paper empirically evaluates VML's effectiveness on various tasks, including regression, classification, and medical image classification, demonstrating its potential for enhancing interpretability and trustworthiness in machine learning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* Lack of discussion and comparison with [1]. [1] introduces agent symbolic learning, a systematic framework that enables language agents to optimize themselves on their own in a data-centric way using symbolic optimizers.\n* I found that the training loss is not stable in Figure 10. Is this common in VML?
\n\n\n\n\n\n------------------------------------\n[1] Symbolic Learning Enables Self-Evolving Agents. https://arxiv.org/abs/2406.18532" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024verbalized,\ntitle={Verbalized Machine Learning: Revisiting Machine Learning with Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vFgmobsJiZ},\nnote={under review}\n}" }, "abstract": { "value": "Motivated by the large progress made by large language models (LLMs), we introduce the framework of verbalized machine learning (VML). In contrast to conventional machine learning models that are typically optimized over a continuous parameter space, VML constrains the parameter space to be human-interpretable natural language. Such a constraint leads to a new perspective of function approximation, where an LLM with a text prompt can be viewed as a function parameterized by the text prompt. Guided by this perspective, we revisit classical machine learning problems, such as regression and classification, and find that these problems can be solved by an LLM-parameterized learner and optimizer. The major advantages of VML include (1) easy encoding of inductive bias: prior knowledge about the problem and hypothesis class can be encoded in natural language and fed into the LLM-parameterized learner; (2) automatic model class selection: the optimizer can automatically select a concrete model class based on data and verbalized prior knowledge, and it can update the model class during training; and (3) interpretable learner updates: the LLM-parameterized optimizer can provide explanations for why each learner update is performed. We conduct several studies to empirically evaluate the effectiveness of VML, and hope that VML can serve as a stepping stone to stronger interpretability and trustworthiness in ML." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/efe40ed070ad96b49161f87cd8596f6282fd2c72.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Verbalized Machine Learning: Revisiting Machine Learning with Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vG123yHVVl
Synthesizing Physical Backdoor Datasets: An Automated Framework Leveraging Deep Generative Models
main
Active
Backdoor Attacks;Physical Backdoor Attacks;Data Synthesis;Automated Framework
alignment, fairness, safety, privacy, and societal considerations
3;5;5;6
5;4;4;2
2;2;3;3
2;2;3;3
1;3;3;3
4.75
3.75
2.5
2.5
2.5
-0.894737
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "This paper plagiarized our work presented on arXiv , “Robust Backdoor Attack with Visible, Semantic, Sample-Specific, and Compatible Triggers” (v1 and v2). \nWe discovered the authors' plagiarism in December 2023. Their paper published on arXiv on Dec6, 2023 [1] is highly identical to the version 1 [3] and 2 [4] of our paper.\n\nSimilarities exist in almost all parts of the proposed method:\n\n1. **Framework** Structure: Both papers employ an identical framework comprising ‘trigger selection,’ ‘trigger generation,’ and ‘quality assessment and regeneration.’ The figure of framework in both manuscripts are remarkably alike.\n\n2. Use of LLM for **Trigger Selection**: In their initial version [1], they used LLaVA and GPT-4 for trigger selection, similar to our method. (Their ICLR submission only mentions LLaVA this time, maybe it's to create some differences with us.)\n\n3. Diffusion-based **Trigger Generation**: Our paper utilizes a diffusion-based text-guided image editing method for trigger insertion and mentions that this module's efficacy is expected to improve with advancements in image editing technology. Their paper also mentioned this approach and purpose.\n\n4. **Quality Assessment and Regeneration**: We introduced quality assessment of generated images, with a procedure for regenerating images that do not meet standards. Their paper employs an identical process.\n\nIn our papers, we emphasized that each module is designed with flexibility, allowing replacement by cutting-edge technologies. Substantially, their methods are exceedingly similar to ours and have many similarities in writing as well.\n\nIt is worth noting that the corresponding author and one co-author of this paper admitted to being the Area Chair and Reviewer for our submission to NeurIPS 2023, respectively. 
So, it's impossible that they haven't seen our paper.\n\nUpon uncovering these similarities, we engaged in nearly 50 days of communication with the authors of [1], both face-to-face and via email. They admitted that they began discussing and implementing our framework shortly after seeing our submission in the review stage. Finally, they withdrew their submission from CVPR 2024 and cited our paper in [2]. Because of the extensive similarity, they had to mention our work 15 times in [2].\n\nHowever, in their current ICLR Submission13749, they have removed all the discussions related to our work but retained all the plagiarized sections and didn't mention any \"inspiration\" from our paper.\n\nFor a clearer illustration, we have listed the related evidence in the attachment, including a timeline, some similar parts, the comparison of two framework figures, and content about our paper that are deleted in this submission version. You can also compare the 2nd version of our paper [4] with Submission13749 to find the similarity, and compare Submission13749 with their arXiv version [2] to find the removed parts.\n\n[1] Yang, S.J., La, C.D., Nguyen, Q.H., Bagdasaryan, E., Wong, K.S., Tran, A.T., Chan, C.S. and Doan, K.D., Synthesizing Physical Backdoor Datasets: An Automated Framework Leveraging Deep Generative Models. arXiv preprint arXiv: 2312.03419v1, 2023.\n\n[2] Yang, S.J., La, C.D., Nguyen, Q.H., Bagdasaryan, E., Wong, K.S., Tran, A.T., Chan, C.S. and Doan, K.D., Synthesizing Physical Backdoor Datasets: An Automated Framework Leveraging Deep Generative Models. arXiv preprint arXiv: 2312.03419v3, 2023.\n\n[3] Wang, R., Chen, H., Zhu, Z., Liu, L., Zhang, Y., Fan, Y. and Wu, B., Robust backdoor attack with visible, semantic, sample-specific, and compatible triggers. arXiv preprint arXiv:2306.00816v1, 2023.\n\n[4] Wang, R., Chen, H., Zhu, Z., Liu, L., Zhang, Y., Fan, Y. and Wu, B., Robust backdoor attack with visible, semantic, sample-specific, and compatible triggers. 
arXiv preprint arXiv:2306.00816v2, 2023.\n\nSupporting materials: https://drive.google.com/file/d/1JfMx36zTRc82KXswh4Ww1zltiRUWsajO/view?usp=sharing" }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": { "value": "Report on plagiarism" }, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Creating physical backdoor datasets is an interesting topic and can make contributions to related work.\n- Detailed discussion on improvements over existing techniques." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a framework that can synthesize physical backdoor datasets. The framework consists of three modules: a trigger suggestion module that recommends suitable physical objects as triggers, a trigger generation module that creates or edits images to contain these triggers using advanced generative models, and a poison selection module that filters for the most natural-looking results. The paper demonstrates that the framework can produce datasets that achieve high attack success rates in real-world scenarios while maintaining similar properties to manually collected physical backdoor attack datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper does not explain how the models in each module are trained. Also, the inputs and outputs of each step are not clear. As I understand it, in step 1 (trigger selection), the final output is a trigger. The model in step 2 then tries to attach the trigger to an image to generate the Trojan dataset. However, it seems that triggers need to be specified when training the model. So, how can this model be generalized to different triggers? Also, it is not clear what the training data is.\n\n- From the experimental results so far, it is difficult to evaluate the realism of the generated dataset. It might be helpful to provide some generated examples.\n\n- When specifying triggers, let's say “a car”. trigger generation may generate different cars based on its own understanding. Discussing how to ensure consistency of triggers and minimize the impact on relevant benign samples could be helpful." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Could you carefully justify the technical contribution of this framework?\nPlease discuss the customizability of the framework as pointed out in the weaknesses of the paper.\nPlease add more examples and comparisons of the generated poisoned images." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Overall, this is good work that makes physical backdoor attack research more accessible by providing a framework to generate datasets, which is usually the most tedious part.\n\nThe three-phase design makes good sense, and the results are considered comprehensive, as many aspects have been discussed, such as the common accuracy on clean inputs, attack success rate, as well as resilience, saliency heatmap, and dataset entropy." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an automated framework for generating physical backdoor datasets using generative models to make physical backdoor attacks more accessible. 
The framework has three modules: \n\nTrigger Suggestion - Uses VQA models to suggest suitable physical triggers\nTrigger Generation - Creates poisoned samples by generating or editing images\nPoison Selection - Uses ImageReward to select the best poisoned images" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My major concern is that the design of the framework appears to be a straightforward combination of a few existing solutions, i.e., a pretrained VQA model for trigger suggestion, stable diffusion or instruct diffusion for trigger generation/editing, and ImageReward for final poisoned data selection. Additionally, there is limited to no further customization or modification of these existing works to make them more integrated or collaborative. Therefore, the innovative contribution of this paper is very limited.\n\nIt also seems that we have limited control over the framework in terms of poisoned data generation. For some complicated datasets, it may be difficult to precisely control the size, type, or position of the trigger. This functionality is critical for certain tasks, given the diverse settings in the physical world. It would be beneficial to discuss this aspect in the paper and potentially incorporate it into the framework design.\n\nAs a physical backdoor dataset generator, it is more important to compare the quality of the generated images to real images, poisoned images from other related works (semantic trigger backdoor attacks have been around for quite some time), and edited images using traditional methods such as Photoshop. MORE evidence and examples need to be provided, especially in the appendix." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Are there any experiments conducted on physical devices? As far as I can see, the authors use the devices in Appendix B to only build the dataset. Is it possible to also apply the trained model (by the poisoned dataset) to a physical device for the classification task?\n\n- As the paper offers a toolkit for backdoor studies, do the authors consider open-source their code?\n\n- The authors aim to provide an \"effortless\" framework. Are there any results about time consumption?\n\n\n\ntypo: \"thsi\" (line 104)" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The built dataset contains physical data captured by various devices.\n- The framework automatically chooses the most suitable trigger, which is usually not considered by previous works.\n- It is easy to follow the presentation of the paper." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a framework to generate poisoned backdoor datasets, which consists of three components: 1) trigger suggestion, 2) trigger generation, and 3) poison selection. The motivation is to provide a more practical, generalized, and automated framework. 
The advantage is that this paper takes into account physical images captured from various devices." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Lower ASR even compared to clean label attacks, such as LC [1] and Narcissus [2]. The authors need to explain if there are any challenges behind the lower ASR.\n- Evaluated by very old defenses. The attack in this paper should also consider recent defenses, such as BTI-DBF [3] and IBD-PSC [4].\n- Only one dataset (5 classes is very small) and one small architecture. Considering larger datasets, such as a subset of ImageNet with 100 classes but fewer samples in each class. \n- According to Figure 1, the VQA model is a part of the \"trigger suggestion\" component, so it is not an individual contribution.\n- The motivation is not clear to me. For example, in section 3, the authors mention the previous method only works in multi-label settings, but the experiments in this paper are also conducted on a multi-label (5-class) dataset. It looks like this paper does not solve the problem raised. It would be better if the authors could clarify how their framework addresses the limitations of previous methods.\n\n[1] Label-Consistent Backdoor Attacks\n\n[2] Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information\n\n[3] Towards Reliable and Efficient Backdoor Trigger Inversion via Decoupling Benign Features.\n\n[4] IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "The major challenge of using benign features as triggers is that clean training datasets might already contain these features (such as books), leading to conflicts between the trigger pattern and benign features, potentially hindering backdoor learning. I am very interested to know how the authors have approached and tried to resolve this issue.\n\nCode implementation: I would like to see the code made open source\n\nIn conclusion, I think using natural objects as triggers is a good idea, but the challenges lie in addressing the potential conflict between benign features and the trigger, which could cause the backdoor training to fail (e.g., a significant drop in clean ACC)." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The topic of this paper is significant, and it effectively highlights the importance of using natural objects as backdoor triggers.\n2. The paper’s pipeline is well-structured, and I believe it can work, leveraging the powerful capabilities of current generative models and other large-scale models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a framework for generating physical backdoor datasets using advances in generative modeling. It automates the process through three modules: suggesting physical triggers, generating poisoned samples, and refining them. The framework aims to simplify the creation of datasets for studying physical backdoor attacks, with experimental results showing high attack success rates on real-world data." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.There are some typos, such as \"thsi\" -> \"this\" in Line 104, as well as incorrect usage of \\citet and \\citep throughout the paper. Additionally, I couldn't quickly grasp the intended meaning of Fig. 2, as the explanation is unclear and lacks a detailed diagram.\n\n\n2. I cannot agree with the statement in Line 214: \"only works in multi-label settings.\" In fact, the key ideas from Wenger et al. (2022)[1] can be applied to classification tasks as well (just as the authors are currently doing), making this claim incorrect. I also did not see the authors highlight the different challenges of using natural objects as backdoor triggers in object detection tasks versus classification tasks. Furthermore, Zhang et al. (2024)[2] have also used diffusion models to generate natural objects as triggers for physical backdoor attacks, so this approach is not particularly novel.\n\n3. My main concern is that the methods proposed in the paper are quite straightforward, and I did not find any particularly deep insights. Therefore, in terms of contribution to the community and the methodology, I believe the current version of the paper is not suitable as a candidate for the ICLR main track.\n\n\n[1] Wenger, Emily, et al. \"Finding naturally occurring physical backdoors in image datasets.\" NeurIPS 2022.\n\n[2] Zhang, Hangtao, et al. \"Detector collapse: Backdooring object detection to catastrophic overload or blindness.\" IJCAI 2024." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024synthesizing,\ntitle={Synthesizing Physical Backdoor Datasets: An Automated Framework Leveraging Deep Generative Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vG123yHVVl},\nnote={under review}\n}" }, "abstract": { "value": "Backdoor attacks, representing an emerging threat to the integrity of deep neural networks, have garnered significant attention due to their ability to compromise deep learning systems clandestinely. \nWhile numerous backdoor attacks occur within the digital realm, their practical implementation in real-world prediction systems remains limited and vulnerable to disturbances in the physical world. \nConsequently, this limitation has given rise to the development of physical backdoor attacks, where trigger objects manifest as physical entities within the real world. \nHowever, creating the requisite dataset to train or evaluate a physical backdoor model is a daunting task, limiting the backdoor researchers and practitioners from studying such physical attack scenarios. This paper unleashes a framework that empowers backdoor researchers to effortlessly create a malicious, physical backdoor dataset based on advances in generative modeling. Particularly, this framework involves 3 automatic modules: suggesting the suitable physical triggers, generating the poisoned candidate samples (either by synthesizing new samples or editing existing clean samples), and finally refining for the most plausible ones. As such, it effectively mitigates the perceived complexity associated with creating a physical backdoor dataset, transforming it from a daunting task into an attainable objective. 
Extensive experiment results show that datasets created by our framework enable researchers to achieve an impressive attack success rate on real physical world data and exhibit similar properties compared to previous physical backdoor attack studies. This paper offers researchers a valuable toolkit for studies of physical backdoors, all within the confines of their laboratories." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Backdoor Attacks", "Physical Backdoor Attacks", "Data Synthesis", "Automated Framework" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e0cbcc3c7947f47da0cc29fba52e4fcad67c7299.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Synthesizing Physical Backdoor Datasets: An Automated Framework Leveraging Deep Generative Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vG9dVXwXQV
Pre-Trained Vision-Language Model Selection and Reuse for Downstream Tasks
main
Active
Vision-Langage Model; Model Selection; Model Reuse
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;6;6
4;4;5
1;3;3
2;3;3
2;3;3
5
4.333333
2.333333
2.666667
2.666667
0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to the questions in the weakness." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The proposed method is easy to understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Selecting the best-performing pre-trained Vision-Language Models (VLMs) for a specific downstream task is challenging since no single VLM can achieve promising performance on all downstream tasks, and evaluating all available VLMs is impossible due to time and data limitations. \nTo address this problem, this paper proposes a novel paradigm to select and reuse VLM for downstream tasks, called Model Label Learning (MLL). \nThe proposal contains three key modules: model labeling, which assigns labels to each VLM to describe their specialty and utility; model selection, which matches the requirements of the target task with model labels; and model reuse, which applies selected VLMs to the target task in an ensemble manner. \nThe proposal is highly computationally efficient and growable since the model labeling process is completed target task independent and the ability could grow with the number of candidate VLMs." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The novelty of the proposed method is weak. The main contribution of this paper is the model selection when ensembling multiple VLMs. However, there is no discussion or experimental analysis of the selected models during this process. Showing which models are selected would give readers a hint about the proposed method's characteristics and advantages.\n- The analysis in this paper is too simple. After the model selection, what models are selected? As the main contribution is the model selection, the authors should show the selected models to understand the proposed method's characteristics and advantages.\n- As a design choice analysis, the authors only tried K values of 1 and 3. Although finding the best hyper-parameter is essential, why didn’t the authors try other values for K? The number of selected models K is more important than the size of the model hub.\n- Other essential design choice analyses are also missing. For example, in Eqn 8, why did the authors give high loss weight to models with high entropy? Is it the best choice of the weight values? Also, in Eqn 7, how is the hyper-parameter alpha decided, and how does it affect the model's performance?\n- More importantly, the comparison with recent models is missing. There are several ways to improve VLMs without training, at least with the improved prompt-based approaches [1,2]. The authors should show the advantages of ensembling the models instead of the existing ways of improving VLMs. Also, ensembling models increases the number of total parameters. The authors should analyze the efficiency of the model ensemble compared to the existing approaches.\n\n[1] Visual Classification via Description from Large Language Models. ICLR 2023.\n\n[2] What does a platypus look like? Generating customized prompts for zero-shot image classification. ICCV 2023." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the weakness section." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "**[New perspective]** This work focuses on the selection and reuse of pre-trained VLMs to better suit the need of specific downstream tasks, which is novel and practical.\n\n**[Good presentation]** This paper is well-written, making it easy to follow.\n\n**[Thorough evaluation]** Extensive experiments have been done to evaluate the effectiveness of the proposed strategy." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Model Label Learning (MLL), a new approach for selecting and repurposing Vision-Language Models (VLMs) for downstream tasks. It comprises three main components: model labeling to categorize VLMs by their expertise, model selection to align VLMs with task requirements, and model reuse to integrate chosen VLMs into an ensemble for task application." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**[Need more explanation]** \n- In Figure 1, the details of the evaluated VLMs are missed. 
Please add this information to the caption for better understanding.\n- The paper misses an introduction of the ImageNet Baseline (INB). Is it the best-performing model on ImageNet, i.e., EVA02-E-14?\n\n**[Could be improved]** \n- In line 245, this work randomly selects images $X_v$ from sample datasets to serve as representations for each node. Is there a more elegant solution for this, e.g., using the mean of several samples from the same class?\n- For model reuse, the work selects top-k models with a simple ensemble approach. It would be nice to discuss or compare more advanced ensemble approaches in VLMs, e.g., “Beyond Sole Strength: Customized Ensembles for Generalized Vision-Language Models, ICML 2024”.\n\n**[Experiments]** \n- In Table 1, both INB and ModelGPT use the best-performing single model alone for evaluation. It would be nice to leverage them to select more models for ensemble prediction when comparing the proposed method with the 3-model ensemble. For example, the authors can select the top-3 models on ImageNet for INB and do similar things for ModelGPT. Including this comparison would enhance the understanding of the effectiveness of the proposed method.\n- Since the proposed MLL introduces three procedures, each costing extra time, could the authors provide the additional time introduced? This could offer insights into the trade-off between performance and time." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In addition to the points listed in weakness, the VLMs in the current model hub are primarily designed for image classification tasks. Have the authors considered expanding the proposed pipeline to accommodate more complex tasks, such as segmentation?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The problem explored in this work is practical and meaningful. The proposed MLL framework provides an efficient way to select and reuse VLMs by leveraging a semantic graph and task-specific labels.\n\n2. The method demonstrates good scalability. The use of a semantic graph allows MLL to expand as new models or tasks are added, making it adaptable to diverse visual tasks.\n\n3. The paper is well-organized and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper explores a practical VLM reuse problem and proposes Model Label Learning (MLL), which efficiently selects and reuses pre-trained Vision-Language Models (VLMs) for downstream tasks. The framework consists of three modules: model labeling, which assigns labels to VLMs based on their capabilities; model selection, which matches these labels to task requirements; and model reuse, which employs an ensemble of selected models. In addition, a large-scale benchmark, including 49 VLMs and 17 datasets, is introduced to evaluate MLL’s effectiveness, with experimental results showing promising scalability and effectiveness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
Regarding the scalability of the constructed semantic graph, if new nodes are added to the graph, is it necessary to add images to the sampled dataset to represent these new nodes? Additionally, have the authors considered using different datasets as the sampled dataset? If so, would different datasets impact the final performance?\n\n2. For each target dataset, the highest performance achieved by any model in the model hub should also be included as a baseline result. This would help evaluate the effectiveness of the proposed method in selecting models.\n\n3. As K is a core hyperparameter, more experiments analyzing its impact should be included, as the paper currently only presents results for K=1 and K=3. A more comprehensive analysis of K, including performance and computational cost at various values, is suggested." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024pretrained,\ntitle={Pre-Trained Vision-Language Model Selection and Reuse for Downstream Tasks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vG9dVXwXQV},\nnote={under review}\n}" }, "abstract": { "value": "Pre-trained Vision-Language Models (VLMs) are becoming increasingly popular across various visual tasks, and several open-sourced VLM variants have been released. However, selecting the best-performing pre-trained VLM for a specific downstream task is challenging since no single VLM can achieve promising performance on all downstream tasks, and evaluating all available VLMs is impossible due to time and data limitations. To address this problem, this paper proposes a novel paradigm to select and reuse VLM for downstream tasks, called Model Label Learning (MLL). 
The proposal contains three key modules: \\emph{model labeling}, which assigns labels to each VLM to describe their specialty and utility; \\emph{model selection}, which matches the requirements of the target task with model labels; and \\emph{model reuse}, which applies selected VLMs to the target task in an ensemble manner. The proposal is highly computationally efficient and growable since the model labeling process is completed independently of the target task, and its ability can grow with the number of candidate VLMs. We also introduce a new benchmark for evaluating VLM selection methods, including 49 VLMs and 17 target task datasets. Experimental results clearly demonstrate the effectiveness of the proposed method for selecting and reusing VLMs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Vision-Language Model; Model Selection; Model Reuse" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b165dc48a44bb4ec5000295d400dbc3a8b1b0c5d.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/aa38086d2bd499f27b392969f2582b5cff2a7cdc.zip" }, "title": { "value": "Pre-Trained Vision-Language Model Selection and Reuse for Downstream Tasks" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vHO9mU87dc
ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference
main
Active
Long-Context LLM Inference;KV Cache Optimization
foundation or frontier models, including LLMs
3;5;5;8
4;3;3;3
2;3;2;3
3;3;3;3
1;3;3;3
5.25
3.25
2.5
3
2.5
-0.727607
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "As listed in the weakness part." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper is well-organized and clearly written. The proposed method is well-motivated, addressing relevant challenges, and is supported by thorough analysis. The evaluation is comprehensive and robust, effectively substantiating the claims and demonstrating thoughtful considerations. Overall, this is a strong submission for ICLR." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a high throughput LLM inference system for long sentence length. It proposes a compression method to leverage the low rank property of key cache and improve the sparse attention method by accurate KV selection." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. As shown in Fig. 8, some downstream tasks, such as 'Frequent Words Extraction,' perform significantly worse with sparse KV enabled. A brief analysis of why this approach underperforms for these types of tasks would be helpful, as well as any potential solutions to address these limitations.\n2. 
The proposed solution is currently evaluated on an 8B model with a 128K sequence length. It would strengthen the paper to include an analysis of whether this approach scales effectively for larger models, such as a 70B model with an extremely long sequence length of 1M." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "My questions are listed in the weakness section." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "+ This work presents a very interesting observation on the KV cache: pre-RoPE key cache is exceptionally low-rank compared to post-RoPE key cache, value cache and KV projection weights. Built on this observation, the proposed SHADOWKV significantly reduces the memory footprint of the Key cache.\n+ The work also improves the previous sparse attention work including QUEST by introducing outlier KV cache.\n+ This work also implements the inference system and shows actual throughput improvement on the real world A100 GPUs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The increasing KV-cache poses great challenges to long-context LLM inference. This work presents a long-context LLM inference system, SHADOWKV. 
It decreases the memory footprint by storing the low-rank key cache and offloading the value cache to the CPU, and reduces the decoding latency by reconstructing the sparse KV pairs on-the-fly. Evaluations show that SHADOWKV supports up to 6x larger batch sizes and improves throughput by up to 3X on an A100 GPU while maintaining accuracy." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The proposed method is complex, including low-rank K cache, CPU offloaded V cache, outlier KV cache, and dynamic sparse attention. However, the ablation study on each component is missing.\n - In terms of model accuracy: \n - it is unclear how much accuracy improvement an extra outlier KV cache will bring.\n - previous work Quest uses Min-Max as landmark cache, ShadowKV adopts Mean as landmark cache. It is unclear how much accuracy improvement this change will bring.\n - In terms of efficiency:\n - Authors only show a rough prefilling latency breakdown in Figure 1(c). It is unclear how long it takes for computing the outlier cache (i.e., reduce, cosine-similarity, top-k, gather) in the prefilling stage, how long it takes for KV cache chunk selection (i.e., MatMul, Softmax, Max, TopK) in the decoding stage, and how long it takes for recomputing the K cache from the low-rank cache. These overheads seem to increase linearly with the context length. It would be better to see the efficiency breakdown of every part of the system under different context lengths (e.g., 128K, 256K, 512K).\n - it is unclear how SHADOWKV performs for extremely long contexts. For example, the authors evaluated on Llama-3-8B-1M but only with up to 128K context length.\n - this work lacks efficiency comparisons against the previous works LoKi, Quest, and MInference." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "* What exactly is the “sparse budget”?\n\n* How does SHADOWKV leverage the temporal locality of the KV cache?\n\n* What are the exact GPU memory savings of SHADOWKV? Including a quantitative discussion on GPU memory savings in the paper would be helpful.\n\n* How can SHADOWKV handle the KV cache for newly generated tokens, and what would be the impact of this?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* In-depth analysis on low-rank nature of key cache in comparison to value cache as well as weights.\n* Leveraging spatial locality of post-ROPE key cache for dynamic sparse attention appears both novel and effective.\n* Empirical results are impressive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces SHADOWKV, a CPU-offloading-based system for long-context LLM inference. SHADOWKV addresses GPU memory constraints by leveraging the low-rank property of the key cache, storing a low-rank pre-ROPE key cache on the GPU while offloading the value cache to the CPU. 
Additionally, SHADOWKV stores landmarks—mean values of post-ROPE keys for adjacent tokens—along with low-rank pre-ROPE key cache in GPU memory. During decoding, SHADOWKV utilizes these landmark keys to identify significant tokens, selectively recovering their keys and retrieving their values from the CPU. Evaluations on long-context benchmarks demonstrate that SHADOWKV maintains high accuracy while significantly reducing GPU memory consumption and inference latency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* Incomplete Descriptions\n\nThe term \"sparse budget\" is not clearly defined in the paper, which may lead to confusion. Additionally, while SHADOWKV claims to leverage the temporal locality of the KV cache to reduce computation and communication (by approximately 60%), it lacks any detailed explanations on what that feature is.\n\n* Handling Newly Generated Tokens\n\nWhile the paper says that it excludes the handling of newly generated tokens for simplicity, this issue is quite significant and should not be ignored. If not addressed, the KV cache for newly generated tokens could negate SHADOWKV’s key benefits of reduced GPU memory usage and lower inference latency, especially with long output sequences. Incorporating mechanisms to handle these tokens within SHADOWKV is essential, and the authors should evaluate and report on its impact on accuracy.\n\n* Lack of Comparison with Infinigen\n\nThe paper does not sufficiently compare SHADOWKV with Infinigen [1], a closely related work that similarly stores low-rank key cache in GPU memory, offloads the value cache to the CPU, and selectively fetches important values based on approximate attention scores. 
Although the paper briefly discusses Infinigen, given the significant similarities, a more in-depth comparison with Infinigen should be made in order to highlight the main differentiator of SHADOWKV.\n\n[1] Lee et al., \"InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management\", OSDI'24" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper tackles a significant problem in LLM inference by reducing GPU memory usage through a hybrid CPU-GPU cache system, enabling long-context LLMs to operate more efficiently without sacrificing accuracy.\n\nThe system shows impressive throughput gains, handling up to six times larger batch sizes compared to baseline methods, which could substantially impact real-world LLM deployment.\n\nSHADOWKV is tested on multiple models and benchmarks (e.g., Llama, GLM) across various tasks, demonstrating consistent performance improvements across all settings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces SHADOWKV, a novel system that improves inference throughput for long-context LLMs. 
The key challenge addressed is the increasing memory footprint of KV caches as sequence length increases, which slows down inference. SHADOWKV offers a solution by offloading the value cache to the CPU while keeping a low-rank key cache on the GPU, reducing memory consumption without sacrificing performance. By employing a method of KV selection for sparse attention, it boosts throughput and supports larger batch sizes, showing improvements of up to 3.04× throughput on the A100 GPU." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main issue is that, as far as I know, SVD is precision sensitive, but I didn't find any discussion about precision in the paper. My main question is what precision is used for ShadowKV and baselines. If you are using precision like FP16/FP32, my question is how does ShadowKV work on FP8/(FP8 & FP16) precision? If the precision is FP8, how does ShadowKV survive from precision-sensitive SVD?\n\nFor the speed evaluation, I only found the throughput (tokens/s), are there any experiments for the time of each operation separately?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "High-Throughput Long-Context LLM Inference System" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024shadowkv,\ntitle={Shadow{KV}: {KV} Cache in Shadows for High-Throughput Long-Context {LLM} Inference},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vHO9mU87dc},\nnote={under review}\n}" }, "abstract": { "value": "With the widespread deployment of long-context large language models (LLMs), there has been a growing demand for efficient support of high-throughput inference. 
However, as the key-value (KV) cache expands with the sequence length, the increasing memory footprint and the need to access it for each token generation both result in low throughput when serving long-context LLMs. While various dynamic sparse attention methods have been proposed to speed up inference while maintaining generation quality, they either fail to sufficiently reduce GPU memory consumption or introduce significant decoding latency by offloading the KV cache to the CPU. We present ShadowKV, a high-throughput long-context LLM inference system that stores the low-rank key cache and offloads the value cache to reduce the memory footprint for larger batch sizes and longer sequences. To minimize decoding latency, ShadowKV employs an accurate KV selection strategy that reconstructs minimal sparse KV pairs on-the-fly. By evaluating ShadowKV on a broad range of benchmarks, including RULER, LongBench, and Needle In A Haystack, and models like Llama-3.1-8B, Llama-3-8B-1M, GLM-4-9B-1M, Yi-9B-200K, Phi-3-Mini-128K, and Qwen2-7B-128K, we demonstrate that it can support up to 6$\\times$ larger batch sizes and boost throughput by up to 3.04$\\times$ on an A100 GPU without sacrificing accuracy, even surpassing the performance achievable with infinite batch size under the assumption of infinite GPU memory." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Long-Context LLM Inference", "KV Cache Optimization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/fb9f5ce9dad06f02a804746be94bf54852b8981e.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vI5cjHMzP4
Eligibility Traces for Confounding Robust Off-Policy Evaluation: A Causal Approach
main
Active
Causal Inference;Graphical Models
causal reasoning
3;5;6;8
4;3;3;4
2;3;3;3
2;2;3;4
3;2;3;4
5.5
3.5
2.75
2.75
3
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Refer to \"Weaknesses\"" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "As far as I can tell, the strengths of the paper include:\n\n* The development of the proposed causal TD and causal eligibility traces algorithms, which appear novel;\n* The proposed causal Bellman equations, which may provide practitioners with valuable insights into the value or Q-functions, especially in scenarios involving unmeasured confounders;\n* The main text of the paper is technically sound. I did not review the Appendix, but I did not spot any errors in the main text;\n* The writing is generally clear." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies causal reinforcement learning (RL), i.e., RL in the presence of unmeasured confounding. The author(s) introduces a causal temporal difference (TD) learning and a causal eligibility traces algorithm for off-policy evaluation in causal RL, which combine TD or eligibility traces with the partial identification bounds developed in the econometrics or causal inference literature. Theoretically, causal Bellman equations were introduced to bound the Q- or value functions. 
Empirically, the author(s) also conducted numerical experiments to investigate the finite sample performance of their proposed algorithm." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper suffers from two potential limitations:\n\n* First, the numerical example is overly simplified. It only considers a 3 by 3 Windy Gridworld example. Additionally, the author(s) only reported the performance of their proposed algorithm, but did not compare their proposal against existing state-of-the-art methods. Given the huge literature on RL and/or off-policy evaluation (OPE) in the presence of unmeasured confounders, a comparison with these established methods would be highly beneficial. Such a comparison could highlight the most effective approaches for various applications. For instance, the methods developed by Kallus and Zhou (2020) and Bruns-Smith & Zhou (2023) seem directly relevant to addressing similar OPE challenges with unmeasured confounders. Additionally, POMDP-based methods, which use the POMDP framework to model the unmeasured confounding problem—such as those by Tennenholtz et al. (2020), Nair and Jiang (2021), and Shi et al. (2022)—would also be pertinent to this setting.\n\n* Second, there is a lack of adequate discussion of the related literature. The last paragraph on Page 1 discusses the difference between this paper and other related works that use partial identification bounds. In particular, there is a line of work that \"requires to additional parametric assumptions about the system dynamics\". However, it would be better to detail this point later in the main text. What are the additional assumptions these papers imposed? How does your proposal avoid imposing these assumptions? For instance, the paper by Namkoong (2020) needs a single-time unmeasured confounding assumption (I do not think this is a \"parametric\" assumption), which could be explicitly mentioned. 
Additionally, the paper by Bruns-Smith & Zhou (2023) also developed robust Bellman operators using partial identification bounds. It would be better to clarify the difference between your proposal and theirs in detail. Moreover, the DGP mentioned on Page 2 suggests that you also rely on additional assumptions, more specifically, the memoryless unmeasured confounding assumption. It would be better to mention other related works that also rely on this assumption and discuss how to potentially verify this assumption in practice. Finally, as I mentioned, in addition to the use of partial identification methods, there are other methodologies, e.g., the POMDP-type methods, to handle unmeasured confounding. These works are relevant as well." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "## Major comments:\n1. The paper states that the proposed method relies on weaker assumptions than existing methods. In particular, the paper mentions that existing partial identification methods for off-policy evaluation rely on strong assumptions, including parametric assumptions about the system dynamics, model-based algorithms, and finite horizons. However, the settings considered in this paper actually rely on strong assumptions, including Markovness, finite action and state spaces, and bounded rewards. 
It would be great if the authors could review and compare other methods that consider the same settings, provided there are any.\n\n2. The experiments are conducted in the simple synthetic Windy Gridworld environment. It would be helpful if the authors could comment on real-world scenarios to which the proposed methods are applicable, such as healthcare or robotics. Experiments on real-world examples and comparisons with competing methods would further strengthen the paper.\n\n\n\n## Minor comments:\n\n1. Line 97 on page 2: \"represents\" --> \"as\"\n2. Line 101 on page 2: Better to mention the full name before using the abbreviation \"SCM\"\n3. Line 104 on page 2: Better to explicitly mention that $PA_V$ is the set of parents.\n4. Line 361: \"we\" --> \"We\"" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is original in effectively integrating standard off-policy methods into bounding the value function in causal reinforcement learning, and the proposed algorithms seem straightforward to implement. It creatively addresses the causal inconsistency assumption that is present in many real-world applications. The problem formulation is clear. The properties of the proposed algorithms are backed with theoretical guarantees." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies off-policy evaluation in reinforcement learning when unobserved confounding exists in the data such that the causal consistency assumption is violated. Under this scenario, the paper derives causal Bellman equations to bound the value function and Q-function under a target policy. Two algorithms using eligibility traces are proposed to estimate the bounds of the value and Q-functions in both online and offline settings." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The environment considered in the paper is a little too restrictive, with finite action and state spaces, and bounded rewards. \n\n2. The synthetic experiments are conducted in simple Windy Gridworld settings with a small action and state space." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How do the proposed algorithms perform on large-scale RL experiments? A direct difficulty in scaling up the algorithm is the need to solve $\\min_{s\\in\\mathcal{S}}V(s)$ and $\\max_{s\\in\\mathcal{S}}V(s)$ for some value estimate $V$.\n2. Although the true partial identification interval (defined through Theorem 1) gives valid upper and lower bounds on the true policy value, it seems that the resulting bound could be too optimistic or too pessimistic, since the observational data can be induced by an arbitrarily bad behavior policy (and sure this makes sense). Is it possible to provide further analysis of how the behavior policy influences the accuracy of the true partial identification interval? Or are there methods to avoid this kind of potential looseness under some circumstances?\n3. In addition to the asymptotic convergence results (Theorems 3 and 4), how does the behavior policy influence the convergence?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The problem of confounding in OPE studied by this paper is well motivated and is an important topic towards reliable and robust RL.\n2. The model-free approach of leveraging temporal difference learning and eligibility traces for partial identification in OPE is new and interesting.\n3. Theoretical results prove the convergence of the proposed algorithms to the partial identification interval given exact observational distributions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the challenges of off-policy evaluation in reinforcement learning (RL) when faced with confounded offline data and non-overlapping support between behavior and target policies. In such cases, traditional methods struggle to produce accurate value estimates due to unobserved confounders and the lack of common support, resulting in biased evaluations. The authors propose a novel model-free approach leveraging eligibility traces for partial identification of policy values that gives upper and lower bounds on the underlying true expected returns." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The experiments of the proposed methods are limited to simple synthetic setups. \n2. Lacking empirical comparisons with the extensive body of partial identification OPE methods as mentioned in the related work section." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses section." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "There is limited work on causality and handling unobserved confounders in RL, but this is an important problem, with possible practical applications. This work takes a step towards deepening our understanding of how we should think about RL in these settings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies off-policy evaluation with offline data in Markov Decision Processes, where the actions taken by the behavior policy may be affected by unobserved confounders, causing standard estimation techniques to fail. The authors propose a variant of the Bellman equation that takes this confounding into account, and show they are able to obtain a consistent estimate of it from data. They demonstrate their approach on a gridworld experiment." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The theoretical results, while a good first step, need further refinement to be convincing. 
In particular, the following aspects could be improved:\n\t* Theorem 1 and 2 give upper and lower bounds on the value and Q-value function in the confounded setting, but it is unclear how tight these bounds are. Can we obtain tighter upper and lower bounds, or is this the tightest possible? Is it possible to come up with a clean bound on the gap between the upper and lower bounds? Without answers to these questions, it is difficult to see how significant Theorem 1 and 2 are.\n\t* Theorem 3 and 4 are asymptotic consistency results. What are the finite-time properties of Algorithm 1? That is, for a fixed number of samples $n$, how small is the estimation error on the value function? While an asymptotic consistency result is nice, a more refined analysis of this is required in order to show how practical this approach is.\n\n2. The experimental results are limited to an extremely simple 3x3 grid world environment. Given the aforementioned shortcomings of the theoretical results, these experiments are not sufficient for illustrating the effectiveness of the proposed approach. More extensive experiments on more complex environments are necessary given the current theoretical results.\n\n3. Several notational issues. In particular, the $\\langle$, $\\rangle$ notation in Theorem 1 and 2 is not defined. I believe this is attempting to simultaneously state the upper and lower bounds, but unless I missed it, this was not stated. This should be clarified. It was also unclear and somewhat distracting why in Theorem 1 and Theorem 2 some of the font is blue.\n\n4. More practical justification for why this problem is important would help better motivate the paper.\n\n5. There are a variety of existing works on causality in bandits and RL that are not mentioned or cited here. See works [1]-[5] given below. These should be cited, and some discussion given of their relation to the current work.\n\n[1] Lattimore, Finnian, Tor Lattimore, and Mark D. Reid. 
\"Causal bandits: Learning good interventions via causal inference.\" Advances in Neural Information Processing Systems 29 (2016).\n\n[2] Lee, Sanghack, and Elias Bareinboim. \"Structural causal bandits: Where to intervene?\" Advances in Neural Information Processing Systems 31 (2018).\n\n[3] Lu, Yangyi, Amirhossein Meisami, and Ambuj Tewari. \"Causal bandits with unknown graph structure.\" Advances in Neural Information Processing Systems 34 (2021): 24817-24828.\n\n[4] Lu, Chaochao, Bernhard Schölkopf, and José Miguel Hernández-Lobato. \"Deconfounding reinforcement learning in observational settings.\" arXiv preprint arXiv:1812.10576 (2018).\n\n[5] Wang, Lingxiao, Zhuoran Yang, and Zhaoran Wang. \"Provably efficient causal reinforcement learning with confounded observational data.\" Advances in Neural Information Processing Systems 34 (2021): 21164-21175." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper proposes two novel algorithms using eligibility traces that correctly bound value functions of a target policy from confounded observational data generated by a different behavior policy." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024eligibility,\ntitle={Eligibility Traces for Confounding Robust Off-Policy Evaluation: A Causal Approach},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vI5cjHMzP4},\nnote={under review}\n}" }, "abstract": { "value": "A unifying theme in Artificial Intelligence is learning an effective policy to control an agent in an unknown environment in order to optimize a certain performance measure. Off-policy methods can significantly improve the sample efficiency during training since they allow an agent to learn from observed trajectories generated by different behavior policies, without directly deploying the target policies in the underlying environment. 
This paper studies off-policy evaluation from biased offline data where (1) unobserved confounding bias cannot be ruled out a priori; or (2) the observed trajectories do not overlap with intended behaviors of the learner, i.e., the target and behavior policies do not share a common support. Specifically, we first extend Bellman's equation to derive effective closed-form bounds over value functions from the observational distribution contaminated with unobserved confounding and no-overlap. Second, we propose two novel algorithms that use eligibility traces to estimate these bounds from finite observational data. Compared to other partial identification methods for off-policy evaluation in sequential environments, these methods are model-free and do not rely on additional parametric knowledge about the system dynamics in the underlying environment." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Causal Inference", "Graphical Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c01b12f22341b0387f79ec7816913db1025f4b64.pdf" }, "presentation": null, "primary_area": { "value": "causal reasoning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/46a99af354c935424ee386c40108144f01a33bc5.zip" }, "title": { "value": "Eligibility Traces for Confounding Robust Off-Policy Evaluation: A Causal Approach" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vIHmkF5rnC
Lower-level Duality Based Penalty Methods for Hyperparameter Optimization
main
Active
Bilevel Optimization;Hyperparameter Optimization;Nonsmooth Optimization
optimization
3;3;5;6
4;5;5;3
3;2;2;3
1;1;2;3
3;2;3;2
4.25
4.25
2.5
1.75
2.5
-0.522233
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "see weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is easy to follow, the tackled problem is hard, and the idea is interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper tackles the problem of solving bi-level optimization problems with composite non-smooth inner optimization problems, more specifically with inner problems of the form $\\min_x l(x) + \\sum_i^r \\lambda_i R_i(x)$ with $R_i$ defined as norms." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Lack of literature review: how does the paper compare to [1]?\n- Assumption 3.1 is **very very** strong, and to the best of my knowledge is not true for the $\\ell_1 + \\ell_2$-norm, for instance.\n- Following the latter question, how do you compute it in the sparse-group Lasso experiments?\n- Algorithm 2 has **a lot** of hyperparameters, how do you select\n$\\lambda^0, \\rho^0, r^0, \\beta, \\gamma, t$?\n\nExperiments:\n- what is the number of inner optimization steps to solve the problems in lines 3 and 4 in Algorithm 2?\n- overall, in my experience, it is hard to gain insight from this kind of experiment: each method is so sensitive to hyperparameter tuning. Short of a huge grid search to select the hyperparameters, I do not see how to properly compare all the methods (this is a recurrent problem in bilevel optimization)\n- overall the provided experiments are very limited, and experiments on real data are not provided in the main text. The data are not even described in the main text: this brings confusion to the reader.\n- In addition, it seems to me the real data has not been used correctly: in Appendix D.2 it is written \"The datasets we selected are Gisette (Guyon et al. (2004)) and sensit (Duarte & Hu (2004)). Following the data participation rule as Gao et al. (2022), we randomly extracted 50, 25 examples as training set 50, 25 examples as validation set, respectively; and the remaining for testing\", what does this mean exactly? Gisette on libsvm has a dedicated separate test set.\n\n\n\n[1] Mehmood, Sheheryar, and Peter Ochs. 
\"Differentiating the value function by using convex duality.\" International Conference on Artificial Intelligence and Statistics" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. In equation (5), why could the max operator be dropped to obtain the reformulation (6) for (2)? Is it an equivalent reformulation, or just done by intuition? In fact, by Fenchel’s inequality, the inequality constraint of (6) becomes an equality constraint. Explain it more clearly.\n\n2. The constraint of (6) is equivalent to (7) when $\\|\\rho_i\\|_*\\le\\lambda_i$, so why exclude the case of $\\|\\rho_i\\|_*>\\lambda_i$? Explain it more specifically.\n\nMinor:\n1. In equations (4) and (5), $\\mathcal{R}^*_i\\left(-\\frac{\\rho_i}{\\lambda_i}\\right)$ should be $\\mathcal{R}^*_i\\left(\\frac{\\rho_i}{\\lambda_i}\\right)$.\n\n2. In line 184, 2 should be (2). Similar typos arise in line 770.\n\n3. In the proof of Lemma 2.6, RHS of the (b) inequality: the sign “-” before “$\\min_{\\rho}$” should be “+”; RHS of the (c) equality: last “+” should be “-”." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The authors provide a novel penalty method based on lower-level duality, avoiding any implicit value functions and high-complexity subproblems.\n\n2. Two fully first-order algorithms based on proximal techniques and the alternating direction method of multipliers are proposed with theoretical proof of convergence and promising numerical experiments.\n\n3. The proposed algorithms do not rely on any open-source libraries or commercial optimization solvers." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses hyperparameter optimization (HO) in the context of nonsmooth regularizers by proposing a novel penalty method based on lower-level duality (LDPM), which avoids any implicit value functions and high-complexity subproblems. Under certain conditions, the penalized problem is shown to closely approximate the optimal solutions of the original HO. The authors introduce two fully first-order algorithms to solve the penalized problems and provide theoretical proof of their convergence. Numerical experiments demonstrate the efficiency and superiority of LDPM." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The strong convexity of $l(x)$ may hinder LDPM’s application to more general problems.\n\n2. The convergence results for the cases of multiple regularization terms, as detailed in Appendix B.5-Theorem B.2, do not specify the requisite number of iterations as outlined in Theorem 3.11." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "see above" }, "flag_for_ethics_review": { "value": [ "Yes, Research integrity issues (e.g., plagiarism, dual submission)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Given the concern above, I am not sure there is a chance I will consider changing my rating. But the least the authors could try to do is to put their work into the larger context of why these algorithms are needed, for instance when the MM one presented in their previous work does not apply or performs very poorly." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The reformulation of the bilevel problem into a penalised one whose solutions are very close to the ones of the original one is very interesting and surely the main point of the whole paper. Numerical validations are convincing." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this work the authors present two algorithms for solving bilevel optimization problems used to model the problem of hyperparameter estimation in several learning applications. The approach is based on the reformulation of the bilevel problem as a penalised problem, which is done by exploiting strong duality and conjugation techniques. 
The structure of the penalised problem is then used to design a proximal gradient algorithm (since the resulting constraints are prox-explicit) and an ADMM-type algorithm. Several numerical results are shown." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The key idea described above (which is good) has already been presented, in large part, in a paper written presumably by the same authors and with a significant amount of overlapping content with respect to this one.\n* He Chen, Haochen Xu, Rujun Jiang, Anthony Man-Cho So, \"Lower-level Duality Based Reformulation and Majorization Minimization Algorithm for Hyperparameter Optimization\", Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:784-792, 2024.\n\nI checked carefully and, despite the attempts of the authors to slightly change notations and, in some respect, the whole paper presentation, some paragraphs are almost repeated verbatim. This is the case, for instance, of the main and most interesting part of the paper covering the reformulation of the bilevel optimisation problem, which, as mentioned above, is an important and interesting contribution but, unfortunately, one covered and published elsewhere.\nThe rest of the paper is just the presentation of two algorithms different from the MM (LDMMA) one presented in the paper above. Similar numerical tests are performed (on elastic net and group lasso). The one on Group Lasso shows significant numerical improvements, while the other one shows only better test errors in comparison with LDMMA, which however enjoys better validation errors." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Prior work, such as [1,2], tackles bilevel problems with non-smooth lower-levels by introducing smooth lower-level algorithms. How does your method compare to these?\n- There are sign inconsistencies from (4) to (5), and it seems incorrect to move the max over $\\rho$ (which becomes min) from the inequality in (6) and merge it with the min over $x$ and $\\lambda$. It is then correct once the penalized objective is introduced. Do you concur?\n- Before Assumption 2.1, it is stated that “the validity of (6) depends on the following assumption.” This is debatable, as (6) should hold even without an explicit closed form. Could you clarify?\n\n[1] P. Ochs, R. Ranftl, T. Brox, and T. Pock. Techniques for gradient-based bilevel optimization with non-smooth lower level problems. Journal of Mathematical Imaging and Vision. 2016\n\n[2] J. Frecon, S. Salzo, and M. Pontil. Bilevel learning of the group lasso structure. NeurIPS 2018." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The work addresses a notable trend in bilevel optimization: converting bilevel problems into single-level formulations. 
The approach is particularly valuable for the challenging case where the lower-level problem lacks smoothness." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a penalty-based framework to reformulate certain bilevel optimization problems into single-level optimization tasks. The approach is supported by equivalence and convergence guarantees, providing a new perspective on handling bilevel structures." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the methodology is intriguing, the added value of this contribution is not entirely clear. For instance, consider the case of a single hyperparameter $\\lambda$ in a bilevel problem, such as (2), where cross-validation is required. The proposed reformulation introduces multiple variables encapsulated in $r$ as well as an additional hyperparameter $\\beta$, which also requires cross-validation. This merely shifts the bilevel challenge from $\\lambda$ to $\\beta$. Moreover, for problems like sparse group lasso and elastic net, understanding the impact of $\\lambda$ on $\\lambda \\mapsto x_\\lambda$ is arguably more intuitive than analyzing $\\beta$ on $r_\\beta$.\n\nThe experimental section lacks motivation regarding the selection of baseline algorithms, and more detailed analysis would enhance clarity. For instance, reporting hyperparameter estimates and solution support would help. Additionally, the figures need improvement for readability.\n\nFinally, certain imprecisions should be addressed to prevent misconceptions for unfamiliar readers (see examples in “Questions”)." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a first-order algorithm based on penalty methods for bilevel hyperparameter selection problems." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024lowerlevel,\ntitle={Lower-level Duality Based Penalty Methods for Hyperparameter Optimization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vIHmkF5rnC},\nnote={under review}\n}" }, "abstract": { "value": "Hyperparameter optimization (HO) is essential in machine learning and can be structured as a bilevel optimization. However, many existing algorithms designed for addressing nonsmooth lower-level problems involve solving sequential subproblems with high complexity. To tackle this challenge, we introduce penalty methods for solving HO based on strong duality between the lower level problem and its dual. We illustrate that the penalized problem closely approximates the optimal solutions of the original HO under certain conditions. In many real applications, the penalized problem is a weakly-convex objective with proximal-friendly constraints. Furthermore, we develop two fully first-order algorithms to solve the penalized problems. Theoretically, we prove the convergence of the proposed algorithms. We demonstrate the efficiency and superiority of our method across numerical experiments." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Bilevel Optimization", "Hyperparameter Optimization", "Nonsmooth Optimization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/11f1920e36ade7f5bea5ecb64f401f0c83db6bfb.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/015e79d910ad0732f95726d4102dd892e9a3bfa2.pdf" }, "title": { "value": "Lower-level Duality Based Penalty Methods for Hyperparameter Optimization" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vJ0axKTh7t
The Labyrinth of Links: Navigating the Associative Maze of Multi-modal LLMs
main
Active
Multi-modal LLM;Visual Reasoning;Association
datasets and benchmarks
3;5;6;6
4;5;3;5
2;2;3;3
2;3;3;3
1;2;3;2
5
4.25
2.5
2.75
2
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. I don't see a clear argument for sticking to the zero-shot setting. As there is a practice memory involved, is it also naturally similar to few-shot?\n2. Why does the study only focus on MLLMs? Is it because the object concept learning dataset is multi-modal initially? How hard is it to make a text-only corresponding benchmark?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "**originality**\nThe originality is good. The work proposes and will open-source a new benchmark for MLLMs, by transforming a previous ML benchmark into an LLM benchmark.\n\n**significance**\nIt shows all MLLMs have an obvious gap vs humans. In this sense, the benchmark is able to evaluate and push an overlooked capability towards AGI." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes a new benchmark testing the zero-shot association ability of MLLMs. Association originates from object concept learning, where the task is to connect observations with previous practice memory by identifying the underlying principle. For example, images of fresh apples, oranges, and vegetables could be connected through the adjective \"fresh\". It is a fundamental capability for humans. 
The proposed benchmark leverages previous datasets of object concept learning and is created in an annotation-free way. Basically, the labels in datasets of object concept learning directly provide the underlying principle (concept) that could connect objects.\n\nThe authors designed different settings to test MLLMs' zero-shot association ability: single-step vs multi-step, synchronous vs asynchronous. According to the reported results, all the leading MLLMs show a gap in terms of the association ability, compared to humans." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**quality**\nThere are two aspects in which improvements could be made. First, regarding error analysis, more insights are preferred. For example, by \"limited perception capability\", is it due to the limitation of the image encoder/resolution or something intrinsic to LLMs? Most public MLLMs are composed of a separate image encoder+adaptor and the main LLMs. Some ablation studies on this aspect are preferred. Second, checking the correlation between this new benchmark and existing benchmarks is preferred. For example, if the performance on this new benchmark is strongly correlated with a weighted sum of those on some existing benchmarks, we could better know how to improve such a capability. If there is no correlation, this work might point out an overlooked dimension, which could also motivate more related benchmarks to be created.\n\n**clarity**\nThe presentation could be improved a bit, by adding clearer examples. For example, Figure 7 in the context is better at explaining the exact task than Figure 1. And adding figures for the annotation-free transformation from object concept learning datasets can better explain the exact dataset building process." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. Could the authors explain the input and expected output of the single-step \"deduction\" task? Since the benchmark is claimed to evaluate MLLMs' ability in association prediction, whether deduction is part of the benchmark? In addition, could the authors explain the relation between the \"association\" and \"deduction\" tasks?\n\n2. For the results in Figure 3, since the human expert could achieve hundreds rounds of association, could the authors explain how the length of association is evaluated? Specifically, is the benchmark contains the ground truth of hundreds rounds of associations? Is each prediction on the next association necessarily requires all the previous memories? Are those MLLMs prompted with such long-term association or with only current inputs?\n\n3. What would be the prompting format for such tasks. Are those images concatenated as a single image or separately input into the prompt? Are those images interleaved with proper textual instructions, so that MLLMs could understand the intention correctly? In addition, is the proposed tasks formatted as multi-choice question-answering tasks, where generated answers could be well mapped to ground-truth?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper proposed a novel task and a perspective on MLLMs.\n\n2. Various MLLMs are tested on the benchmark." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed an evaluation benchmark, which evaluates multimodal large language models' performance on predicting association in three scenarios: single-step association, synchronized multi-step associations, and asynchronized multi-step associations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The definition of the task \"deduction\" in Table 1 could be better explained and illustrated like the \"association\" task in Figure. 2.\n\n2. It would be essential to include human performance in Table 1 to understand current gap between MLLMs and human. In addition, the authors might consider including more recent and powerful GPT-4o performance in Table 1.\n\n3. For Table 1, since MLLMs' performance on the association task is rather high (around 80\\% accuracy), there could be concerns about the potential improvement MLLMs can achieve for future works. To understand such, human performance might be a valuable baseline to refer to.\n\n4. The evaluation settings in the synchronous association task in Figure 3 could be difficult to understand. The authors might consider better explaining their inputs and expected outputs and their metric calculation as some pseudo-code.\n\n5. The performance comparison in Figure 3 is not very insightful, which might lacks proper explanations about why MLLMs are significantly inferior to human judgement. Such significant gap could raise concerns about the effectiveness of the proposed evaluation protocol.\n\n6. 
The illustration in Figure 6 is not very self-explanatory, and its textual explanation in the main text is also very brief, which could hinder readers' understanding of the major concerns about current MLLMs on such tasks." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. **Rationale for Focus**: What is the rationale behind selecting adjectives and verbs as the focal point for your semantic concepts in the association tasks?\n\n2. **Bias Mitigation**: Regarding equation (1), where \\(z_{ij}\\) is constrained to 0 or 1, could this introduce bias from human evaluations? How can this bias be effectively removed from the assessment?\n\n3. **Overlooked Capability**: Can you elaborate on why you believe the association ability of MLLMs has been overlooked in previous research?\n\n4. **Novelty Clarification**: How does your proposed benchmark differentiate itself from existing benchmarks assessing association and reasoning capabilities in language models?\n\n5. **Bridging Performance Gaps**: Given the performance gap observed, what advancements do you foresee as crucial for future MLLMs to approach human-level performance in association tasks?\n\n6. **Resource Considerations**: How can researchers with limited resources effectively utilize your benchmark, considering its resource-intensive nature?\n\n7. 
**Model Selection Criteria**: What criteria informed your selection of specific MLLMs for this study, and how do you view these models in the context of the current state of MLLM research?\n\n8. **Future Implications**: How do you envision your findings influencing the design of future MLLMs and the benchmarks used for their evaluation? What next steps do you recommend in this line of research?\n\n9. **Evolving MLLM Performance**: As MLLMs continue to evolve and improve in association tasks, how do you anticipate the relevance of your findings will change?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. **Originality**:\n - The benchmark specifically targeting association capabilities represents a unique contribution to the MLLM literature. While benchmarking is prevalent, focusing on association adds a novel dimension to the evaluation of language models.\n\n2. **Quality**:\n - The authors' annotation-free construction method for association tasks is a practical innovation that alleviates the common challenges associated with extensive manual data labeling, enhancing the quality and usability of the benchmark.\n\n3. **Clarity**:\n - The paper is well-structured and clearly articulated, with straightforward definitions and explanations of association tasks. This clarity facilitates comprehension and accessibility for a broad audience.\n \n4. **Significance**: \n - By highlighting the substantial gap between MLLM performance and human intelligence in association tasks, the paper underscores the importance of developing models that can better mimic human cognitive capabilities. This sets the stage for future research efforts in enhancing MLLMs." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a novel benchmark aimed at evaluating the association capabilities of Multi-modal Large Language Models (MLLMs), a crucial yet often overlooked aspect of human intelligence. The authors formulate a specific association task based on adjective and verb semantic concepts and introduce an innovative annotation-free method for constructing these tasks, minimizing the reliance on expensive data annotation. They implement a rigorous data refinement process to enhance dataset clarity and present three levels of association tasks: single-step, synchronous, and asynchronous. The investigation covers a wide range of MLLMs, including both open-source and closed-source models, exploring various memory strategies and involving human experts. Findings reveal a significant performance gap between current MLLMs, including state-of-the-art models like GPT-4V, and human capabilities in association tasks. It implies that this benchmark could advance future MLLM research." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Limited Novelty**:\n - The benchmarking of association capabilities is not entirely novel. Similar efforts can be found in previous research focusing on common sense reasoning, such as \"CommonSenseQA\" and \"HellaSwag,\" which also evaluate reasoning and associative capabilities.\n\n2. **Subjectivity in Tasks**:\n - Association tasks can be influenced by subjective interpretations, yet the paper does not sufficiently address how such subjectivity is mitigated. Discussion around the prompt design for MLLMs—especially concerning analogy tasks—could enhance the validity of the results.\n\n3. **Resource Intensiveness**:\n - The comprehensive nature of the study, involving multiple models and memory strategies, raises concerns regarding reproducibility. 
The resource demands may hinder wider adoption among researchers with limited computational resources.\n\n4. **Experimental Design**:\n - The rationale for selecting specific MLLMs could be more thoroughly explained, and comparisons with existing benchmarks would strengthen the justification for the new benchmark's necessity.\n\n5. **Discussion Depth**:\n - The discussion section could delve deeper into the implications of the findings, particularly regarding practical applications and theoretical contributions to AI and cognitive modeling.\n\n6. **Performance Gap Exploration**:\n - While the performance gap between MLLMs and human intelligence is significant, the paper could provide more in-depth analysis of the causes and potential paths toward bridging this gap.\n\n7. **Bias in Dataset Annotation**:\n - The paper lacks clarity on how biases in the initial dataset annotations are addressed, which could affect the robustness of the findings." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "How might the authors' findings on the associative capabilities of MLLMs influence the design of future models, particularly in terms of memory and reasoning architectures?\n\nAre there any specific areas of application where the authors believe the current gaps in associative capabilities are particularly problematic, and thus, warrant immediate attention in research?\n\nHow did you ensure that the limitations of the original datasets did not significantly affect the results?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper offers a fresh approach to assessing MLLMs by focusing on their associative abilities, which is a novel contribution to the field. The annotation-free construction method for association tasks is innovative and has the potential to simplify the creation of benchmarks in this area." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a benchmark, using an annotation-free construction method to transform general datasets. The benchmark includes three levels of association tasks: single-step, synchronous, and asynchronous associations. The authors conduct extensive experiments involving multiple open-source and closed-source MLLMs, including state-of-the-art models like GPT-4V, and compare their performance with human experts." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The evaluation is limited to MLLMs’ zero-shot ability in association tasks across adjectives and verb semantic concepts. 
This may not fully capture the complexity of real-world scenarios where MLLMs are expected to perform.\n\nWhile the paper analyzes failure cases in the association process, it could have provided a more in-depth understanding of why these failures occur. \n\nThe paper focuses on single-step, synchronous, and asynchronous associations, which are complex tasks. However, it might not fully capture the nuances of human associative learning, which often involves more gradual and contextually influenced processes." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "In this paper, we benchmark MLLM's ability on association tasks at various semantic concepts based on an annotation-free association reconstructed method." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024the,\ntitle={The Labyrinth of Links: Navigating the Associative Maze of Multi-modal {LLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vJ0axKTh7t},\nnote={under review}\n}" }, "abstract": { "value": "Multi-modal Large Language Models (MLLMs) have exhibited impressive capability. However, recently many deficiencies of MLLMs have been found compared to human intelligence, $\\textit{e.g.}$, hallucination. To drive the MLLMs study, the community dedicated efforts to building larger benchmarks with complex tasks. In this paper, we propose benchmarking an essential but usually overlooked intelligence: $\\textbf{association}$, a human's basic capability to link observation and prior practice memory. To comprehensively investigate MLLM's performance on the association, we formulate the association task and devise a standard benchmark based on adjective and verb semantic concepts. Instead of costly data annotation and curation, we propose a convenient $\\textbf{annotation-free}$ construction method transforming the general dataset for our association tasks. 
Simultaneously, we devise a rigorous data refinement process to eliminate confusion in the raw dataset. Building on this database, we establish three levels of association tasks: single-step, synchronous, and asynchronous associations. Moreover, we conduct a comprehensive investigation into the MLLMs' zero-shot association capabilities, addressing multiple dimensions, including three distinct memory strategies, both open-source and closed-source MLLMs, cutting-edge Mixture-of-Experts (MoE) models, and the involvement of human experts. Our systematic investigation shows that current open-source MLLMs consistently exhibit poor capability in our association tasks; even the current state-of-the-art GPT-4V(vision) has a significant gap compared to humans. We believe our benchmark would pave the way for future MLLM studies. $\\textit{Our data and code will be made publicly available.}$" }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multi-modal LLM", "Visual Reasoning", "Association" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/f4a5335f30710035b0f60a260033a24604905892.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "The Labyrinth of Links: Navigating the Associative Maze of Multi-modal LLMs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vJgJSrYPe1
Logic-Logit: A Logic-Based Approach to Choice Modeling
main
Active
Choice Model;Preference Learning;Interpretability;Rule Learning
interpretability and explainable AI
3;3;6;6
3;3;2;3
2;3;3;3
2;2;2;3
3;1;3;3
4.5
2.75
2.75
2.25
2.5
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Can the authors provide more insight into how the model scales with an increase in the number of features or the size of the dataset? Would any adjustments be necessary in the Frank-Wolfe and column generation steps?\n- How does the model handle cases where decision criteria overlap significantly between customer types? For instance, if preferences are highly correlated between types, does the model tend to overfit to certain rules or ignore relevant variation?\n- Would incorporating neural components in conjunction with logic-based rules (e.g., neural embeddings for complex feature spaces) enhance performance without compromising interpretability? This hybrid approach could be of interest if straightforward rule-based methods struggle with nuanced distinctions in large feature sets.\n- How can this method be applied for RLHF, can authors provide experimental results to demonstrate this? i.e., Finetuing an LLM for a Diffusion models." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The model’s combination of interpretable rule-based choice modeling with optimization algorithms is innovative. 
The approach’s focus on interpretable, structured rule extraction addresses a significant gap in choice modeling literature, especially relevant for high-stakes domains.\n- The experimental setup is comprehensive, covering synthetic and real-world datasets. Benchmarks with traditional models and neural networks underscore the model’s effectiveness in balancing accuracy and interpretability. The rule extraction and optimization process is detailed and thoughtfully developed, providing clarity on algorithmic decisions.\n- The paper is generally well-organized. The presentation of the OR-of-ANDs rule structure, Frank-Wolfe algorithm, and column generation steps is clear, supporting reproducibility. The inclusion of rule explanations on real-world datasets aids in understanding practical implications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents Logic-Logit, a rule-based interpretable choice model that utilizes logical rules to predict human choices in contexts like healthcare and commercial domains. The authors aim to address limitations in interpretability associated with existing neural network-based models by proposing a model that represents choices through OR-of-ANDs logic rules. These rules enable compact and interpretable representation of human decision-making. The paper introduces an optimization framework using the Frank-Wolfe algorithm combined with column generation to efficiently extract rules, showcasing empirical success in interpretability and accuracy across synthetic, commercial (Expedia Hotel), and healthcare (MIMIC-IV) datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While the column generation and rule pruning strategies manage computational demands, further discussion on the model’s scalability with significantly larger datasets would enhance the paper. 
For instance, scalability tests on larger commercial datasets would demonstrate practical feasibility in data-intensive domains.\n- The approach involves selecting parameters like the number of rules, rule lengths, and pruning thresholds. More empirical insights into the sensitivity of these parameters on model performance, especially in healthcare contexts, could strengthen robustness claims.\n- While the model shows good accuracy on average, there is less discussion about edge cases, where rule-based logic might oversimplify complex decision boundaries. Addressing potential limitations in handling such cases would provide a more balanced view of the model’s capabilities." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Could you estimate the computational cost of the proposed methods, particularly for real-world datasets? \n2. Are there specific conditions or data characteristics under which this model is computationally more efficient or challenging?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-organized and motivated. 
Its focus on interpretable human preferences is valuable for understanding human decision-making, particularly in high-stakes areas like healthcare and autonomous driving, where trust and transparency are essential.\n2. The approach to modeling human choice could also inspire advancements in fields like automated reasoning (the task authors have verified), where a clear understanding of decision-making processes is crucial." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces an approach to modeling human choice using OR-of-ANDs logic rules. The authors illustrate how any preference can be transformed into a Boolean logic formula, with rules mapped to Boolean space and interconnected through AND and OR operators. This formula is then incorporated into a mixed logit choice model, enabling preference learning. However, this approach results in an infinite-dimensional optimization problem, a major computational challenge. To address this, the authors apply the functional conditional gradient method (Frank-Wolfe) to reduce the optimization’s complexity. Additionally, due to the exponential size of the search space ($2^M - 1$, where $M$ is the number of rules), they use a column generation technique that incrementally expands the search space by adding new rules in each step. Empirical experiments on both synthetic and real-world datasets (from commercial and healthcare domains) highlight the effectiveness and versatility of this approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While innovative, the current solution appears complex, particularly due to the combined use of the functional conditional gradient method and column generation. This complexity may limit its applicability or make implementation challenging for practitioners. 
A more streamlined or efficient approach could enhance the method’s usability across a wider range of real-world applications.\n2. The search space of combinatorial rules increases exponentially with the number of rules $M$, and it needs to be further reduced to accelerate the learning process." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- Item $j$ and $S_t$ are introduced but not used in Equation (1), which creates some confusion.\n- If $p_m$ returns 1 when a condition is met and $x_s$ is a single item, does that mean the resulting conjunctions reduce to just $x_s$?\n- The implementation details of BFS and DFS in the algorithm are unclear, as no specifics or pseudocode are provided. Pseudocode of the algorithm, including all conditions, would be helpful.\n- The notation $\\mathcal{X}$ is used but not defined.\n- Some figures (1, 2) are included but not referenced within the text.\n- Could you clarify the rationale behind the rule distance metric? How would the distance between two unrelated features be interpreted?\n- Why do the neural network (NN) models lack product features?\n- Why aren’t there results for NN-based models on the synthetic dataset?\n- The paper does not provide details on how the baseline methods were trained. 
What are their hyperparameters and how were they trained?\n- Other discrete choice models, such as graph-based approaches [1], mixed logit models, and network formation models [2], are not included for comparison; including them might help illustrate the tradeoffs between methods.\n\n[1] Tomlinson, Kiran, and Austin R. Benson. \"Graph-based methods for discrete choice.\" Network Science 12.1 (2024): 21-40.\n\n[2] Gupta, Harsh, and Mason A. Porter. \"Mixed logit models and network formation.\" Journal of Complex Networks 10.6 (2022): cnac045." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The authors tackle a crucial question in designing interpretable and explainable models.\n- They present an interpretable algorithm tailored for choice modeling.\n- The method is intuitive and demonstrates strong performance.\n- The proposed approach outperforms baseline methods across two datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the development of choice models by constructing a rule-based approach using logical statements that dissect conjunctions. The proposed method employs a dual-optimization process: an outer optimization for determining preference weights, and an inner optimization to identify new rules. The model iteratively refines the rule set by incorporating each newly discovered rule. Assuming convex optimization, the authors apply the Frank-Wolfe algorithm to update preference weights across all rule types. Experiments on a synthetic dataset and the Expedia Hotel Dataset demonstrate that the proposed method consistently outperforms all baseline models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The method does not appear scalable to a large number of features (predicates). 
What is the computational complexity of this approach?\n \n- The experimental details, including hyperparameters (e.g., exact convergence condition used) for the proposed algorithm, are missing and were not found in the Appendix. Significant methodological details are lacking.\n\n- Although logic rules enhance interpretability, they may not be applicable for designing all types of features." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses [a] [b]." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is generally well-written and presents a new explainable choice model with a learning framework. Numerical results indicate strong interpretability and predictive performance, tested on both synthetic and real-world datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces the Logic-Logit, an interpretable, rule-based choice model, along with a learning pipeline built on column generation techniques and the Frank-Wolfe algorithm. The authors also provide numerical results demonstrating the model's interpretability and predictive accuracy." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "[a]There is no discussion regarding the model's capacity. It is unclear whether this model falls under the mixed-logit model or if it is equivalent to a mixed-logit model. If so, why not use the approach in [1] to learn a mixed-logit, given its stronger analytical properties, such as provable convergence guarantees? (Also, the mixed-logit's identified customer segments and the parameters in each segment could also be viewed as the interpretation of the choice?)\n\n[b]I'm confused about if the learning requires prior knowledge of $M$ and $\\{p_m\\}$. For instance, Line 177 mentions, “These mappings, from real-valued features to predicates, are predefined and fixed,” and the learning problem (3) does not appear to learn $\\{p_m\\}$. Yet, in the experiments, the algorithm identifies these rules. This raises questions about the rule search strategies outlined in lines 308-323, and a pseudo-code would help clarify this process.\n\n[c]The literature review lacks a discussion of consider-then-choose models. Since the proposed model follows this approach, it would be beneficial to compare it with other models in this category, such as [2] and [3].\n\nMinor comments:\n\n[a]Given that interpretability is a key advantage of the proposed model, it would be helpful to compare it with other interpretable choice models, such as [4].\n\n[b]Including the variance/std in the experimental results would provide additional insight.\n\n[c]Some potential typos: line 192 should be “item $s$ ” instead of “item $j$”, and also “offer set $S$” instead of “offer set $S_t$”.\n\n\nReferences\n\n[1]Hu, Yiqun, David Simchi-Levi, and Zhenzhen Yan. \"Learning mixed multinomial logits with provable guarantees.\" Advances in Neural Information Processing Systems 35 (2022): 9447-9459.\n\n[2]Liu, Qing, and Neeraj Arora. 
\"Efficient choice designs for a consider-then-choose model.\" Marketing Science 30.2 (2011): 321-338.\n\n[3]Akchen, Yi-Chun, and Dmitry Mitrofanov. \"Consider or Choose? The Role and Power of Consideration Sets.\" arXiv e-prints (2023): arXiv-2302.\n\n[4]Tomlinson, Kiran, and Austin R. Benson. \"Learning interpretable feature context effects in discrete choice.\" Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining. 2021." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024logiclogit,\ntitle={Logic-Logit: A Logic-Based Approach to Choice Modeling},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vJgJSrYPe1},\nnote={under review}\n}" }, "abstract": { "value": "In this study, we propose a novel rule-based interpretable choice model, {\\bf Logic-Logit}, designed to effectively learn and explain human choices. Choice models have been widely applied across various domains—such as commercial demand forecasting, recommendation systems, and consumer behavior analysis—typically categorized as parametric, nonparametric, or deep network-based. While recent innovations have favored neural network approaches for their computational power, these flexible models often involve large parameter sets and lack interpretability, limiting their effectiveness in contexts where transparency is essential.\n\nPrevious empirical evidence shows that individuals usually use {\\it heuristic decision rules} to form their consideration sets, from which they then choose. These rules are often represented as {\\it disjunctions of conjunctions} (i.e., OR-of-ANDs). These rules-driven, {\\it consider-then-choose} decision processes enable people to quickly screen numerous alternatives while reducing cognitive and search costs. 
Motivated by this insight, our approach leverages logic rules to elucidate human choices, providing a fresh perspective on preference modeling. We introduce a unique combination of column generation techniques and the Frank-Wolfe algorithm to facilitate efficient rule extraction for preference modeling—a process recognized as NP-hard. Our empirical evaluation, conducted on both synthetic datasets and real-world data from commercial and healthcare domains, demonstrates that Logic-Logit significantly outperforms baseline models in terms of interpretability and accuracy." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Choice Model", "Preference Learning", "Interpretability", "Rule Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d5c48e3116241b1f2e351e7a161906689a142aa0.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Logic-Logit: A Logic-Based Approach to Choice Modeling" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vJkktqyU8B
Memory Efficient Transformer Adapter for Dense Predictions
main
Active
Vision Transformer;Vision Transformer;Transformer
transfer learning, meta learning, and lifelong learning
5;5;6
4;4;4
2;3;3
3;2;3
2;3;3
5.333333
4
2.666667
2.666667
2.666667
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In Table 2 and Table 3, models of different sizes have the same Memory Consumption (MC). What specific quantity does MC describe, and what measurements lead to this phenomenon?\n2. For different sizes of variants of META, are there differences in the implementation details?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The method proposed in this work is simple but effective, achieving higher performance and efficiency in various classic detection and segmentation frameworks.\n2. The paper provides clear and understandable descriptions of the details of each module in the MEA block, with the design purposes of each module being clear and effective." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes META, an efficient ViT Adapter that enhances ViT in dense prediction tasks. The adapter block MEA provides the local bias required for image tasks to ViT by introducing conv branches, and significantly reduces memory time consumption by minimizing reshape operations on tensors in the adapter. 
In classic dense prediction tasks such as Object Detection, Instance Segmentation, and Semantic Segmentation, META outperforms previous adapter methods in terms of fewer parameters and lower memory consumption. Ablation experiments were conducted to verify the effectiveness of the three modules in the MEA block and the improvement of the model with the MEA cascade." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There is still space on the main text pages, but the implementation parameters of the model are not clarified, such as the number of cascades. Different designs of each size are also not specified." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In your experimental section, you have conducted in-depth explorations of the three tasks: object detection, instance segmentation, and semantic segmentation. To more intuitively demonstrate the specific improvements brought by your model in handling these tasks, are there any relevant visualization results to support this?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper presents a simple and fast ViT adapter named META, which addresses the critical yet underexplored issue of memory inefficiency. The quality of this paper is supported by theoretical foundations and empirical validations across various tasks and datasets, demonstrating that META outperforms state-of-the-art models in terms of accuracy and memory usage. The paper is structured clearly, with detailed architectural descriptions and clear explanations of the proposed motivation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores the limitations of Vision Transformer (ViT) adapters in dense prediction tasks, particularly focusing on the issues of memory inefficiency and slow inference speed caused by frequent reshaping operations and normalization steps. The paper proposes a novel ViT adapter named META, which introduces a memory-efficient adapter block that enables the sharing of normalization layers between the self-attention layer and the feed-forward layer. Furthermore, a lightweight convolutional branch is added to enhance the adapter block. Ultimately, this design achieves a reduction in memory access overhead." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "In the Atte Branch discussed in this paper, the adoption of the cross-shaped self-attention (CSA) mechanism is a pivotal factor in effectively reducing the frequent reshaping operations of the model. However, the current analysis lacks an in-depth comparison and discussion between CSA and other efficient attention mechanisms, failing to fully elaborate on why the selection of CSA achieves the current experimental results. 
\n\nThe ablation analysis in this paper is currently limited to the results of instance segmentation on the MS-COCO dataset, whereas your previous experimental work also encompassed the tasks of object detection and semantic segmentation. Therefore, the current ablation analysis regarding the components of the proposed module has certain limitations in terms of generalization. To more comprehensively evaluate the effectiveness and universality of the module components, I recommend conducting corresponding experimental validations for all three tasks of object detection, instance segmentation, and semantic segmentation, thereby ensuring the accuracy and applicability of the conclusions obtained.\n\nIn this paper, there is an inconsistency in the presentation, specifically between Formula (1) and part (a) of Figure 2, which do not align accurately. Although you have explained later in the text that the channel concatenation step for Fsp and Fvit is omitted in the formula, this omission may still lead to misunderstandings among readers. To ensure clarity and accuracy of the content, we recommend making the two correspond exactly." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see the weaknesses section." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. META introduces a cross-shaped self-attention mechanism and a cascaded process, both of which are grounded in the principles of dividing the entire feature into multiple smaller features to reduce memory costs.\n2. META incorporates local inductive biases by introducing convolutions into the FFN and an additional lightweight convolutional branch. This enables META to achieve better performance in extensive experimental evaluations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a Memory-Efficient Transformer Adapter, termed META, which reduces memory access costs by sharing layer normalization across multiple modules and substituting standard self-attention with cross-shaped self-attention. Meanwhile, META divides the feature map into smaller parts along the channel dimension and processes these smaller features sequentially. Thereby further reducing memory requirements. Experiment results in object detection and instance segmentation indicate that META achieves better accuracies." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Insufficient Motivation (1): META claims that the inference speed of previous adapters is hindered by inefficient memory access operations such as normalization and frequent reshaping, but it lacks experimental analysis to support this claim. It is recommended to provide a detailed breakdown of inference time to show the proportion of inefficient memory access operations in META and previous methods.\n2. Insufficient Motivation (2): META aims to decrease memory access costs by reducing frequent reshaping operations. However, I do not observe any reduction. 
First, the input for attention and layer normalization is $x\\in R^{B\\times L\\times C}$, where B, L, and C denote the batch size, number of tokens, and channels, respectively. In contrast, the convolution accepts input in the format $x\\in R^{B\\times C\\times H\\times W}$. The MEA block mixes many convolutions, layer normalization, and attention. This may result in multiple tensor reshaping operations. Second, the cross-shaped self-attention mechanism divides the features into non-overlapping horizontal/vertical stripes, further compounding the need for tensor reshaping operations. I conjecture that the observed lower memory access costs during experiments are due to the segmentation of the entire feature into multiple smaller features, instead of reducing tensor reshaping operations. I'd like to see a thorough analysis of memory costs associated with each operation in META and previous approaches. This will help clarify where the memory saving comes from.\n3. The results of the ablation study presented in Table 4 indicate that convolutional layers are primarily responsible for the observed improvements (FFN also includes MLP composed of two 3x3 convolutional layers). This raises the question: to what extent does the Attention Branch contribute to these improvements? Consider conducting an additional ablation study that includes the ViT-B along with the FFN Branch, maintaining the same configuration as described in Line 435, but excluding the Attn Branch.\n4. The proposed META is relatively sophisticated and comprises numerous layers (e.g., the cascaded injector includes 16 layers), making it less practical for low-performance hardware. On which hardware do you measure FPS? It is recommended to compare META with other methods on less powerful GPUs such as the V100, rather than A100 or H100.\n5. In Table S3, how do you compare other efficient attention methods? Do you only replace the attention mechanism in the ViT-adapter with other attention mechanisms? 
Please provide further details regarding the experimental setup.\n6. Other minor comments.\nLine 150: The spatial prior requires clarification; is the spatial prior module utilized here identical to that in the ViT-adapter [1]? \nLine 96: TDE Transformer, DeiT is more frequently used. \nLine 182: The term \"which\" appears to ambiguously refer to the prior module rather than the MEA block; it would be beneficial to provide clarification.\nLine 188: In Equation 1, \"Concat\" is a widely recognized abbreviation for concatenation.\nLine 199: \"Attn\" is a more commonly accepted abbreviation for attention compared to \"Atte\". \nLine 223: Should \"respectively\" be replaced with \"sequentially\"? \nLine 166, Table S2 in Supplementary Materials: Do you mean separate normalization for different modules? The use of \"common\" may introduce ambiguity.\n\n[1] Zhe Chen, Yuchen Duan, Wenhai Wang, Junjun He, Tong Lu, Jifeng Dai, and Yu Qiao. Vision transformer adapter for dense predictions. In ICLR 2023." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "In this paper, we propose META, a straightforward and high-speed ViT adapter that enhances the model's memory efficiency and reduces memory access time by minimizing inefficient memory access operations." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024memory,\ntitle={Memory Efficient Transformer Adapter for Dense Predictions},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vJkktqyU8B},\nnote={under review}\n}" }, "abstract": { "value": "While current Vision Transformer (ViT) adapter methods have shown promising accuracy, their inference speed is implicitly hindered by inefficient memory access operations, e.g., standard normalization and frequent reshaping. 
In this work, we propose META, a simple and fast ViT adapter that can improve the model's memory efficiency and decrease memory time consumption by reducing the inefficient memory access operations. Our method features a memory-efficient adapter block that enables the common sharing of layer normalization between the self-attention and feed-forward network layers, thereby reducing the model's reliance on normalization operations. Within the proposed block, the cross-shaped self-attention is employed to reduce the model's frequent reshaping operations. Moreover, we augment the adapter block with a lightweight convolutional branch that can enhance local inductive biases, particularly beneficial for the dense prediction tasks, e.g., object detection, instance segmentation, and semantic segmentation. The adapter block is finally formulated in a cascaded manner to compute diverse head features, thereby enriching the variety of feature representations. Empirically, extensive evaluations on multiple representative datasets validate that META substantially enhances the predicted quality, while achieving a new state-of-the-art accuracy-efficiency trade-off. Theoretically, we demonstrate that META exhibits superior generalization capability and stronger adaptability." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Vision Transformer", "Vision Transformer", "Transformer" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a535a28814eb5c55887d96d437cccd0858511057.pdf" }, "presentation": null, "primary_area": { "value": "transfer learning, meta learning, and lifelong learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/8db1bfa2e03b48aaf43abe94b648a76d139c3772.pdf" }, "title": { "value": "Memory Efficient Transformer Adapter for Dense Predictions" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vJmpg0exYA
DiscQuant: A Quantization Method for Neural Networks Inspired by Discrepancy Theory
main
Active
Quantization;Discrepancy Theory;LLMs;Weights Only Quantization
infrastructure, software libraries, hardware, systems, etc.
3;3;6;6
5;4;2;3
3;3;3;3
2;2;3;4
3;3;3;3
4.5
3.5
3
2.75
3
-0.894427
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How does DiscQuant do if the gradient covariance does not have strong low rank structures? In other words, if the low rank structure of the gradient covariance does not hold, are there alternative strategies by which we could adapt or extend DiscQuant so that the discrepancies may continue to be effective? Perhaps by modulating the discrepancy-based constraints? I'd appreciate any insight into whether or not DiscQuant is extensible to other model architectures and distributions.\n\n2. Of relevance to GPTQ and RTN, it would be interesting to see other baselines of recent data-dependent rounding techniques, such as CDQuant and AdaQuant, which optimize the quantization error. This would give a more thorough view of the possibilities and compromises with DiscQuant. Could the authors include these new baselines in future work or some ideas on how DiscQuant would theoretically compare with them?\n\n3. The proposed method, DiscQuant, basically comprises an optimization loop. How do the computational and memory costs of DiscQuant compare to the alternatives, like RTN and GPTQ? Is the latency or memory overhead for quantization drastic? How would these effects lead to deployment challenges for applications that have specific real-time requirements or are large-scale models? 
A careful study of those practical tradeoffs would be useful for understanding how feasible this approach actually is in a production setting.\n\n4. This paper relies upon discrepancy theory as the basis for its rounding strategy, but not all readers will be as familiar with discrepancy theory as the authors. Could the authors provide additional explanation of why discrepancy theory is especially well-suited to this problem? Moreover, explanations of the concepts of random walks and convex polytopes would be more transparent for readers unfamiliar with these topics if they were illustrated with a simplified example, bridging the gulf between the theoretical approach and practical application.\n\n5. In DiscQuant, the random walk approach, inspired by the Lovett-Meka algorithm, finds some feasible vertex of the polytope. Could the authors provide more intuition for why this method is useful for quantization? For instance, is the random walk actually needed to get low generalization error, or might much simpler methods for finding a vertex of the polytope work comparably? Such intuition would help clarify the design choices and could perhaps lead to avenues for simplification.\n\n6. The paper employs KL divergence as the metric to be minimized to reduce the gap between the original and the quantized model. Would the authors consider alternative metrics, such as MSE or other activation-based losses, to further generalize the properties of DiscQuant? A better understanding of how the loss formulation interacts with quantization would allow practitioners to fine-tune this method for specific applications.\n\n7. Although DiscQuant is only tested on text-based tasks with Phi-3-mini and Meta-Llama models, the authors might highlight the potential applicability of this work to CNNs or vision transformers. Are there any architectural or task-specific restrictions such that the DiscQuant approach would need to be adjusted? 
Further research there would demonstrate the extensibility of DiscQuant and what would need to change to apply the approach to other applications.\n\n8. The authors assume in practice that the gradient space is low rank, an assumption they verify empirically with certain architectures. Do they have additional insight or data on how DiscQuant behaves with higher-rank gradient spaces across datasets or models? Elucidating any limits or changes in performance in such settings would help one understand whether the method's assumptions are generalizable." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper makes a novel contribution to the field of neural network quantization by pursuing an angle inspired by discrepancy theory to tackle the rounding problem. Traditional quantization methods fall along two strands: designing the quantization grid efficiently, or using the standard rounding technique, Round-to-Nearest. This paper instead applies discrepancy theory to optimize the rounding step and shows that nearly all weights can be rounded to the grid while incurring only a small approximation error. This result is established in a theoretical framework that guarantees low error under conditions such as low-rank gradient covariance matrices. The paper thereby formulates a relatively little-investigated aspect of quantization in a new way.\n\nThe methodology is of high quality, with sound theoretical underpinnings for the proposed DiscQuant method. The authors systematically derive bounds on the generalization error, coupling their method with empirically validated properties of the gradient covariance. 
This is further complemented by a robust experimental setup in which DiscQuant is tested on several models (Phi-3-mini-3.8B and Meta-Llama-3.1-8B) and quantization formats (block scaling, incoherence processing), with clear evidence that it surpasses established techniques such as GPTQ and RTN. The results span a wide range of benchmarks and quantization levels, effectively demonstrating the broad applicability and robustness of DiscQuant.\n\nThe paper is very clear in both its structured presentation of the theoretical framework and its algorithmic details. The authors do an excellent job of delineating the motivation behind DiscQuant as well as the central ideas of discrepancy theory. Figures and tables are well incorporated and clarify how DiscQuant differs from other methods. While portions of the mathematical development are demanding, the authors offer intuitive explanations, such as the interpretation of results in terms of quantization grids and convex polytopes, which makes the theoretical contributions accessible to readers with a strong technical background.\n\nThis work could significantly influence further quantization research, especially for large language models, where post-training quantization has to be very efficient for deployment on memory-constrained devices. By framing quantization as a discrepancy problem and giving a practical rounding algorithm that achieves high compression with low loss in accuracy, its contribution could extend well beyond LLMs, to model deployment in mobile and embedded environments. The work also opens a route for further research into discrepancy theory for neural network quantization, which may eventually lead to even more efficient quantization techniques. 
In presentation as well, the paper is a significant and valuable addition to the field, advancing our understanding of effective quantization for modern, large-scale models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new method for quantizing neural networks inspired by discrepancy theory. Traditionally, quantization in neural networks is composed of two processes, namely defining a quantization grid and rounding model weights to that grid. The standard rounding procedure is RTN, alongside data-dependent methods such as GPTQ; in this paper, however, the weights are rounded using a discrepancy-theoretic approach that avoids increasing the loss on unseen data.\n\nDiscQuant relies on a mathematical framework that guarantees low error when rounding nearly all model weights, based on a low-rank assumption on the gradient covariance. The authors establish theoretical bounds guaranteeing that their method achieves expected generalization error of at most epsilon on the data distribution, conditioned on certain low-rank conditions in the gradient space. They use these theoretical results to design a practical rounding algorithm that rounds the model weights by minimizing a regularized objective combining KL divergence with linear constraints, thereby preserving overall model performance.\n\nExtensive experiments on Phi-3-mini-3.8B and Meta-Llama-3.1-8B models across tasks and quantization levels indicate the superior performance of DiscQuant over RTN and GPTQ, particularly at low bit widths. 
The authors show that DiscQuant gains in performance on the GSM8k, ARC Challenge, and PIQA benchmarks, exposing its generalizability and robustness across quantization formats such as block scaling and incoherence processing." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While this paper makes a meaningful contribution, there are areas where it can improve in its theoretical explanations, experimental validation, and practical applicability.\n\nOne area for improvement concerns the theoretical justification of why the proposed method is better than other data-dependent rounding methods, like GPTQ, in settings where the gradient covariance is not strongly low-rank. The authors assume low-rank covariance to support the theoretical bounds for DiscQuant, though it remains unclear how the method would perform when that assumption fails to hold. Discussing cases where the low-rank assumption is violated and either offering theoretical insights into potential limitations or proposing ways to adapt DiscQuant for such cases would strengthen the contribution. This would be an opportunity to refer to alternate bounding techniques that account for high-rank scenarios, or to compare with approaches based on other assumptions.\n\nThe experimental evaluation is thorough for standard models, but additional comparisons against alternative data-dependent rounding techniques that are also effective in PTQ contexts would be helpful. For example, newer approaches like CDQuant or AdaQuant, which all seek to minimize quantization error through data-dependent optimization, would make good baselines. Including experiments with those methods would give more context on the comparative performance of DiscQuant and an outline of its relative advantages. 
Experiments on further tasks beyond text generation and multiple-choice questions - such as real-time inference on mobile devices or in edge computing environments - would further demonstrate how well these results generalize. Extensions in this direction would reveal the flexibility and possible trade-offs of DiscQuant across different practical applications.\n\nWhile clear on the whole, some aspects of the presentation of discrepancy theory could be made more accessible for readers unfamiliar with the area, for example by explaining more explicitly how the rounding step of quantization maps onto a discrepancy problem. As it stands, it is not very intuitive, especially for readers unfamiliar with convex polytopes, to connect the random walk used by the Lovett-Meka algorithm with the rounding process in DiscQuant. This connection needs to be explained further to make the text more readable and understandable.\n\nIn conclusion, though DiscQuant performs well on all benchmarks, the paper could extend its discussion of the practical impact of deploying DiscQuant in real-world applications. For example, since the approach is iterative, a comparison of its computational efficiency and memory usage against simpler approaches such as RTN would be beneficial. A more detailed account of the added time or memory overhead could shed light on the trade-offs between using DiscQuant and alternative approaches. 
Second, some comments on ease of implementation, especially in comparison to such widely used approaches as GPTQ, might be useful for practitioners testing its practical utility.\n\nIn summary, the paper makes a good contribution, but it requires theoretical discussion of the possible limitations of the low-rank assumption, more experiments on a variety of baselines and tasks, a clearer depiction of the role of discrepancy theory, and more practical insight into the trade-offs of deploying DiscQuant in a variety of real-world settings, thereby making the results of this paper more comprehensive, accessible, and applicable across a wider range of contexts." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "One of my biggest questions that remained unresolved while reading the paper was, when discussing the gradient covariance matrix, are we talking about the covariance within a fully connected layer, or could it be between different layers? In general, there is some ambiguity as to what “n” in the parameter space really entails here. Is it the full parameter space (all parameters together), or is this procedure applied per each fully connected layer? If it is the latter, the next question is, in which order are these roundings applied? And would the authors re-calculate the gradients after each step? 
\n\nOn a related note, if this procedure is done iteratively for different fully connected layers, can the authors expand on this? For example, do they quantize every layer as if other layers are in the original form, or do they have an iterative approach where the quantization of the first layer impacts the quantization of subsequent layers?\n\nIf this is done in one shot, is that a complex operation? In general, I would also highly welcome some notes on the complexity of running this algorithm." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "In general, I find this work to be very strong, as it works on an important practical problem, and presents an innovative approach that is both theoretically grounded and practically impactful. Let me enumerate these strengths one by one:\n- The recognition of the problem with existing quantization methods, in that they ignore the importance of the rounding step and only focus on the quantization grid, seems to be a highly relevant and important message of this paper\n- After recognizing this problem, the authors propose a very nice and innovative approach by formulating the problem in terms of discrepancy theory, which in hindsight seems like an excellent choice for this problem\n- The assumptions necessary for the theory seem to be well substantiated and reasonable, and the authors go to reasonable lengths to explain and justify them, rather than hiding them. \n- The empirical results of the paper are as strong as the theoretical results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes \"DiscQuant\", a method to quantize the weights of a large language model, which aims to improve the quantization-accuracy trade-off. 
The authors argue that quantization can be roughly thought of as two steps: 1) coming up with a good quantization grid, 2) a good rounding scheme that maps the exact parameters to a point in the discrete grid. The authors argue that while there has been much effort on step 1, there is less focus on step 2, the rounding procedure. The key theoretical ingredient of the paper is to formulate the problem of rounding in the framework of discrepancy theory. In particular, they aim to bound the errors introduced due to rounding for all seen and unseen samples.\n\nAfter formulating the problem in an exact manner, which is roughly the KL divergence between unrounded and rounded model predictions, they go on to make several assumptions: 1) they assume that a simple rounding up/down suffices, and there is no need for \"jumps\" in the rounding, which becomes more reasonable if the grid is fine enough. 2) they assume a particular low-rank structure of the weight gradients, and that gradients are well behaved (defined as $\\beta$-reasonable). This is key to achieving an unseen or generalization error bound. 3) They assume that the first order approximation to the error is sufficient for calculating the errors. Here, they stress the fact that while loss gradients averaged over samples may be small, the per-sample gradients are not small and in fact dominate the errors due to rounding. \n\nAfter making these assumptions, the paper goes on to state its main theorem, Theorem 3.3, which gives the guarantee for generalization error, and subsequently introduces the algorithm that finds such a rounding efficiently in section 4. Finally, the paper presents what seems like very compelling evidence that the proposed quantization scheme outperforms two baselines (RTN and GPTQ, see Figure 2). 
They also empirically test two key assumptions, the first order error approximation in Figure 3, and the low-rank structure of gradients in Figure 4, which substantiates why their theoretical results are applicable." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My main criticism of the current draft is its lack of clarity on some of the technical/theoretical parts of the paper. \nFor example, the paper would benefit a lot from an expanded explanation of the basics, i.e., the basic discrepancy theory setup that they are casting their problem to, explaining in detail the Lovett-Meka algorithm that they invoke so many times, and then explaining what problems (complexity perhaps) it has, and what they do to fix it. Currently, it seems like the paper assumes the reader is already familiar with all these topics and only presents the bits that are novel. For reference, I spent nearly 1 hour trying to catch up with the basics, namely the Lovett-Meka algorithm, but still only partially understood the technical details of the paper. \n\nEven after an expanded explanation of the theory basics, I think the paper needs to give a more intuitive high-level view of the algorithm. In particular, in section 4, there could be more explanation of the idea behind this formulation/heuristic of minimizing along an arbitrary direction $c$. Perhaps a geometric intuition, similar to the one given in Fig 1, could be given here? \n\nAnother point is the lack of clarity on the complexity of the proposed approach. From my limited understanding, the Lovett-Meka algorithm involves iteratively solving an SDP, which could be highly expensive in some cases. It sounds like this paper addresses some of these complexity issues via the heuristics (e.g., lines 371-272). But it's hard to fully understand the solution if the reader hasn't fully understood the problem. 
So an expanded section on background methods and their complexity, and then the complexity of the heuristics-based approach, would help the reader quite a bit." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The result in the main Theorem 3.3 only depends on the data distribution. In practice, PTQ performance also depends on the value distribution of the weight matrices. If values in the weights are evenly distributed, performance after PTQ is usually better. The authors also mentioned incoherence processing, a popular method to reduce the ranges of weights. I am very curious why the distribution of weights is not reflected in the main Theorem 3.3 for generalization error. \n2. Could the authors explain more about how to solve problem (3) in their algorithm? In my understanding, we may need to run backpropagation to optimize problem (3). If that is the case, the cost is almost the same as training the full model, which is too much for PTQ compared to other existing algorithms. With those resources, people can directly run distillation or quantization-aware training for better compression and better performance. \n3. A follow-up question: can the authors provide time and memory costs for the proposed algorithm?\n4. Nowadays, there are lots of new algorithms for quantizing neural networks. It would be better if the authors compared their algorithm with these works. 
Some recent algorithms are listed in the following: \n * Zhang, Aozhong, et al. \"MagR: Weight Magnitude Reduction for Enhancing Post-Training Quantization.\" arXiv preprint arXiv:2406.00800 (2024).\n * Shao, Wenqi, et al. \"Omniquant: Omnidirectionally calibrated quantization for large language models.\" arXiv preprint arXiv:2308.13137 (2023).\n * Chee, Jerry, et al. \"Quip: 2-bit quantization of large language models with guarantees.\" Advances in Neural Information Processing Systems 36 (2024).\n * Liu, Zechun, et al. \"SpinQuant--LLM quantization with learned rotations.\" arXiv preprint arXiv:2405.16406 (2024).\nThe results shown in the paper seem to be worse than the results reported by these most recent algorithms. It would be great if the authors chose some of them to make comparisons and explained the performance gap." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Theoretical analysis is solid. The paper provides a solid theoretical analysis to study the generalization error of quantization. \n2. Connection with discrepancy theory. It is novel to apply techniques of discrepancy theory to neural network quantization." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studied the generalization gap of quantization using techniques from discrepancy theory under the assumption that the gradient is approximately low-rank. Based on the theoretical analysis, the authors proposed a new quantization algorithm, named DiscQuant. Experiments are conducted to compare the proposed algorithm with existing quantization algorithms, such as RTN and GPTQ." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The weaknesses of the paper mainly come from the numerical algorithm and experiments. \n\n1. 
The proposed algorithm solves the optimization problem (3). It seems like full-model training is required to solve problem (3), which can be too expensive for state-of-the-art large language models. \n2. The authors only compare the proposed approach with RTN and GPTQ, which are relatively old PTQ algorithms in the area. The results seem to show a big gap relative to the SOTA PTQ methods." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "It would be great to hear the authors' comments on the points raised above (see **Weaknesses**). In addition,\n\n1. Lines 249-251: \"We make this assumption because we don’t want to change any parameter of the original model too much during quantization, consider it an important property of algorithms we design\" -- could the authors further explain this design choice, and the drawbacks of potentially relaxing this restriction?\n2. Do the authors plan to release their code?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-written and provides sufficient theoretical results using discrepancy theory (§3).\n2. The proposed algorithm is agnostic to the quantization grid, which makes it quite generally applicable." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes DiscQuant, a data-driven rounding algorithm for post-training quantization (PTQ) of large neural networks. DiscQuant assumes that the gradients of the original model are low-rank. Under this assumption, the authors prove that their algorithm can arbitrarily minimize the upper bound on the error of the quantized model, given an accordingly large number of samples from the target data distribution.\n\nThe algorithm is primarily concerned with the second (rounding) step in quantization. The rounding process aims to minimize the KL divergence between the distributions of the next token predictions of the original and quantized models. The proposed algorithm significantly improves against the existing state-of-the-art when quantizing LLMs like Phi-3-mini-4k-instruct and Meta-Llama-3.1-8B-Instruct." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While I acknowledge the contributions made in this work, I hesitate to place a higher score here because of the following reasons.\n\n1. The scope of the currently presented applications is limited to LLMs. I feel that if VLMs (e.g., Llava-v1.6-34b) were to be tested, consistent results there would greatly increase the paper's relevance.\n2. Even within LLMs, the tested models seem quite small. Quantization is concerned with memory efficiency, so it makes sense to perform experiments with large models (e.g., Llama-3.1-70B-Instruct) that would pose greater storage challenges than the ones tested in the current work. Understandably, as the authors note (lines 405-406), DiscQuant requires two copies of the model to be stored in memory during the quantization process. This limits the scope for large-scale experiments in academic settings, but it also highlights a major shortcoming of the approach.\n3. 
The method needs access to data, which might be the bottleneck for many practitioners who, for instance, quickly need to prototype a handful of models but have neither the resources to run the full models nor the data to quantize them using a DiscQuant-like algorithm. Besides, as the authors note in the last paragraph, the choice of data is in itself non-trivial. \n\n**Minor issues**\n\n1. The authors may want to confine the abstract to one paragraph in the interest of adherence to the ICLR 2025 guidelines.\n2. Line 133: let's --> lets?\n3. Line 379: close *to* the polytype?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We develop a rounding method for quantizing LLMs based on discrepancy theory." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024discquant,\ntitle={DiscQuant: A Quantization Method for Neural Networks Inspired by Discrepancy Theory},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vJmpg0exYA},\nnote={under review}\n}" }, "abstract": { "value": "Quantizing the weights of a neural network has two steps: (1) Finding a good low bit-complexity representation for weights (which we call the quantization grid) and (2) Rounding the original weights to values in the quantization grid. In this paper, we study the problem of rounding optimally given any quantization grid. The simplest and most commonly used way to round is Round-to-Nearest (RTN). By rounding in a data-dependent way instead, one can improve the quality of the quantized model significantly.\n\nWe study the rounding problem from the lens of \\emph{discrepancy theory}, which studies how well we can round a continuous solution to a discrete solution without affecting solution quality too much. 
We prove that given $m=poly(1/\\epsilon)$ samples from the data distribution, we can round all but $O(m)$ model weights such that the expected approximation error of the quantized model on the true data distribution is $\\le \\epsilon$ as long as the space of gradients of the original model is approximately low rank (which we empirically validate).\n\nOur proof, which is algorithmic, inspired a simple and practical rounding algorithm called \\emph{DiscQuant}. In our experiments, we demonstrate that DiscQuant significantly improves over the prior state-of-the-art rounding method called GPTQ and the baseline RTN over a range of benchmarks on Phi3mini-3.8B and Llama3.1-8B. For example, rounding Phi3mini-3.8B to a fixed quantization grid with 3.25 bits per parameter using DiscQuant gets 64\\% accuracy on the GSM8k dataset, whereas GPTQ achieves 54\\% and RTN achieves 31\\% (the original model achieves 84\\%)." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Quantization", "Discrepancy Theory", "LLMs", "Weights Only Quantization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/64956e93ff9220e69dd68d72441ec2eb902b9976.pdf" }, "presentation": null, "primary_area": { "value": "infrastructure, software libraries, hardware, systems, etc." 
}, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "DiscQuant: A Quantization Method for Neural Networks Inspired by Discrepancy Theory" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vJwjWyt4Ed
Learning View-invariant World Models for Visual Robotic Manipulation
main
Active
Robotic manipulation;reinforcement learning;world model
reinforcement learning
3;5;5;6;6
5;2;4;3;3
2;3;2;3;3
2;2;2;4;3
3;3;3;3;3
5
3.4
2.6
2.6
3
-0.716115
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Additional questions beyond those already mentioned in the weakness sections. \n\nI like the idea of training the encoder / decoder using multiple viewpoints. Using a different scene along with a different camera pose to train view-dependent aspects seems elegant. However, I'm not sure that this is really necessary. What is there to be encoded beyond the pose of the camera relative to the manipulator? If there's nothing else, is training the VDE overkill to extract that information from the scene? It'd be interesting to validate what specifically is learned by the VDE (e.g. could you train a readout network to extract relative camera pose?). Also, does this work equally well for more translation and rotation of the camera?\n\nWhy does the model performance drop so significantly in the BC results shown in E.2? I can imagine that this might be due to the frame of reference used for the robot actions.\n\nA discussion in the context of 3D representations for keyframe-based manipulation would be helpful. Several works, such as [1,2], train a model that is independent of the camera viewpoint by generating actions in the camera frame of reference and then translating these into robot actions using a calibrated camera pose. 
Could the same idea be applied here?\n\n[1] Perceiver Actor: https://peract.github.io/\n[2] 3D Diffusor Actor: https://3d-diffuser-actor.github.io/" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The training setup to enforce a separation of view-independent and view-dependent encoders is clever and seems novel.\n\n- The view generation results indicate that the encoders and the decoder learn the kind of representation intended by the authors. \n\n- The evaluation of the learned policies indicates strong improvements over prior approaches to world model learning with respect to\ngeneralization to novel viewpoints." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The goal of this paper is to train world models for robot manipulation\nthat are robust to changes in the position of the camera observing the\nmanipulator and the environment. This is a very relevant problem,\nsince cameras frequently move between training sessions and\ndeployment. To achieve this, the authors introduce a training setup in\nwhich a combination of two VQ-VAE encoders is trained on a multi-view\ndataset; one of the encoders, the VIE, learns to encode the\nview-independent setting and arrangement of the scene, while the other\nencoder, VDE, learns to encode the view-dependent aspects of the\nscene. Taking the two encodings together, a decoder can reconstruct\nthe input scene as if it was observed from the viewpoint represented\nin the other scene.\n\nThe VIE is then used to train a world model that can be applied for\npolicy learning or behavior cloning. The results indicate that the\napproach is able to disentangle the different aspects of the view\ninputs, leading to improved generalization to viewing changes between\ntraining and testing." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weakness of the paper lies in the experimental evaluation. As I understand it, the model needs to learn (at least) two key aspects for a scenario: Where is the camera relative to the manipulator base (VDE) and what is the task-relevant layout of the scene (VIE). The decoder then needs to use that information to generate a novel viewpoint of the input scene. While this seems to work for the test cases, it is not clear whether the success is due to overfitting to a rather small number of settings and tasks, or whether the approach would scale to more complex scenarios, including real world data and camera pose changes beyond azimuth. The current evaluation does not provide sufficient evidence regarding real world significance.\n\nIn light of changing camera poses, the action space of the controller is very important. There are at least three I could imagine: (delta) joint space, (delta) end effector pose in camera frame of reference, (delta) end effector pose in manipulator frame of reference. The specific choice is extremely important for how a policy might transfer to different camera viewpoints. Which specific one was used in the experiments? Could the approach work for all of these? What if the camera calibration of a test scene is known?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Is the proposed algorithm's performance under viewpoint disturbances stable across different tasks, and does it remain consistent outside the simulation tasks involved in this paper?\n2. Why does the integration of Open X-Embodiment data into ReViWo lead to a decline in model performance under Camera Shaking (CSH) for the Door Open task? Additionally, why does the integration of the world model into ReViWo result in reduced performance under Camera Shaking (CSH) for the Drawer Open task?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The approach for learning viewpoint-invariant representations is quite novel. This paper employs a view-invariant encoder and a view-dependent encoder, which take two images from different viewpoints as input. The features encoded by these two branches are then processed through a decoder, utilizing a VAE-like learning objective to decompose view-invariant and view-dependent information.\n2. Both the comparative experiments and the ablation studies are thorough." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This study investigates robust robotic manipulation in the presence of camera viewpoints disturbances. It develops viewpoint-invariant representations learning methods with a VAE-like objective. The learned viewpoint-invariant representations are subsequently utilized for robotic control. The experimental results on two simulation environments demonstrate the enhanced robustness across two types of viewpoint disturbances." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The experiments are conducted on two simulation environments only, and the effectiveness of ReViWo on real-world robots remains unvalidated.\n2. From Figure 4, even when applying the proposed algorithm ReViWo in this paper, the success rate still significantly declines under disturbances caused by Camera Installation Position (CIP) and Camera Shaking (CSH). This indicates that the enhancement in robustness to viewpoint variations achieved by this method is limited.\n3. The representation of Figure 4 is quite misleading. It is recommended to revise it." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The values of the \\lambda_1 and \\lambda_2 coefficients for respectively the VQ-term and contrastive-term in the training objective (equation 3) are listed in Table 3 of the annex, but authors provide absolutely no clue on how these values were chosen, and do not analyze how critical these values might be for the result (my intuition is that they should have a significant impact). 
Could the authors clarify that, and ideally provide at least a minimal quantitative analysis/assessment of how important it could be?\n\nIn the second paragraph of §4.2, the authors write that their proposed method \"consistently surpasses the baselines\", but in figure 4 it can be seen that for the Window-Close task, MVWM has a much higher success rate (~85%) than ReViWo in the CIP case, and a slightly higher success rate in the CSH case. The authors should NOT \"over-claim\" in their text compared to the figure, and should comment on the lesser performance of their method on this task.\nFurthermore, the presentation of figure 4, with its \"skipping\" of the 40%-80% part of the y axis, is somewhat misleading: this must be corrected, possibly by using taller plots that do NOT skip the 40%-80% range.\n\nRegarding the use of Open X-embodiment data, the ablation study reported in Table 1 shows it has a quite significant impact; however, the authors mention on line 227 that they \"introduce a weighting factor in the loss calculation for these unlabeled data\", but it seems they provide no information whatsoever about what value this weighting takes and how critical this value could be for the outcome. 
\nAlso, it appears in table 1 that inclusion of this extra data actually *degrades* results in the CSH case for the Door-Open task; the authors should comment on that.\n\nExamples of the decoder output shown in figure 7 are impressively similar to the ground truth, but are those examples on *test* data or on some of the training data?\nFurthermore, since the success rate falls from over 90% on training data down to below 40% on test data for the first 3 tasks (a), (b) and (c) of figure 4, there must be a significant number of cases for which the view-invariance does not work so well --> the authors should show and comment on some of these failure cases, to allow readers to qualitatively evaluate what happens when results are not as good as in the current figure 7.\n\nIn the last part of §4.4, the authors write that inclusion of their World Model (WM) \"present a consistent performance enhancement\", while in table 2, which provides some ablation on the impact of their world model component, WM appears to slightly *degrade* results for the CSH case of the Drawer-Open task; the authors should comment on that." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The problem addressed by the draft, view-invariance or at least viewpoint-robustness of learnt robotic manipulation policies, and more generally learning of view-invariant world models, is extremely important and challenging.\nThe solution proposed by the authors for learning a view-invariant representation, which consists in learning a disentanglement of view-invariant and view-dependent encodings, is appealingly elegant and seems rather original. It also has the interest, as a by-product, of enabling generation of an arbitrary (?) 
new viewpoint for a given system state, and conversely.\nThe experiments conducted on Meta-world and Panda-gym reported in the paper are rather convincing regarding the ability of the proposed approach to bring significant viewpoint-robustness (figures 4 and 5), and to learn a relatively view-invariant embedding (figure 6)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The draft proposes and evaluates a new approach for learning a view-invariant encoding and world model, in the context of learning vision-based robotic manipulation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weaknesses of the paper are the following:\n - while the \"baselines for comparison\" include MVWM, no comparison is conducted with other important, closely related works mentioned: the RT-X series works and RoboUniView\n- the experiments are conducted on a quite small number of tasks (3 out of 50 on Meta-World), and only one (!) in Panda-Gym; this raises some doubt about a possible \"cherry-picking\" approach in choosing these tasks" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. To my knowledge, the output of a VAE is blurred. The multi-step world model will enlarge the blurring problem. Could you please explain how this problem affects the performance? 
Also, I would appreciate it if the authors could release the structure of the VAE for reproducibility.\n\n2. Also, will the VAE have some errors? For example, will the generated object have different geometries?\n\n3. See weakness." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "View variation is a very practical problem in robotic manipulation. This paper provides good experiments and leverages Open X-Embodiment; leveraging this dataset is a promising direction in robotics. Also, the manipulation video in the link is cool." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents ReViWo (Representation learning for View-invariant World model), a novel approach addressing the challenge of viewpoint changes in robotic manipulation tasks. Traditional methods struggle with performance degradation under varying camera angles; ReViWo overcomes this by leveraging multi-view data to learn robust representations for control under viewpoint disturbance. Using an autoencoder framework, ReViWo combines view-invariant and view-dependent representations, trained on both labeled multi-view simulator data and the Open X-Embodiment dataset (without view labels). Tested in Meta-world and PandaGym environments, ReViWo outperforms baseline methods, maintaining stable performance in scenarios with novel camera angles and frequent camera shaking. These results validate ReViWo’s effectiveness in providing robust, task-relevant representations across diverse viewpoints." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper shows application in simulation. However, in other specific real world scenarios, there is not enough multi-view data for training. 
So, I recommend the authors demonstrate how the study can work in the real world, and show real-world evaluation results.\n\n2. This paper only shows results on some simple tasks. Will this method work on more diverse tasks? For example, manipulating deformable objects like folding cloth. These experiments would improve the significance of this paper in the robotics field." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "(see weakness)" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. It is beneficial to improve the robustness of a robotic policy to camera shaking or viewpoint changes. \n2. Finding low-dimensional state representation for high-dimensional visual signals (such as images), and applying existing offline RL methods on that state representation, is an interesting policy structure. \n3. The writing is easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a framework named ReViWo to learn a view-invariant representation (VIR), then uses the VIR as the low-dimensional state representation to train a policy with model-based offline RL (COMBO). \n\n\n1. 
ReViWo includes a view-invariant encoder and a view-dependent encoder to reconstruct images at different viewpoints by combining the VIR with view-dependent information. \n2. ReViWo was trained from multi-view simulation data and Open-X datasets, and then evaluated on selected tasks in the MetaWorld and PandaGym environments.\n3. The authors show that ReViWo is robust to viewpoint changes in evaluation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**1. (Fundamental Limitation)**\n\nRobotic tasks heavily depend on the understanding of multi-view camera data. For example, it is very common to have a static top camera and a moving on-gripper camera. The focus is on how to leverage information from different views in order to get better performance instead of only using the very limited invariant information. Therefore, by latching onto view-invariant information, the proposed ReViWo is limited to only single-view RGB observation without depth, a highly constrained and often impractical setting in robotics (Note that if we have RGBD, we could reproject the point cloud to multiple views, so RGBD can be considered as multi-view). \n\nMoreover, suppose there is a desk in a single RGBD image; the camera movement can be viewed as the relative movement of the desk as well. This means that a view-invariant representation will lose some ability to sense object layout changes. This is really undesirable. \n\n**2. (Lack of Technical Contribution)**\n\nThe method section looks hand-wavy. The authors propose a view-invariant encoder and a view-dependent encoder to reconstruct images and learn a view-invariant representation. However, the authors did not provide any mathematical guarantee or at least intuition on why the representation can be disentangled that way. It is very likely that the two encoders just work in parallel without having the expected property. 
\n\nMoreover, the authors mentioned the training of a world model and the training of a reward model, but in fact, they used the COMBO method to do all of these [1]. I am not sure whether COMBO, an offline model-based RL method, can be called a world model because it only predicts the next low-dimensional state. The COMBO framework includes the training of a reward model, so it is not the contribution of this method. \n\n**3. (Lack of Proper Evaluation)** \n\nThe authors only evaluate on MetaWorld and PandaGym, two very simple task suites in terms of manipulation diversity, precision, and horizon length. Even on these simple task suites, the authors only select 3 tasks from MetaWorld and 1 task from PandaGym, while MetaWorld has roughly 50 tasks. This evaluation is very insufficient. \n\n\n[1] COMBO: Conservative Offline Model-Based Policy Optimization. NeurIPS 2021" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We study robust robotic manipulation under viewpoint disturbance by learning view-invariant representation and world model." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning View-invariant World Models for Visual Robotic Manipulation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vJwjWyt4Ed},\nnote={under review}\n}" }, "abstract": { "value": "Robotic manipulation tasks often rely on visual inputs from cameras to perceive the environment. However, previous approaches still suffer from performance degradation when the camera’s viewpoint changes during manipulation. In this paper, we propose ReViWo (Representation learning for View-invariant World model), leveraging multi-view data to learn robust representations for control under viewpoint disturbance. 
ReViWo utilizes an autoencoder framework to reconstruct target images by an architecture that combines view-invariant representation (VIR) and view-dependent representation. To train ReViWo, we collect multi-view data in simulators with known view labels; meanwhile, ReViWo is simultaneously trained on Open X-Embodiment datasets without view labels. The VIR is then used to train a world model on pre-collected manipulation data and a policy through interaction with the world model. We evaluate the effectiveness of ReViWo in various viewpoint disturbance scenarios, including control under novel camera positions and frequent camera shaking, using the Meta-world and PandaGym robotics environments. The results demonstrate that ReViWo maintains robust performance under viewpoint disturbance, while baseline methods suffer from significant performance degradation. Furthermore, we show that the VIR captures task-relevant state information and remains stable for observations from novel viewpoints, validating the efficacy of the ReViWo approach." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Robotic manipulation", "reinforcement learning", "world model" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9a0b5d911560f4b013114ec8f60716c83e945dc2.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Learning View-invariant World Models for Visual Robotic Manipulation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vK8C37eHXM
Sample what you can't compress
main
Active
autoencoders_diffusion+generative models
unsupervised, self-supervised, semi-supervised, and supervised representation learning
1;3;3;3;6
5;5;4;4;5
2;2;3;2;3
1;2;1;2;3
2;2;2;2;3
3.2
4.6
2.4
1.8
2.2
0.102062
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "What are the numbers of trainable parameters for the baseline GAN-based method and the proposed method? Is the baseline GAN autoencoder retrained in the same setting? It would be helpful to provide more details about the baseline method." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Diffusion loss for autoencoder training is an important direction to explore. This work is one of the first works that show promising results.\n2. The proposed method outperforms the prior GAN-based autoencoder on ImageNet with the common metrics CMMD and FID. The trend over compression ratios also shows the advantage of the diffusion loss.\n3. The variance visualization is interesting and provides more insight into what information is sampled." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work adds a diffusion loss to the autoencoder training and shows better reconstruction quality than the prior GAN-based autoencoder. Specifically, the proposed method first produces a coarse reconstruction that is supervised by MSE and LPIPS losses; then a diffusion model refines the coarse reconstruction (jointly trained). 
The authors show improvements in both reconstruction at different compression rates and generation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The 2-stage pipeline makes the proposed method a bit less compelling. The autoencoder is still supervised by the MSE + LPIPS loss, while the diffusion loss serves more as refinement (although it is joint training).\n2. The authors claim that the coarse reconstruction is just for speeding up the training as the diffusion decoder \"should converge to true distribution\" (L209); I did not find in the paper either: (i) experiments showing that without the LPIPS loss the model converges to similar performance; or (ii) a theory that guarantees that an autoencoder with only the diffusion loss learns a similar representation to the LPIPS autoencoder. The theory of diffusion models guarantees that p(x|z) can be correctly modeled, but LPIPS should have an impact on the latent representation z and thus may change the distribution p(x|z). Thus the claim is not well-justified to me.\n3. It would be better to have more qualitative visual comparisons. For example, for different compression rates, and with more diverse samples, to justify the improvement of the proposed method. FID may be overly related to the LPIPS weight." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. 
It might be important to mention the “DiffuseVAE: Efficient, Controllable and High-Fidelity Generation from Low-Dimensional Latents” paper. Could you explain the main difference between this work and your approach? Is it possible to compare your model to DiffuseVAE?\n\n2. Is it necessary to train the UNet? Is it possible to use a pre-trained diffusion model without additional training in the refinement part?\n\n3. Nowadays it is important to be able to work with high-resolution images. Can you scale your method to produce high-resolution images?\n\n4. The suggested decoder of the model consists of two main parts, including the additional UNet. Is it possible that your method provides better results because it utilizes a larger model?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The authors provide a full explanation of technical details, including all architecture details and the training process. \n\n2. Wide and well-explained ablation studies were conducted to explore the importance of various components of the model. The experiments exploring the number of denoising steps and CFG scales help to understand the importance of a correct choice of these parameters for quality improvements." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a diffusion-based loss for improving the quality of VAE modeling and reconstruction. The authors propose a new VAE decoder that consists of two main parts, including a Diffusion UNet. The training was conducted with an additional diffusion loss. The proposed model was compared with GAN-based loss methods, and the authors demonstrate that the proposed method yields better results, especially at higher compression rates. 
Additionally, the authors emphasize the importance of the decoder's stochasticity for better details in generated samples." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There is a lack of sufficient metrics for evaluation. It would be better to provide some additional metric calculations such as LPIPS (https://richzhang.github.io/PerceptualSimilarity/) or Inception Score (https://arxiv.org/abs/1606.03498).\n2. Furthermore, the authors provided limited evaluation datasets and comparison models. Additional comparisons with some state-of-the-art methods, such as DC-VAE (https://arxiv.org/pdf/2011.10063) or VAEBM (https://arxiv.org/abs/2010.00654), on some other datasets, such as LSUN (https://arxiv.org/abs/1506.03365) or CelebA-HQ-256, would provide a better understanding of the quality of the model.\n3. Only low-resolution data. Conducting further experiments with higher resolution images would be beneficial to understand the capabilities of the model." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Overall, the method is trivial and cannot match the high standard of ICLR.\n- This is more like an unfinished paper.\n- The footnote leaks part of the author information." 
}, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Adding an additional U-Net and using a diffusion model to improve the performance is promising; it should be able to get better results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors propose to use a U-Net to further refine the results of traditional discriminative autoencoders to alleviate the blurry output and get crisp, high-quality results. Experiments show that the method does improve the visual appearance of the reconstructed images." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- What's the difference between the proposed method and those utilizing diffusion models for super resolution? I cannot see a big difference. \n- The training method is not well explained: do you need to train $E$ and $D_{Initial}$? Or is just $D_{refine}$ trained?\n- In Sec. 3.1, it is said that the impact of $D_{Initial}$ is investigated in Table 1, which it is not.\n- When displaying the reconstruction results, the raw inputs are necessary to see the difference.\n- Besides the figures, more quantitative metrics are necessary to justify the method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- What does \"favorable Hessian\" mean? It doesn't seem to relate to MSE.\n- How does this approach differ from fine-tuning a standard VAE and using diffusion for upscaling?\n- How is the claim of \"speed up training\" verified?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper provides extensive detail on the architectural design of SWYCC, including the detailed structure of the encoder and decoder. However, while this information is valuable for understanding the framework, it is not the primary contribution of the paper and does not introduce novel architectural concepts.\n \n- The inclusion of a classifier-free guidance experiment is a notable strength, as this approach introduces a new aspect to the diffusion-based decoder. However, it is important to note that the classifier-free technique employed significantly increases computation time during the sampling stage, effectively doubling it, which also poses challenges in practical applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents SWYCC, a VAE trained with diffusion loss. It optimizes performance by adjusting loss weights and sampling steps. The authors then compare it to VQGAN-based methods, demonstrating improved image reconstruction." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper includes too few image comparisons. 
It does not show many comparisons between different model compressions, making it difficult to evaluate how SWYCC stands relative to other approaches.\n \n- There is no comparison with Stable Diffusion XL's VAE, which is one of the widely used VAE models.\n \n- The evaluation relies solely on image-generation metrics, which is not common in image reconstruction/compression. While it is acceptable for the authors to report measurements they find appropriate, it is also important to include other widely used metrics in the field, such as PSNR, LPIPS, and rFID.\n \n- The paper lacks mention and discussion of previous diffusion-based autoencoders, such as DiffusionAE ([link](https://diff-ae.github.io/)) and DiVAE ([link](https://arxiv.org/abs/2206.00386)). Care should be taken not to claim to be the first when there are multiple works prior to this.\n \n- Training the decoder under diffusion loss is a concept introduced in 2022. Although the idea of using diffusion as a decoder has been recognized for its benefits, it has not gained popularity compared to GAN-based training primarily due to its higher computational cost. This concern is not adequately addressed in the paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Can the authors please explain the differences between the proposed approach and [1]?\n2. 
Can the authors explain why there is no comparison with additional methods, and in particular why there is no comparison on the rate-perception-distortion plane [2]?\n\n\n[1] Konpat Preechakul et al., \"Diffusion Autoencoders: Toward a Meaningful and Decodable Representation\", CVPR 2022.\n\n[2] Yochai Blau and Tomer Michaeli, \"Rethinking Lossy Compression: The Rate-Distortion-Perception Tradeoff\", ICML 2021" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The proposed approach for image compression is simple and effective. It makes sense that it should work better than GAN based methods, as diffusion models beat GANs in many different applications. GANs are indeed incredibly difficult to train.\n2. The paper is overall clear and written well.\n3. There are several ablation studies." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes to use a diffusion model for image compression. Specifically, the diffusion model is an encoder-decoder architecture, where the encoder encodes the given clean image, and the decoder generates a high-quality image from pure noise (as in standard diffusion methods) while being conditioned on the encoded image. The encoder and the decoder are trained jointly using the standard MSE diffusion loss, together with some tweaks such as adding a perceptual loss after an intermediate decoding step." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. As far as I understand, the proposed approach is not novel. See [1] for example. The only differences that I see are some optimization/loss tweaks. If the authors could clarify the differences and why their approach is novel, that would be great. Currently, [1] is not discussed in the manuscript.\n\n2. 
A comparison with several previous methods is missing, specifically a comparison on the rate-perception-distortion plane [2]. When designing compression methods that only aim for minimal distortion (e.g., MSE), we usually compare them on the rate-distortion plane, as the authors did in figure 8. However, when designing high-perceptual-quality compression methods, we usually compare them on the rate-perception-distortion plane, as these three desired forces are at odds with each other [2].\n\n3. The authors only demonstrate their method on ImageNet. But what about simpler data sets, such as CelebA,CIFAR, etc.? Is the proposed approach still more effective than GANs?\n\n4. The limitations section is very limited. The authors discuss only the limitation of almost all diffusion methods: requiring a large number of inference steps to produce high-quality images. What about model size, training time, required data set size, etc., as compared to GAN based methods?\n\n[1] Konpat Preechakul et al., \"Diffusion Autoencoders: Toward a Meaningful and Decodable Representation\", CVPR 2022.\n\n[2] Yochai Blau and Tomer Michaeli, \"Rethinking Lossy Compression: The Rate-Distortion-Perception Tradeoff\", ICML 2021" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We replace GAN loss with a diffusion loss while training autoencoders and show that autoencoder produces less distortion while being better for generation." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024sample,\ntitle={Sample what you can't compress},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vK8C37eHXM},\nnote={under review}\n}" }, "abstract": { "value": "For learned image representations, basic autoencoders often produce blurry results. Reconstruction quality can be improved by incorporating additional penalties such as adversarial (GAN) and perceptual losses. 
Arguably, these approaches lack a principled interpretation. Concurrently, in generative settings diffusion has demonstrated a remarkable ability to create crisp, high quality results and has solid theoretical underpinnings (from variational inference to direct study as the Fisher Divergence). Our work combines autoencoder representation learning with diffusion and is, to our knowledge, the first to demonstrate the efficacy of jointly learning a continuous encoder and decoder under a diffusion-based loss. We demonstrate that this approach yields better reconstruction quality as compared to GAN-based autoencoders while being easier to tune. We also show that the resulting representation is easier to model with a latent diffusion model as compared to the representation obtained from a state-of-the-art GAN-based loss. Since our decoder is stochastic, it can generate details not encoded in the otherwise deterministic latent representation; we therefore name our approach \"Sample what you can't compress\", or SWYCC for short." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "autoencoders_diffusion+generative models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2df34d221a508cbd161f24f41ea1fdc386570b3b.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Sample what you can't compress" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vKG270UOg4
BDC-Occ: Binarized Deep Convolution Unit For Binarized Occupancy Network
main
Active
3D occupancy prediction; binarized networks
applications to robotics, autonomy, planning
3;5;5;5
3;2;3;4
3;3;3;2
3;2;3;2
3;3;2;3
4.5
3
2.75
2.5
2.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Why are so many versions of the main block needed?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Its main strength is the theoretical insight that 1x1 binarized convolution is more robust to binarization and is thus used to make the network deeper. Furthermore, it introduces an additional branch within the network to further refine the per-channel output of each layer based on this observation.\n- The results indicate an improvement over other binarization methods in terms of IoU and mAP.\n- Ablation studies are performed to show the impact of the various proposed changes." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a method for binarizing convolution operations in binary occupancy networks. It first analyzes the impact of binarization theoretically, and then proposes a mitigation approach based on a binarized convolution unit that enhances performance. It provides results based on two benchmarks and comparisons with other networks and binarization methods."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- In terms of comparing the FPS with FlashOCC: one of the main objectives of binarization is to provide FPS speedups, and since the provided results are marginally improved at best, the proposed approach seems less necessary. Instead, FPS should be compared with other binarization methods as well for fairness. \n- From the current writeup it is not clear how the proposed module can be plugged into other existing methods.\n- Since, in my understanding, the backbone is left as FP32, I am unconvinced why binarization is necessary and why other forms of quantization are not considered appropriate.\n- It is not clear what aspects of the approach are specific to the occupancy task and what is generalizable to other tasks as well." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Here are some questions and suggestions for the authors:\n\n1. **Alternative Binarization Techniques**: The paper focuses on BiSR-Conv for binarization. Could the authors provide a comparison with other established techniques like XNOR-Net or DoReFa-Net? This would help understand if the performance gains stem specifically from the BDC unit or are also achievable with other carefully tuned binarization methods.
Furthermore, exploring ternary or higher bit-width quantization could provide a more nuanced understanding of the trade-off between accuracy and efficiency.\n\n2. **[minor] Comparison with Other Compression Methods**: Beyond BNNs, how does BDC-Occ compare to other model compression techniques like pruning, quantization, and knowledge distillation in the context of 3D occupancy prediction? Providing quantitative results or discussion comparing these methods would strengthen the argument for BDC-Occ's practical value.\n\n3. **Detailed Analysis of Per-Channel Refinement Branch**: The ablation study provides some insights, but a more in-depth analysis of the per-channel refinement branch is needed. How sensitive is its performance to the number of layers in `MulBiconv` (N)? Are there alternative architectures for this branch that could further improve cross-channel feature learning? A visualization of the learned channel weights might also offer insightful qualitative analysis.\n\n4. **[minor] Generalizability to Transformers**: The stated limitation to CNN architectures raises questions about BDC's broader applicability. Have the authors explored applying BDC, or a modified version, to Transformer-based occupancy networks? Even negative results in this direction would be valuable for understanding the challenges and potential future work." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Originality: The paper demonstrates originality in its identification and solution for the performance degradation problem in binarized 3D occupancy networks. While BNNs have been explored in other domains, applying them effectively to 3D occupancy prediction, especially with a focus on maintaining performance with increasing network depth, is novel. 
The proposed BDC unit, with its 1x1 kernel constraint and the per-channel refinement branch, presents a creative combination of techniques tailored to address the specific challenges of binarization in this context.\n\nQuality: The technical quality of the paper is good. The authors provide theoretical justification for their design choices and conduct thorough experiments to validate the effectiveness of the BDC unit. The results convincingly demonstrate the superiority of BDC-Occ over other state-of-the-art binarized methods and its competitiveness with full-precision models. The ablation studies further strengthen the claims by showcasing the individual contributions of each component of the BDC unit.\n\nClarity: While the core ideas are presented clearly, the clarity of the paper could benefit from some improvements. The mathematical proofs are clear. Additionally, visually presenting the overall architecture of BDC-Occ would aid understanding.\n\nSignificance: The significance of the work lies in its potential to enable deployment of accurate 3D occupancy prediction on edge devices. The substantial reduction in computational cost achieved by BDC-Occ, without sacrificing accuracy, is notable. It would be nice to discuss a roadmap for applying it to Transformers, to inspire further research." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces BDC-Occ, a novel binarized neural network (BNN) for 3D occupancy prediction. Recognizing the computational challenges of deploying existing 3D occupancy networks on edge devices, the authors leverage BNNs for model compression. However, they note that simply binarizing existing models leads to performance degradation, particularly when increasing network depth.\n\nThe paper's core contribution is the Binarized Deep Convolution (BDC) unit. It addresses the identified limitations of binarized convolutions through two key innovations.
First, additional convolutional kernels within the BDC are constrained to 1x1 to minimize the impact of binarization errors as network depth increases. Second, a per-channel refinement branch reweights outputs using a first-order approximation, improving the capture of cross-channel feature importance.\n\nThrough extensive experiments on the Occ3D-nuScenes dataset, the authors demonstrate that BDC-Occ achieves state-of-the-art performance among BNNs, even rivaling full-precision models in mIoU while significantly reducing parameters and operations. They further validate the generalizability of the BDC unit by showing its effectiveness in 3D object detection tasks on the nuScenes dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Limited exploration of binarization strategies**: The paper primarily focuses on binarizing convolutional layers using the BiSR-Conv method. Exploring alternative binarization techniques, such as XNOR-Net [1] or DoReFa-Net [2], and comparing their performance with BDC-Occ would strengthen the analysis and potentially reveal further insights. It's also important to investigate the impact of different activation functions specifically designed for BNNs, like those proposed in ReactNet [3].\n\n2. **[minor] Lack of comparison with other compression techniques**: The paper positions BDC-Occ as a solution for deploying occupancy networks on edge devices. However, it lacks a comparison with other model compression methods beyond binarization, such as pruning [4], quantization [5], or knowledge distillation [6]. Demonstrating the advantages of BDC-Occ over these alternatives would significantly bolster its practical significance.\n\n3. **[minor] Limited generalizability**: The paper acknowledges the limitation to CNN architectures. However, this limitation needs further discussion and investigation. 
This would help understand the challenges and potentially open avenues for future research on binarizing more diverse network architectures.\n\n\n**References:**\n\n[1] Hubara et al., Binarized Neural Networks, NIPS 2016.\n\n[2] Zhou et al., DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients, arXiv 2016.\n\n[3] Liu et al., Reactnet: Towards precise binary neural network with generalized activation functions, ECCV 2020.\n\n[4] Han et al., Learning both Weights and Connections for Efficient Neural Networks, NIPS 2015.\n\n[5] Jacob et al., Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference, CVPR 2018.\n\n[6] Hinton et al., Distilling the Knowledge in a Neural Network, arXiv 2015." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No ethics review needed." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In the paper, binarization is adopted for deployment on edge devices. Is there any experiment related to this?\n2. The FPS in Table 3 seems to be a small improvement compared to FlashOcc, which is a floating point model. Is there any experimental results of FPS and run time of BDC-B?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. 
This paper is the first study to apply binarization to the task of 3D occupancy prediction. The authors' method significantly reduces the computational cost while maintaining the performance.\n2. The proposed BDC unit significantly improves the performance of the network through theoretical analysis." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "BDC-Occ proposes a BDC unit that applies binarization to 3D occupancy prediction tasks. To alleviate binarization errors, they use a 1x1 binarized convolution and design a BDC unit based on it. The authors show that their method reduces the performance gap with floating-point models and significantly reduces hardware resources." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. What is the contribution of BDC-V2 in Figure 3? It seems that it only increases the computational cost, with no performance improvement. Furthermore, MultiBiconv seems to be a multiple iteration of the technique from BDC-V1.\n2. The paper seems to propose the design of a binarization methodology, not a design for Occupancy Prediction. While this is the first application of binarization to occupancy prediction, other binarization methodologies seem to be easily adaptable. \n3. The design in Section 3.4 seems to be very similar to that of BiSRNet, which may limit the contribution of the paper. Is there anything different about the design of BiSRNet?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- The paper would benefit from a clearer description of the hardware platform used and the specific implementation strategy for 1-bit quantized convolution. Key questions remain unanswered, such as whether the quantized operations are run on a specialized computing framework like TensorRT, which is widely used for deploying optimized deep learning inference on edge and server devices. Additionally, details on how 1-bit quantized convolution is implemented—such as the use of custom kernels, acceleration libraries, or optimizations for reduced precision—would provide readers with a clearer understanding of the practical feasibility and performance considerations of this approach.\n\n- If the work relies on standard hardware (e.g., CPUs or GPUs) without the assistance of dedicated quantization frameworks, this would impact performance significantly compared to using more specialized hardware like TPUs or FPGAs, which are often better suited for extreme quantization levels. Specifying these factors is crucial for evaluating the performance claims, as well as the actual benefit of 1-bit quantization in terms of speed and efficiency." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Occupancy estimation plays a critical role in real-time applications like autonomous driving, where precise environment mapping is essential for safe and efficient vehicle navigation. However, the computational demands of occupancy estimation, especially when deployed on device-side platforms like Orin GPUs, create significant challenges in terms of efficiency and resource constraints. 
This research addresses these challenges by exploring strategies to minimize computational overhead, a vital area of study given the increasing emphasis on edge computing in autonomous systems. By carefully engineering these binarization blocks, the proposed approach effectively mitigates quantization loss, thus maintaining model accuracy while reducing resource consumption." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "To reduce parameters and computational costs, this paper proposes a binarized occupancy network tailored specifically for CNN-based occupancy networks. The approach introduces two key techniques: first, an additional 1x1 binarized convolution layer is added to increase network depth, thereby enhancing feature extraction while maintaining efficiency. Second, a per-channel refinement branch is incorporated to reduce quantization error, improving the model’s precision despite the constraints of binarization. These enhancements aim to optimize the balance between performance and resource efficiency in the proposed binarized network." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "# Major Concern:\nWhile this work serves as an early exploration of binary quantization for the occupancy prediction task, several significant limitations are apparent. The study does present two main findings in the context of binary occupancy networks:\n- 1×1 binarized convolution introduces only minimal binarization errors as network depth increases.\n- Binarized convolution is notably less effective than full-precision convolution at capturing cross-channel feature importance.\n\nHowever, the proposed strategies to address these issues—namely, enhancing network depth through additional 1x1 binarized layers and adding a per-channel refinement branch—have already been extensively explored in the quantization literature. 
This repetition limits the novelty of the work, and the paper does not provide fresh insights into the specific task of occupancy prediction. The binary quantization methods introduced here, while perhaps incrementally improving upon existing binary quantization techniques, do not directly tackle the occupancy task itself. Instead, they represent a basic application of binary quantization to occupancy, yielding largely expected results. As such, I believe this work falls short of the standards expected at a top-tier machine learning conference, and the main track may not be the optimal venue for this submission.\n\n# Minor Comments:\n- This study restricts its exploration to CNN-based approaches for occupancy tasks, without incorporating transformer-based occupancy networks. Transformer models have gained traction and demonstrated superior performance in related fields, and their absence here limits the study’s relevance to current research trends.\n- The paper outlines a manually crafted binary network architecture aimed at mitigating quantization-induced errors. However, quantizing both weights and activations from 32-bit to 1-bit introduces significant quantization errors. Without innovations specifically targeted at reducing such errors in this context, the proposed method struggles to demonstrate substantial improvements.\n- The speed advantage observed (6.22 FPS versus 7.53 FPS) is marginal, casting doubt on the practical benefits and motivation of the approach. If the goal is to address computational cost and memory constraints in existing occupancy networks, a rigorous analysis of actual speedup and memory reduction should be provided. Alternatively, if the study’s focus is to explore novel quantization techniques for this task, one would expect groundbreaking insights tailored to occupancy prediction. Given the current submission, both aspects seem underdeveloped." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024bdcocc,\ntitle={{BDC}-Occ: Binarized Deep Convolution Unit For Binarized Occupancy Network},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vKG270UOg4},\nnote={under review}\n}" }, "abstract": { "value": "Existing 3D occupancy networks demand significant hardware resources, hindering the deployment of edge devices. Binarized Neural Networks (BNNs) offer a potential solution by substantially reducing computational and memory requirements. However, their performances decrease notably compared to full-precision networks. In addition, it is challenging to enhance the performance of the binarized model by increasing the number of binarized convolutional layers, which limits its practicability for 3D occupancy prediction. This paper presents two original insights into binarized convolution, substantiated with theoretical proofs: (a) $1\\times1$ binarized convolution introduces minimal binarization errors as the network deepens, and (b) binarized convolution is inferior to full-precision convolution in capturing cross-channel feature importance. Building on the above insights, we propose a novel binarized deep convolution (BDC) unit that significantly enhances performance, even when the number of binarized convolutional layers increases. Specifically, in the BDC unit, additional binarized convolutional kernels are constrained to $1\\times1$ to minimize the effects of binarization errors. Further, we propose a per-channel refinement branch to reweight the output via first-order approximation. Then, we partition the 3D occupancy networks into four convolutional modules, using the proposed BDC unit to binarize them. 
The proposed BDC unit minimizes binarization errors and improves perceptual capability while significantly boosting computational efficiency, meeting the stringent requirements for accuracy and speed in occupancy prediction. Extensive quantitative and qualitative experiments validate that the proposed BDC unit supports state-of-the-art precision in occupancy prediction and object detection tasks with substantially reduced parameters and operations. Code is provided in the supplementary material and will be open-sourced upon review." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "3D occupancy prediction; binarized networks" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1a638b2c5da41ed101f47fa9ce6f0f23e65f2b65.pdf" }, "presentation": null, "primary_area": { "value": "applications to robotics, autonomy, planning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/fe37ec825ab3ef7c3d44e7e997f31eae9aa11e21.zip" }, "title": { "value": "BDC-Occ: Binarized Deep Convolution Unit For Binarized Occupancy Network" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vKJ8YH0iNp
MGD$^3$: Mode-Guided Dataset Distillation using Diffusion Models
main
Active
Dataset Distillation; Dataset Condensation; Diffusion;
generative models
3;3;5;8
4;4;4;4
2;2;2;3
2;1;3;3
2;2;3;3
4.75
4
2.25
2.25
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "As stated in the weakness part." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The authors consider the soft-label protocol and hard-label protocol, which are important for fair comparison in dataset distillation. We can see the difference between the two protocols in Table 10. \n\n2. The performance improvements reported in this paper are significant." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a generative prior method called MGD for dataset distillation tasks. This paper follows the idea of the baseline MiniMaxDiff Gu et al. (2024) to enhance the diversity of the generated synthetic data. MGD influences the generation process by leveraging DiT models without requiring re-training or fine-tuning. Specifically, the authors propose to utilize the so-called mode to regularize the generation of synthetic data. However, the formulation for calculating these modes is not clearly explained. Based on Figure 2, my understanding is that the modes are the cluster centers of the original dataset. Additionally, the paper lacks theoretical proof to support the effectiveness of the proposed mode-guided approach.
The demonstration in Figure 1 selects only four data instances to illustrate diversity improvement, which seems highly subjective." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. It is good that the authors clarify the soft-label protocol and hard-label protocol. However, what protocol is used in Table 1, 2, 3? If the soft-label protocol is used, what are the parameters of validation epochs, teacher networks, and the data augmentation methods? \n\n2. I do not see a clear formulation explaining the derivation of the modes. Only Section 3.2 (lines 307 to 319) discusses the application of modes. My main concern is with the method used to select the modes; specifically, specially designed parameters for generating these modes could significantly impact the final performance.\n\n3. Based on Figure 2, my understanding is that the modes are the cluster centers of the original dataset. This approach is very close to the Dream method. Please provide more details about the difference between the two methods. \n\n\n4. The abstract could be more concise. For example, the limitation mentioned from lines 12 to 20 can be condensed into two sentences. The description of your proposed method could be more generalized. Additionally, the caption of Figure 1, and the related works could be more concise as well. \n\n5. The authors should emphasize more about the derivation, and formulation on the modes. However, most of the introduction and related work are telling the basic information of the previous methods. I cannot find a strong motivation or intuition to develop the mode-guided approach. Also, I doubt the performance enhancement is highly affected by the parameters in obtaining the modes. \n\n6. The random method achieves a very high performance as stated in Table 1. What if the set is constructed by a dataset pruning method, such as [2], and is evaluated by the soft-label protocols? \n\n7.
Although the authors list five contributions in this paper, points 1, 3, 4, and 5 seem redundant, essentially representing the same idea. Additionally, point 2 merely reiterates a widely acknowledged finding from previous works—that diversity is beneficial.\n\n[1] Liu, Yanqing, et al. \"Dream: Efficient dataset distillation by representative matching.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\n\n[2] He, M., Yang, S., Huang, T., & Zhao, B. (2024). Large-scale dataset pruning with dynamic uncertainty. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7713-7722)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "No." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.\tThe idea of mode guidance for target-distribution synthesis is reasonable and easy to implement. \n\n2.\tExtensive experiments and study have been provided." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper adapts the pretrained diffusion model for dataset distillation. Specifically, with a pretrained diffusion model, the authors introduce mode guidance in the early denoising stages. 
The modes for guidance are discovered from the target data distribution with a VAE encoder. Experiments show that the new method outperforms some diffusion model baselines and dataset distillation methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThe writing can be improved, especially the logic and causality in the introduction. The citation format is unsuitable. Some words/phrases have inconsistent capitalization. \n2.\tSome statements are not rigorous:\na)\tThe authors claim “… diffusion models … do not suffer from mode collapse” in the introduction. Is it theoretically or empirically proved in previous work? It also conflicts with the claims in other paragraphs. \nb)\t“We validate this by addressing the following question: Given a pre-trained diffusion model, can a distilled dataset be extracted from this model, as it has learned the data distribution?” As claimed in the abstract, the diffusion model is pre-trained by others on some datasets. It is not guaranteed that the diffusion model “has learned the data distribution” of “a distilled dataset”. \n3.\tThe listed five “contributions” are mostly repeated and trivial. \n4.\tAccording to Table 1, the new method is better than some diffusion baselines, while obviously worse than the classic dataset distillation methods, e.g., DM 2023. \n5.\tThough images from the target distribution are synthesized by the diffusion model, there is no connection to dataset distillation, in which data/knowledge should be distilled/condensed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Could you go beyond the traditional framework of diffusion model-based image generation and further explain the relationship between your method and dataset distillation?\n\n- I’m curious about the resource consumption of $MGD^3$, such as runtime and memory usage. Could you provide more details on this?\n\n- How do you ensure that the modes obtained using the K-means clustering algorithm are meaningful in the context of dataset distillation? Have you tried other algorithms for mode discovery in each class?\n\n- Given the increase in sample diversity, I would like to see a comparison of cross-architecture evaluations with other diffusion + DD methods. Could you include more network architectures? The paper only uses three networks for evaluation.\n\n\nIf you can address all or most of my questions, I would consider giving a higher score." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Ensuring dataset sample diversity has long been a challenge in the field of dataset distillation. 
This paper addresses this by employing mode guidance to generate as diverse samples as possible for each class, minimizing redundancy and significantly enhancing intra-class diversity in the generated dataset.\n\n- The paper utilizes pre-trained diffusion models to generate datasets without the need for additional fine-tuning, relying only on guidance during the denoising process, which simplifies the approach and improves efficiency.\n\n- The stop guidance mechanism strikes a balance between sample diversity and quality, ensuring diverse samples while maintaining high quality, and preventing potential negative effects from excessive guidance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a mode-guided diffusion model that generates synthetic datasets using pre-trained models without the need for fine-tuning. The method operates through three key stages: mode discovery, mode guidance, and stop guidance. These stages ensure both enhanced data diversity and high sample quality. Furthermore, the approach is versatile, making it applicable to various diffusion models. Experimental results demonstrate that the proposed method surpasses existing techniques across multiple datasets, effectively addressing the issue of limited mode diversity in generated data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The mechanisms of mode and stop guidance are clear but lack strong theoretical support.\n\n- The method is easy to understand but could be optimized, such as by automating the adjustment of $t_{SG}$ and improving the mode discovery algorithm.\n\n- The approach is still fundamentally about image generation via diffusion models, with insufficient exploration of its contribution to dataset distillation." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Can you please tell if the results of CIFAR10/100 and tinyImageNet have also improved?\n2. Have you tried other cluster methods in experiments to check if they influence performance?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The quantitative results on ImageNet-1k and its subsets show improvement across all tables compared to both the baseline and state-of-the-art methods at various ipc. \n2. The pretrained diffusion model generates distinct samples that ensure intra-class diversity with the help of the guidance signal in reverse process.\n3. Training time is reduced as the pretrained diffusion model does not require fine-tuning." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a mode guidance method in the reverse process of a pretrained diffusion model to enhance the diversity of synthetic images generated for dataset distillation task. They calculate the mode by kmeans clustering on the original dataset, then calculate the mode guidance score which is added to the noise function at appropriate time steps during the reverse process of the diffusion model. 
This approach achieves both representativeness and diversity in the synthetic images." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The image quality, the information loss and the recovery of the data distribution rely heavily on the diffusion model.\n2. The modes, computed simply via k-means, may insufficiently represent the original data distribution, and the authors do not compare with other clustering methods." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024mgd,\ntitle={{MGD}\\${\\textasciicircum}3\\$: Mode-Guided Dataset Distillation using Diffusion Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vKJ8YH0iNp},\nnote={under review}\n}" }, "abstract": { "value": "Dataset distillation aims to synthesize a smaller training set from a large dataset such that a model trained on this distilled set performs comparably to one trained on the entire dataset. For image classification, earlier methods proposed optimization strategies in the input space to synthesize a distilled dataset, but they are computationally expensive and difficult to scale to higher resolutions. Also, the datasets synthesized by these methods lack intra-class diversity as they ignore the modes of the data distribution. Recent works propose using generative models, among which diffusion models have shown promising results as they are known to capture the data distribution effectively. However, diffusion models tend to over-sample from the prominent modes of the data distribution, resulting in limited diversity in the generated samples. To address these limitations, in this work we propose a mode-guided diffusion model.
Unlike existing works that fine-tune the diffusion models for dataset distillation, we propose to use a pre-trained model without the need for fine-tuning. Our novel approach consists of three stages: Mode Discovery, Mode Guidance, and Stop Guidance. In the first stage, we discover distinct modes in the data distribution of a class to build a representative set. In the second stage, we use a pre-trained diffusion model and guide the diffusion process toward the discovered modes to generate distinct samples, ensuring intra-class diversity. However, mode-guided sampling can introduce artifacts in the synthetic sample, which affect the performance. To control the fidelity of the synthetic dataset, we introduce the stop guidance. We evaluate our method on multiple benchmark datasets, including ImageNette, ImageIDC, ImageNet-100, and ImageNet-1K; Our method improved $4.4$%, $2.9$%, $1.6$%, and $1.6$% over the current state-of-the-art on the respective datasets. In addition, our method does not require retraining of the diffusion model, which leads to reduced computational requirements. \nWe also demonstrate that our approach is effective with general-purpose diffusion models such as Text-to-Image Stable Diffusion, eliminating the need for a pre-trained model in the target dataset." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Dataset Distillation; Dataset Condensation; Diffusion;" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2e1783b87ddd703a83c1ec0a46585504ebb69483.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "MGD$^3$: Mode-Guided Dataset Distillation using Diffusion Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vKL1i2p5Xr
Text as Any-Modality for Zero-shot Classification by Consistent Prompt Tuning
main
Active
Multimodal Learning ; Prompt Learning; Zero-shot Classification;
unsupervised, self-supervised, semi-supervised, and supervised representation learning
5;5;5;5
4;3;4;3
3;1;3;2
3;2;3;2
3;2;2;2
5
3.5
2.25
2.5
2.25
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "None" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See the weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper focuses on representation in a multimodal setting, which is an interesting and important field. It also utilizes important tools: a contrastive loss for inter-modal learning and a ranking loss for intra-modal learning.\n2. The training process is efficient, relying only on a prompt pool as the training parameter and using pretrained models like CLIP, CLAP, and LLM to eliminate the need for complex data collection.\n3. The experiments include both objective metrics and visual figures to help understand the effectiveness of the new method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on representation in a multimodal setting, introducing a method to build class-specific prompts across three types of inputs: image, video, and audio. The training process is efficient, relying only on a prompt pool as the training parameter and using pretrained models like CLIP, CLAP, and LLM to eliminate the need for complex data collection.
However, the experiments discussed in this paper are limited to relatively simple classification problems involving images, videos, and audio. This hinders my ability to fully understand the potential of these methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main issue lies in the novelty and practical functionality for this field:\n1. The contrastive loss and ranking loss are not new; they have been applied to multimodal representation learning [1, 2] for some time.\n2. The application of this paper is currently limited to relatively simple classification tasks, for which many existing tools perform well. I encourage the authors to include additional tasks, such as conditional image or audio generation using the novel class.\n3. The prompt pool is limited at the beginning, so what if the user wants to add a new concept? Although the pipeline can introduce new concepts, the issue is how quickly this can be done compared to the overall training of the method. I encourage the authors to provide more empirical results or theoretical analysis on the time complexity of adding new concepts in comparison to the initial training time.\n\n[1] Wang Z, Zhao Y, Huang H, et al. Connecting multi-modal contrastive representations[J]. Advances in Neural Information Processing Systems, 2023, 36: 22099-22114.\n\n[2] Zheng L, Jing B, Li Z, et al. Heterogeneous contrastive learning for foundation models and beyond[C]//Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2024: 6666-6676.
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See details in Weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The authors present a simple approach to prompt building and describe the build process in detail.\n- The authors propose a uni-directional contrastive loss to facilitate intermodal training.\n- TaAM-CPT effectively integrates the image/audio/video modalities and achieves competitive performance on classification tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors explore a generic representation model capable of scaling to an infinite number of modalities without the need for any modality-specific labelled data. TaAM-CPT simplifies the design by characterising any modal category as a randomly initialised vector, and exploits the instruction-following capabilities of the LLM to allow easy access to textual training data of any category. TaAM-CPT ensures the flexibility to add any category from any modality without retraining the already learned category-specific prompts.
Additionally, the authors designed a uni-directional contrastive loss, which uses modalities with stronger representational capabilities to guide the learning of those that are weaker." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Making prompts requires inserting **{Label}**; does this mean that different pools of prompts need to be designed for different datasets?\n- The LLM's hallucinations may affect the quality of prompt generation; does TaAM-CPT have a process for quality checking during prompt production?\n- TaAM-CPT needs to adjust Inter-modal Learning based on validation performance, but I noticed that some of the datasets' validation sets are being used for evaluation; is there an information leak?\n- Based on the previous question, should TaAM-CPT be trained individually based on the validation performance of each dataset?\n- Section 3.5 mentions that TaAM-CPT improves the inference speed of the model; have the authors run experiments with objective metrics?\n- The authors utilize an LLM to generate prompts for auxiliary training. The method can be taken as a distillation from the LLM. Have the authors compared the performance difference between TaAM-CPT and MLLMs?\n- In section 4.4, the author discusses the feasibility of TaAM-CPT for infinite modes and categories, but I still have concerns about this. When creating prompts, it is possible to combine 2-3 labels in various ways. In extreme cases, exhaustively permuting a vast number of labels can be very time-consuming and may negatively impact the quality of the model.\n- With more modalities and labels, I am concerned that the burden of intra- and inter-modal training will increase.
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Extension to more modalities and labels:\nL201: \"When a new modality emerges, a new modality-specific prompt pool will be created, avoiding affecting the already learned other prompt pools. When a new label arises, a new class-specific prompt will be also added to each prompt pool, avoiding affecting the existing class-specific prompts either.\"\nThe paper claims that the approach is easily extendable to new modalities and new labels, but does not actually experiment with it.\nCould the authors further detail what would be the process for adding another modality or a new label? Would it be possible to keep the old prompts frozen and only train the new prompts? Would the performance on the new prompts be as good as if all prompts had been trained from scratch?\nHow will a new modality encoder with a different output dimension (e.g. 768) be introduced?\n\nChoice of Video as the \"Weak\" modality:\nL213: \"Furthermore, we find CLIP (Cherti et al., 2023) and CLAP (Wu et al., 2023b) have superior representation abilities for image and audio, compared to ViCLIP (Wang et al., 2024b) for video, specifically reflected in the zero-shot classification performance.\"\nOn which criteria do the authors claim the video modality as \"weak\"? 
A poor zero-shot performance does not necessarily mean that the modality is \"weak\" or difficult to process; it can also mean that the benchmark itself is hard.\n\nNumber of labels per query:\nIn the main paper, the stated number of labels per query is 2 for video and 3 for image and audio:\nL187: \"{Labels} indicates modality-specific labels, with a maximum of 2 for video modality, 3 for image and audio modalities.\" \nHow were these specific values determined? Why not 3 for video and 4 for image? What would be the impact of only using a single label per query? From Figure 6 it seems like some label combinations lead to captions that do not make much sense.\nMoreover, there seems to be a discrepancy in the stated number of labels per query: In the appendix, the stated number of labels per query is 2 for video and audio and 1 to 4 for image:\nL913: \"For video and audio datasets, we set the number of sampled categories to 2. For image, the number of sampled categories is set to 1, 2, 3, or 4\"\nCan the authors clarify?\n\nModality-specific labels:\nWhat if there are common labels across modalities? Can the authors confirm that they are not merged together? For example, \"crying\" could be a video, audio or image label. In that case is it present 3 times in the N labels?\n\nNumber of categories:\nWhat is the value of the number of categories (v+a+w=N) used in this paper? How many for video, image and audio? Is it a concatenation of all the labels from benchmark datasets? \n\nUnclear ablation study for Prompt Design:\nThe section \"D Ablation Study - Prompt Design\" is unclear. Can the authors clarify what is being discussed exactly?\nIt seems to be about the initial prompts dimension and their projection into a lower-dimensional space but the motivation is not explained. \nWhat does FC stand for? Fully-connected layer?
Please remind the reader about the setting adopted by TaAM-CPT in that regard, is it 512-d without transformation?\n\nMinor presentation notes:\nIn figure 1 and 2, N is used for representing the number of modalities. \nBut in the text, N represents the number of categories:\nL226: N is the total number of labels across all modalities. \n\nMinor typos:\nL040: \"classificatiot\"\nL145: \"a massive of labeled data\"\nL410: \"intergrated\"\nL464: \"110MB parameters\"" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "Simple approach:\nCompared to TaI-DPT (Guo et al., 2023) and follow-up work, the method presented in this paper is simpler as it does not involve complex multi-grained prompts.\nTaAM-CPT only uses a single prompt per category per modality which simplifies the approach.\n\nCode availability:\nThe authors of TaAM-CPT released the code for their implementation which is very valuable for reproducibility." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents TaAM-CPT, a prompt tuning approach for classifying a sample from any modality to a set of predefined categories.\nIt builds on pre-trained text-modality aligning models such as ViCLIP (text-video), CLIP (text-image) or CLAP (text-audio). \nKeeping these models frozen, TaAM-CPT tunes a set of prompt pools (one prompt pool per modality) to align to text representations directly in the representation space of the pre-trained models. The prompts are trained using a combination of inter-modal uni-directional contrastive loss and intra-modal ranking loss.\nThe paper reports performance on video classification, image classification and audio classification, but claims to be extendable to any number of other modalities." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Significance of the quantitative improvement:\nThe inter-modal unidirectional contrastive learning is the main contribution claimed by this paper. It is ablated in Table 7 and Table 10. But without a statistical analysis of the results it is hard to evaluate the significance of the improvement.\nL959: \"when all modalities are trained together, the performance of each modality can be further improved.\" This does not seem to be the case though. \nCompared to independently training each modality, training all modalities jointly does not seem significantly better (53.8=>53.7, 65.8=>65.2, 92.5=>92.7).\n\nPoor presentation and clarity:\nThe problem tackled in this paper is not properly introduced. It is not clearly explained what prompt-tuning is and why it is interesting.\nThere are many important parts of the method that are not properly explained and remain unclear. See \"Questions\" section for details." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses; I will raise my score if all concerns are well-addressed." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.
TaAM-CPT supports multiple modalities without needing labeled data, advancing universal representation learning.\n2. The model achieves state-of-the-art results across diverse tasks—zero-shot video, image, and audio classification—demonstrating its robust generalization capabilities across various modalities and datasets, a key advantage for multimodal applications.\n3. The organization of this paper is logical." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces TaAM-CPT, a scalable framework for general representation learning across unlimited modalities using only text data. Unlike existing methods that rely on large amounts of modality-specific labeled data or focus on a single modality, TaAM-CPT leverages prompt tuning, modality-aligned text encoders, and intra- and inter-modal objectives to harmonize learning across different modalities. With its flexible architecture, TaAM-CPT achieves top performance in zero-shot and classification tasks across 13 diverse datasets, spanning video, image, and audio classification, without the need for labeled data specific to each modality." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. One of my main concerns is that the proposed TaAM-CPT seems to be a combination of existing techniques, _e.g._, learnable prompts (soft prompts), Inter-/Intra-modal Learning strategies. The authors are expected to provide more discussions to better showcase their novelty and contributions. \n2. In Intra-modal Learning, the authors claim that the proposed method _'simplifies the design of the prompt and reduces the computational cost to half'_ (lines 242-243). However, the experimental section fails to adequately demonstrate the efficiency of the proposed approach. 
It would be beneficial for the authors to include quantitative metrics such as training cost or parameters to substantiate the robustness and flexibility of their approach.\n3. The clarity and simplicity of the paper's writing could be enhanced. Certain sentences were perceived as verbose and challenging to comprehend. Notably, lines 51-53 and Eq. 6 would benefit from refinement to improve their accessibility and ease of understanding." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A general representation model toward unlimited modalities without modality-specific labeled data." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024text,\ntitle={Text as Any-Modality for Zero-shot Classification by Consistent Prompt Tuning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vKL1i2p5Xr},\nnote={under review}\n}" }, "abstract": { "value": "The integration of prompt tuning with multimodal learning has shown significant generalization abilities for various downstream tasks. Despite advancements, existing methods heavily depend on massive modality-specific labeled data (e.g., video, audio, and image), or are customized for a single modality. In this study, we present Text as Any-Modality by Consistent Prompt Tuning (TaAM-CPT), a scalable approach for constructing a general representation model toward unlimited modalities using solely text data. TaAM-CPT comprises modality prompt pools, text construction, and modality-aligned text encoders from pre-trained models, which allows for extending new modalities by adding prompt pools and modality-aligned text encoders. To harmonize the learning across different modalities, TaAM-CPT designs intra- and inter-modal learning objectives, which can capture category details within modalities while maintaining semantic consistency across different modalities. 
Benefiting from its scalable architecture and pre-trained models, TaAM-CPT can be seamlessly extended to accommodate unlimited modalities. Remarkably, without any modality-specific labeled data, TaAM-CPT achieves leading results on diverse datasets spanning various modalities, including video classification (Kinetic-400/600/700), image classification (MSCOCO, VOC2007, NUSWIDE, VOC2012, Objects365), and audio classification (ESC50, US8K). The code is available at https://anonymous.4open.science/r/TaAM-CPT-0EA6." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multimodal Learning ; Prompt Learning; Zero-shot Classification;" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c5b75b116bf8cc4aa73df818db38d09b1617ac94.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/54d71d24eaa206499ccd5d489a3cb36fadd0a11c.zip" }, "title": { "value": "Text as Any-Modality for Zero-shot Classification by Consistent Prompt Tuning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vKgDbYKZrH
MOGIC: METADATA-INFUSED ORACLE GUIDANCE FOR IMPROVED EXTREME CLASSIFICATION
main
Active
recommendation systems;auxiliary information;extreme classification;metadata
unsupervised, self-supervised, semi-supervised, and supervised representation learning
5;5;5;6
4;3;2;5
3;2;3;3
2;2;2;3
2;1;2;2
5.25
3.5
2.75
2.25
1.75
0.774597
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "I did not see any flag of ethics concerns." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* I wonder whether the two-stage framework is needed when it could be feasible to train models directly with \"ground-truth\" metadata (and, of course, use the predicted metadata during inference). It would be great to add an experiment to demonstrate the benefit of this two-stage design.\n* I guess PCA is one of the main reasons why Phi-2 and LLaMA-2-7b underperform DistilBERT as oracle models, especially since the authors do not provide more details. I would suggest also reporting the performance without PCA, even if it would take a longer time.\n* It would be great to add an efficiency analysis for both training and inference.\n* I would also suggest polishing the writing and organization to ease the reading. For instance, the concepts of predicted and ground-truth metadata are not explained until the experiment section, but the term ground-truth metadata is used throughout the whole paper." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* Great improvements over different base models.\n* Both alignment and matching to approach the Oracle model are helpful.\n* Good theoretical analysis for the framework and optimization losses."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors propose MOGIC to leverage an additional oracle model and metadata for extreme classification. In the first phase, an oracle model for early fusion is trained with metadata. Then a smaller encoder with the same architecture as OAK is trained by distilling the oracle model with auxiliary losses in the second stage. The experiments are conducted on some benchmark datasets based on Wiki. The experimental results show that the MOGIC framework can improve OAK across all datasets. The authors also conducted several studies to demonstrate the effectiveness of each component. Besides, there is also a theoretical analysis of the loss used in the distillation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The experiments only use the Wiki datasets, so the framework is unproven for other domains, such as e-commerce (e.g., Amazon datasets).\n* It is unclear whether the performance gaps between baselines and MOGIC come from the framework, the pre-trained oracle model, or the ground-truth metadata used in training.\n* Lack of reports and analysis of training and inference time, even though the authors emphasize efficiency.\n* Writing and organization can be improved." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1.
The authors write \"XC tasks involve sparse query representation, and are short-text in nature\". Is this key to the contribution here? It just seems like an inaccurate/overly general statement, e.g. see BioDEX as a fairly standard XC task where the queries are anything but short.\n\n2. The paper says \"discipline\" 3 times. Is this a typo? Is it supposed to be \"disciple\"?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The authors conduct evaluation on several benchmarks and by applying MOGIC over three different XC baselines, and show consistent gains across the board. This suggests the method has some fundamental additive capacity to add to the field of XC.\n\n2. The proposed method balances quality and cost, while boosting quality over baselines. This type of improvement is encouraging to see." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes MOGIC, a method for achieving high accuracy, low latency extreme classification (XC). In MOGIC, the authors first train an expensive early-fusion oracle classifier that can access metadata as text. Subsequently, this oracle is used to regularize the training of existing XC methods like OAK. This consistently improves quality by a couple of percentage points when applied over a variety of XC methods, including OAK, DEXA, and NGAME, on a few XC datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper is not particularly easy to follow. In particular, while I appreciate the potential generality of the proposed framework, the current presentation comes at the cost of concreteness. For one, the paper needs an end-to-end example of how different components interact with a given query at training and inference time, e.g. 
how OAK works and how OAK via MOGIC works. In general, the descriptions of the task and the oracle are a lot more complicated than I think they need to be.\n\n2. Building on #1, the discussion of the oracle, saying things like \"Oracle leads to very high accuracy on the downstream XC task but is computationally expensive to deploy. It entails too high inference times for any real world application, due to the large context length\" is quite perplexing. An \"oracle\" suggests to me access to privileged information, generally not available at test time; if so, the computational cost is the least of anyone's concerns for deploying the oracle. (I imagine I simply do not fully understand this section!) When the authors discuss presenting the \"Labels\" to the oracle, I'm left unsure if they mean concatenating all 312,000 labels (is this why the context is so long?) or the ground truth labels (why is that long, in that case?). Overall, the discussion of the oracle and the overall pipeline is fairly opaque.\n\n3. Given the increased training-time cost, and the complexity of the method (at least as currently presented notationally, see weakness #1), gains of 1-2 percentage points with a fairly standard intuition (i.e., distilling an expensive oracle) may not be the most rewarding tradeoff, in a way that weakens the core contribution of this work." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- please see section above" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The motivating problem (XC) has clear practical applicability, and the proposed approach (MOGIC) appears to be novel. Due to the paper's barely-fair organization and poor writing (including many non-grammatical and/or ambiguous statements that are difficult & time-consuming to parse), it is difficult to judge the paper's likely significance and impact." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper's focus is Extreme Classification (XC) tasks, in which the goal is to achieve high-accuracy within low-latency constraints. To address these challenges, the authors introduce MOGIC, which is a novel approach for metadata-infused oracle guidance for XC tasks. MOGIC is a 2-step process: first, it trains an early-fusion oracle classifier with access to both query- and label- side ground-truth textual metadata; then this oracle is used to guide the training of any existing memory-based XC student model via regularization." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper has two main weaknesses: (1) it lacks any latency numbers for a problem (XC) in which latency is critical, and (2) it is extremely difficult to read and comprehend due to its poor writing (e.g., long, ambiguous sentences, and dozens of missing articles) and a no-better-than-fair organization. All of this makes it extremely challenging to judge the paper's contributions and expected significance/impact. \n\n1.
ABSTRACT\n- given the centrality of latency in XC, please add a sentence that summarizes MOGIC's latency and how it compares to existing approaches\n- whenever you make claims on accuracy gains (eg, line 24), please quantify the claim as follows: \"MOGIC improves P@1 by X%, from Y% to Z%\". To be informative, the X% value must be judged in the context of Y% and Z%\n\n2. Introduction\n- please intuitively define/introduce, ideally with an illustrative example, all the key concepts of the paper: early/late-stage fusion, metadata-infused oracle, memory-based/free models, query-/label-side ground-truth, etc\n- please restructure and re-write this section along the lines of the one in [Mohan et al, 2024]. That paper's introduction is easy to read and assimilate, and, as such, a great example to emulate. After I read Mohan's intro, it became much easier to understand yours.\n- due to the odd formatting of the first column in Table 1, you should improve its readability by drawing the horizontal lines for each row. You should also (i) add the relevant query- and label- side meta-data [see lines 88-89], (ii) intuitively explain (for a general audience) the entire process, rather than relying on the abstract row-names for rows 2-4, and (iii) explain the reason behind the mistakes of the four approaches, with an emphasis on OAK, Oracle, and MOGIC (in particular, why are the errors of MOGIC disjoint from those of the Oracle?)\n- ideally, Table 1 should have an additional row for MOGIC( NGAME ), which is listed in the abstract as a major contribution. \n- ideally, there should be an additional table (similar to Table 1) that intuitively explains the differences between the results with early- vs late- fusion\n- lines 93-102 are just \"turning Table 1 into prose,\" without adding any insights or intuitions.
As such they should be replaced with insights/intuitions (or simply removed)\n- lines 139-144: to increase the readability of this long, reference-rich sentence, you should replace the comma before \"text-based\" with a semicolon; then re-write the second part by bringing tabular data first, as it has only two references (vs 1+ lines of them)\n- the caption of Table 2 is too long and too in-depth; move most of it to the main narrative\n- please replace the Low/High/VeryHigh labels in the last column by actual numbers; eg, O( ms ). \n- Figure 1 is hard to read and interpret: (i) if the green box is \"the architecture of the OAK disciple\" then why does the horizontal-bracket labeled \"Disciple\" also cover the Encoder fed by the \"Label: Jellybean\" rectangle?, (ii) overall, you should add a new paragraph that provides an intuitive, step-by-step explanation of what takes place in Fig 1. \n\n3. Experimental Results\n- in all tables, please use BOLD for the best result and UNDERLINED-ITALIC for the runner-up\n- all tables/figures should include the results on all four datasets (not only a subset of them); at the very least, they should be added to the APPENDICES \n- lines 338-342: provide references for both \"the plethora of papers\" and \"the few of them that offer ground-truth data\"\n- Table 3: please provide the details on how EXACTLY you computed all values in the last four columns; for example, you have an \"Avg Q/L\" of 2.11, but 693K/312K = 2.22; similarly, in the first row \"the nmb of memory items is smaller than the nmb of training queries\", but it is larger in the second row; please explain why?\n- for each of the four datasets, please select-and-justify a reasonably-ambitious target-latency (eg, answering N queries per second); then create a table (similar to Table 4) in which you compute the actual latencies for all the various approaches\n- Table 4: please discuss the results where MOGIC is outperformed, especially when it loses to ANCE by 7.4% (50.99%
vs 43.60); similarly, why does OAK outperform MOGIC on the same metric/dataset?\n- Table 5: why does DistilBERT, by itself, heavily outperform the other two oracles, while within MOGIC the differences are minimal?\n- line 460: please quantify \"more powerful and larger oracles;\" the reader should have immediate access to this info in your paper\n- the entire 4.2 section reads like \"a bag of tables and results;\" please re-organize it to emphasize the main results (eg, as 4.2.1, 4.2.2, etc) \n\n\n- last but not least, in the robustness analysis\n (i) please show numbers for MOGIC's competing approaches, too\n (ii) please discuss in depth the sources/hypotheses for this robustness: is there any redundancy in the data sources? it is highly-counterintuitive that using only 40% of the data barely impacts MOGIC, while significantly impacting the Oracle (Table 9)\n (iii) same for noise: how could MOGIC be barely impacted if 60% of the data is incorrect? also, when the impact on the oracle is greater than 50%, what factors contribute to \"fading the impact of noise\" on MOGIC by about an order of magnitude?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Why did you only present results on the LF-WikiSeeAlsoTitles-320K dataset when discussing MOGIC's generalizability across different disciple models?
Does MOGIC still demonstrate generalizability on other datasets?\n2. In the theoretical proofs in Appendix A, the loss function is assumed to be a decomposable binary loss rather than the non-decomposable triplet loss. Does the conclusion hold under triplet loss circumstances? Additionally, the alignment loss defined in line 797 has an asymmetric form. If that is not a typographical error, why is the binary loss of (xi*, zi, yi) calculated inside the binary loss of (xi, zi*, yi)?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The proposed method finds an efficient way to distill the oracle model, enhancing the disciple model’s ability to embed extreme classification data while maintaining low inference latency. \n2. The authors conducted thorough experiments to validate the effectiveness of the proposed method.\n3. Comprehensive implementation and experiment details underscore the soundness and practical viability of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces an oracle guidance framework for enhancing extreme classification tasks by integrating metadata through early-fusion techniques. By distilling from the oracle model, the disciple model can generate high-quality embeddings while maintaining low inference latency. Experimental results on benchmark datasets show that MOGIC consistently enhances performance on the XC task, surpassing state-of-the-art methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. In the robustness analysis section, the authors mention that the MOGIC framework is more robust to Oracle models.
Further analysis should be conducted on this phenomenon to verify that the robustness stems from the proposed MOGIC framework rather than from the OAK method itself.\n2. The theoretical analysis of the oracle-guided losses is not aligned with the experimental implementation. Please see question 2 for details.\n3. The presentation of the paper could be improved. There are many writing errors in the paper that hinder understanding, e.g., the unfinished caption of Table 10." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Oracle guided enhancement of memory representations improves task performance" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024mogic,\ntitle={{MOGIC}: {METADATA}-{INFUSED} {ORACLE} {GUIDANCE} {FOR} {IMPROVED} {EXTREME} {CLASSIFICATION}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vKgDbYKZrH},\nnote={under review}\n}" }, "abstract": { "value": "While retrieval-augmented classification and generation models significantly benefit from the early-stage fusion of high-quality text-based auxiliary metadata, often called memory, they suffer from high inference latency and poor robustness to noise. In classification tasks, particularly the extreme classification (XC) setting, where low latency is critical, existing methods incorporate metadata for context enrichment via an XC-based retriever and obtain the encoder representations of the relevant memory items and perform late-stage fusion to achieve low latency. With an aim of achieving higher accuracy within low latency constraints, in this paper, we propose MOGIC, an approach for metadata-infused oracle guidance for \nXC tasks. In particular, we train an early-fusion oracle classifier with access to both query- and label-side ground-truth metadata in the textual form.
The oracle is subsequently used to guide the training of any existing memory-based XC disciple model via regularization. The MOGIC algorithm, when applied to memory-based XC disciple models such as OAK, improves precision@1 by 1-2% and propensity-scored precision@1 by 2-3% on four standard datasets, at no additional inference-time costs to the disciple. We also show the feasibility of applying the MOGIC algorithm to improve the performance of state-of-the-art memory-free XC approaches such as NGAME or DEXA, demonstrating that the MOGIC algorithm can be used atop any existing XC-based approach in a plug-and-play manner. Finally, we also show the robustness of the MOGIC method to missing and noisy metadata settings." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "recommendation systems", "auxiliary information", "extreme classification", "metadata" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/5c88f112db29e46ac7e60cc06e19dc9d92925b61.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "MOGIC: METADATA-INFUSED ORACLE GUIDANCE FOR IMPROVED EXTREME CLASSIFICATION" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vL9t9tpKli
Latent Radiance Fields with 3D-aware 2D Representations
main
Active
3D Gaussian Splatting;3D-aware Representation
applications to computer vision, audio, language, and other modalities
5;5;5;6
5;4;4;4
3;2;3;4
2;3;2;3
3;3;3;3
5.25
4.25
3
2.5
3
-0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I'm willing to raise my score if the questions below are answered:\n\n1. How is \\lambda_{ij} computed in equation (6), section 4.1? Basically, how is the average pose error computed, and how does it contribute to the weight \\lambda_{ij}?\n2. Since equation (6) becomes a multi-objective optimization, does this change largely increase the training convergence time? Did you experience any convergence issues?\n3. What is the inference speed of this pipeline?\n4. The idea is kind of similar to CaesarNeRF: Calibrated Semantic Representation for Few-shot Generalizable Neural Rendering: https://haidongz-usc.github.io/project/caesarnerf, which also uses calibrated image features in each 2D latent; could you cite and compare?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "There are many innovations in this work, but I think the best part is the introduction of 3D awareness into the 2D representation training. In this part, the correspondence-aware autoencoding in particular is the key to the success of this overall idea." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This submission tries to resolve the problem of 3D reconstruction in the latent space via a 3-stage idea.
The first stage focuses on improving the 3D awareness of the VAE's encoder via a correspondence-aware constraint on the latent space; the second stage builds a latent radiance field (LRF) to represent 3D scenes from the 3D-aware 2D representations; the last stage further introduces a VAE-Radiance Field (VAE-RF) alignment method to boost the reconstruction performance. The results generated by this pipeline outperform those generated by many state-of-the-art methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are still some weaknesses that prevented me from giving a higher score, especially regarding the details of how each component of the pipeline is computed. Please see my questions below. In addition, some related references are missing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Could the author please provide further clarification on the motivation and applicable scenarios for this work? \n- In the experimental section, it would be beneficial to employ fairer comparison methods; using low-resolution training for comparison models is not advisable. \n- Please consider using more recent models to compare the text-to-3D generation capabilities of this work."
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The author is committed to integrating 3D awareness into the 2D latent space, and the results show a significant degree of success in this endeavor. Additionally, using 3D Gaussian Splatting (3DGS) in modeling the latent space is an intriguing idea." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The author introduces pixel-to-pixel correspondences across different viewpoints to help the VAE learn a 2D latent space with 3D awareness. Using 3D Gaussian Splatting (3DGS), they perform 3D reconstruction and rendering in the latent space to obtain 2D latent representations from specified camera poses. The rendered results are then decoded back into image space by the decoder to obtain RGB images.\n\nExperimental results demonstrate that the resulting 2D latent space possesses a certain level of 3D perception capability and outperforms existing methods when decoding to higher-resolution images." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The motivation of this paper is somewhat unclear. Is the author aiming to improve reconstruction accuracy, enhance rendering speed, reduce storage space, or achieve some other application? It appears that none of these goals have been fully addressed.\n\n**Reconstruction Accuracy**: When training the comparison methods, the author down-scaled the RGB images to the same resolution as the latent representation before training, which may be considered unfair. The VAE used by the author has been exposed to high-resolution images, while the comparison methods have not. 
This discrepancy could reduce the reconstruction performance of the comparison methods and impact the paper's credibility.\n\n**Rendering Speed**: In Section 5.1, the author reports the training times for Stage 1 and Stage 3, but not for Stage 2 or for inference time. Therefore, it is challenging to conclude that the proposed method has a faster rendering or training speed compared to other methods.\n\n**Storage Space Reduction**: In Section 5.1, the author mentions the need to \"train the same number of latent 3D Gaussian splatting scenes... for Stage-III,\" indicating that Stage 2 is scene-specific. Compared to 3DGS, this does not seem to save much storage space.\n\n**Other Applications**: In Section 5.3, the author suggests that their work can be used for text-to-3D generation. However, the two methods used for comparison are relatively outdated. It is recommended to compare the method with more recent approaches, such as IPDREAMER [1], to make the claim more convincing.\n\n[1] Zeng, Bohan, et al. \"Ipdreamer: Appearance-controllable 3d object generation with image prompts.\" arXiv preprint arXiv:2310.05375 (2023)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How to handle the scale mismatch between COLMAP and latent space? 
Please clarify: 1) the exact mapping strategy from pixel to latent correspondences; 2) how multiple pixel correspondences within one latent cell are aggregated\n- What's the correspondence filtering pipeline? In particular: 1) thresholds used for COLMAP matching 2) any additional filtering criteria in latent space 3) how outlier correspondences are handled.\n- During VAE-RF alignment (Section 4.3), how to: 1) balance the training/novel view losses 2) prevent overfitting during decoder fine-tuning.\n- Regarding the resolution setup, why choose this specific resolution comparison protocol? Would the method's advantages hold at equal resolutions?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper follows a standard pipeline structure addressing latent space 3D reconstruction. The method section breaks down into three components: correspondence-aware encoding, latent radiance field construction, and VAE alignment. The ablation study provides basic validation of these components, though more comprehensive analysis would be beneficial.\n- While building heavily on existing techniques, the paper demonstrates competent engineering in combining different elements into a working system. The adaptation of correspondence constraints and 3DGS to latent space shows reasonable technical implementation. The provided implementation details outline the basic approach.\n- The evaluation includes tests on multiple datasets (MVImgNet, NeRF-LLFF, MipNeRF360, DL3DV-10K), attempting to demonstrate applicability across different scenarios. While the cross-dataset evaluation has limitations, it provides basic evidence of generalization capability. 
The inclusion of both novel view synthesis and text-to-3D generation shows the method's potential utility, though more thorough evaluations are needed.\n- The method functions without per-scene refinement modules, which could be advantageous compared to some previous approaches." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a framework for constructing radiance field representations in latent space, aiming to bridge the domain gap between 2D feature space and 3D representations. The authors propose a three-stage pipeline: (1) a correspondence-aware autoencoding method that enforces 3D consistency in latent space through correspondence constraints, (2) a latent radiance field (LRF) that lifts these 3D-aware 2D representations into 3D space, and (3) a VAE-Radiance Field alignment strategy that improves image decoding from rendered 2D representations.\n\nThe key technical contribution is the integration of 3D awareness into 2D representation learning without requiring additional per-scene refinement modules. The authors adapt the 3D Gaussian Splatting framework to operate in latent space, using spherical harmonics to model view-dependent effects. They demonstrate their method's effectiveness on both novel view synthesis and text-to-3D generation tasks across various datasets including MVImgNet, NeRF-LLFF, MipNeRF360, and DL3DV-10K. The authors claim their approach is the first to achieve photorealistic 3D reconstruction performance directly from latent representations while maintaining cross-dataset generalizability.\n\nThe work represents an attempt to make latent 3D reconstruction more practical by addressing the geometric consistency issues in existing approaches. The framework is designed to be compatible with existing novel view synthesis and 3D generation pipelines without requiring additional fine-tuning." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper fails to provide compelling justification for operating in latent space. While previous works like Latent-NeRF (for text-to-3D generation) established initial groundwork, this paper does not clearly demonstrate additional benefits of its approach. The motivation for operating in latent space remains questionable. The paper shows modest improvements in PSNR/SSIM metrics but doesn't address fundamental questions: What are the computational advantages over image-space methods? How does memory consumption compare? Why is the added complexity of latent space operations justified? The authors should conduct a thorough efficiency analysis, measuring training time, inference speed, and memory usage against image-space baselines. Without such evidence, the practical value of the latent space approach is hard to justify.\n- Section 4.1's correspondence mechanism has several fundamental issues. Most critically, the paper fails to address the scale mismatch between COLMAP's pixel-level correspondences and the VAE's latent space. Given that the VAE operates at a lower resolution (likely 8x or 16x downsampled) with larger receptive fields, how are pixel-level correspondences meaningfully mapped to latent features? This mapping is non-trivial: a single latent code typically corresponds to a large receptive field in pixel space, making precise correspondence matching questionable. The paper should answer: How are multiple pixel correspondences within one latent cell handled? How does the receptive field size affect correspondence accuracy? Additionally, basic details are missing: the correspondence filtering criteria, quality metrics, and robustness to matching errors. The use of L1 distance for latent features (Eq. 6) needs justification, especially given the coarse nature of latent correspondences. 
These technical gaps raise serious concerns about the method's fundamental soundness.\n- The use of spherical harmonics in latent space (Eq. 8) is puzzling. Given that the features are already in a learned latent space, why introduce SH basis functions? A direct learnable decoder or simpler view-dependent representation might suffice. Similarly, the VAE-RF alignment stage seems unnecessarily complex - the authors may quantify the alleged distribution shift and explore simpler alternatives. These design choices add complexity without clear benefits.\n- The experimental setup has a fundamental flaw: image-space methods are handicapped by low-resolution inputs while the proposed method has access to high-resolution data. This creates an artificial advantage for the proposed method. A fair comparison requires either: testing at matched resolutions, or demonstrating specific benefits under computational constraints. The ablation study skips crucial experiments on correspondence quality, loss function components, and architectural variations. These missing comparisons make it difficult to assess the true value of each component.\n- The paper sidesteps important practical concerns. Where are the failure cases? How does the method handle challenging scenes with varying illumination or complex geometry? The text-to-3D generation results lack comparisons with current state-of-the-art methods. The claim of \"photorealistic reconstruction\" needs validation through proper user studies or established perceptual metrics. Testing on more diverse, challenging scenarios would better demonstrate real-world applicability." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I think the strict view-inconsistency can not be solved fundamentally due to the emplyment of VAE Decoder. But it coule be possible to showcase more view consistent results and evlaute the 3D consistency of proposeld methods. \n\nThe author provide more details about the experiment settings, especially the view sampling strategy and the number of views used in the experiment. I hope see whether the proposed method can be applied to a sparse-view setting." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Strengths*:\nWith the proposeld framework, this paper enhances the 3D consistency of 2D latent representations as well as effectively mitigated the gap between the 2D latent space and the natural 3D space." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper target at Latent 3D reconstruction, and address the domain gap between 2D feature space and 3D representations. They proposed propose a novel framework that comprise (1) a correspondence-aware autoencoding method, (2) a latent radiance field (LRF), and (3) a VAE-Radiance Field (VAE-RF) alignment strategy." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Weaknesses*:\n\n1. Compared with feature-GS, it is a good improvement to add a correspondence-aware constraint during VAE encoder finetuning to improve its 3D awareness. However, this approach still cannot guarantee strict multi-view consistency of the encoded multiview features. As a result, after constructing the LRF, there may be a blurred radiance field with significant detail losses. Although the LRF is 3D consistent, the final decoded features may still exhibit noticeable flickering effects due to the lack of view consistency of decoder.\n\n\n2. I think this paper still needs optimization during the LRF stage instead of using a total feed forward method. When compared with 3DGS and Mip-Splatting, the authors only train them in latent space resolution (8 times lower than the image resolution), which yields very bad visual results. I suggest that the authors consider training the competing methods at full resolution. This paper may not work better than Mip-Splatting when trained with full resolution and very dense view, but it would be interesting to see if it outperforms Mip-Splatting in a sparse-view setting. This might be achieved because of generation capability of Stable Diffusion VAE used in this paper.\n\n3. The author didn't provide a detailed explanation of experiment settings. For example, I wonder how views are sampled in the experiment during training. And how many views are used as input and evaluation repectively. This is important for me to fully evaluate this paper. \n\n4. Object change in final rendering. As shown in Fig 1, the building in the final rendering is different from the ground truth image. One is in red, the other one is in white. This lead to my concern of identity preserving capability of purposed method. I think this is a problem that needs to be addressed." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "To our knowledge, this is the first work demonstrating that radiance field representations in the latent space can achieve decent 3D reconstruction performance across various settings including indoor and unbounded outdoor scenes." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024latent,\ntitle={Latent Radiance Fields with 3D-aware 2D Representations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vL9t9tpKli},\nnote={under review}\n}" }, "abstract": { "value": "Latent 3D reconstruction has shown great promise in empowering 3D semantic understanding and 3D generation by distilling 2D features into the 3D space. However, existing approaches struggle with the domain gap between 2D feature space and 3D representations, resulting in degraded rendering performance. To address this challenge, we propose a novel framework that integrates 3D awareness into the 2D latent space. The framework consists of three stages: (1) a correspondence-aware autoencoding method that enhances the 3D consistency of 2D latent representations, (2) a latent radiance field (LRF) that lifts these 3D-aware 2D representations into 3D space, and (3) a VAE-Radiance Field (VAE-RF) alignment strategy that improves image decoding from the rendered 2D representations. Extensive experiments demonstrate that our method outperforms the state-of-the-art latent 3D reconstruction approaches in terms of synthesis performance and cross-dataset generalizability across diverse indoor and outdoor scenes. To our knowledge, this is the first work showing the radiance field representations constructed from 2D latent representations can yield photorealistic 3D reconstruction performance." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "3D Gaussian Splatting", "3D-aware Representation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e284282e8c291f0b5a555ef11e96d5703a9ad03a.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Latent Radiance Fields with 3D-aware 2D Representations" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vM4CdVScT8
Quantum Entanglement Trees: Optimizing Quantized Matrix Quantization via Element Replacement and Residual Clustering
main
Active
Matrix quantization;LLM Weight Quantization;KV Cache Quantization;Residual Quantization
infrastructure, software libraries, hardware, systems, etc.
3;3;5;5
5;4;2;3
2;2;2;3
2;2;2;1
1;2;2;3
4
3.5
2.25
1.75
2
-0.894427
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I am a bit confused about the overarching intent when limiting the occupied space to the original size of the matrix \"under the condition that the quantized matrix occupies the same memory space\". If the intent is to compress the matrix, one would believe that the output artifacts (quantized data + codebook + residuals) should occupy less space than the original in order to justify it being compression. The only other reason to compress would be for computational running time benefits which is not the metric being optimized in the paper." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is well written and wasy to understand.\n- The central idea is a simple, effective and intuitive technique to increase the efficacy of current quantization methods.\n- They showcase their method on various modern data domains of interest (datasets, LLM layers and KV caches) which are all active and pertinent areas of research.\n- Their method outperforms sensible baselines and works on cases whether other methods fail to work (square matrices compressed using OPQ etc)" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a method to compress matrices by inducing local order (via swapping) amongst matrix elements using an iterative technique and then leveraging known product quantization techniques to compress the matrix. They further quantize the codebook and residual matrix to reduce their space requirements." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "# Major\n- I am concerned about the technical novelty since the notion of element re-ordering for better compression is well known and has been suggested in various works [1-2]. I am sure there are more references that can be found on doing a deeper survey. \n- Moreover, the key contributions which are \"Abstracting the problem\", \"Designing the QET algorithm\" are reasonable expectations of any technical work. \n- \"Additional Optimizations\" are quite straightforward applications of RTN quantization on the artifacts involved in most compression techniques. 
\nIt is my opinion that the differential addition of the work compared to known concepts and literature does not meet the bar to publish.\n\n# Minor\n- The name of the technique is quite unrelated to the content of the paper. Even the notion of trees is simply due to the construction of recursive partitioning to the best of my understanding.\n# References\n1 Olken, Frank, and Doron Rotem. \"Rearranging data to maximize the efficiency of compression.\" Proceedings of the fifth ACM SIGACT-SIGMOD symposium on Principles of database systems. 1985.\n2 Chhugani, Jatin, et al. \"Compressing large boolean matrices using reordering techniques.\" Proceedings 2004 VLDB Conference: The 30th International Conference on Very Large Databases (VLDB). Morgan Kaufmann, 2004.\n\n# Typos\n- Missing space on L139" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. What's the clustering algorithm used in this method? What's the reason for such choice?\n2. Typo. At the end of Line 199, 'as shown in Algorithm 2' -> 'as shown in Algorithm 1'." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. 
This paper studies an important problem, i.e., matrix quantization, since reducing the memory consumption of LLMs contributes to saving resources.\n2. The authors give theoretical proofs about lower loss and time complexity.\n3. The experiments show consistent gains over baselines." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on the matrix quantization problem. The authors first formulate the Quantization Error Minimization (QEM) problem, and then propose Quantum Entanglement Trees (QET) to address the QEM problem, with two additional optimizations. Experiments show the effectiveness of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The quality of presentation is low. Firstly, the first two paragraphs of the Introduction and the whole Conclusion largely repeat the Abstract. Secondly, the authors did not include a section about related work, so I cannot fairly evaluate the contribution of this paper over current developments.\n2. Memory of the Indicator Map. This paper attempts to force local orderliness, but this operation needs additional space to store the Indicator Map. Specifically, the additional space at each layer is always half of the original matrix. In this case, although the matrix is quantized in the end, this space is not negligible.\n3. Experimental baselines are too old, with the newest one published in 2014. Honestly, this direction is not my specialty, so I do not know if there is more recent related work to compare against.\n4. The Residual Quantization Optimization essentially does the quantization-dequantization process twice, doubling the algorithm complexity and accumulating the errors."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- What is the purpose of including both the penalty term and the hard constraint in the formulation of the QEM problem?\n- Which method is used for the RTN algorithm (e.g. absmax quantization)?\n- Please check your code release, currently when running your provided QET example as-is, the following error occurs: ValueError: Cannot ask kmeans2 for 0 clusters (k was 0)." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Matrix quantization is an important practical problem with many uses in modern applications of machine learning, such as quantizing LLM weights.\n- The proposed method is conceptually simple and is easy to implement in practice. The method achieves a better compression ratio than PQ with fixed memory usage." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new method for matrix quantization based on improved local ordering of the matrix elements. The method is based on recursively rearranging adjacent elements of the matrix, and then applying product quantization to the resulting matrix." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I do not see what the purpose of proposing the Quantization Error Minimization (QEM) problem is since it just adds an unnecessary penalty term to the obvious problem formulation of minimizing the MSE subject to a memory constraint. It would be helpful for the authors to clarify this point. The specific QEM formulation with the penalty term is not used anywhere, including in the derivation of the QET algorithm or any of the experiments.\n\nThe proposed QET algorithm is based on a local ordering of the matrix elements before applying product quantization (PQ). Naturally, this increases both the dequantization and the quantization time compared to PQ, but this is not evident in the main paper (you can see it in Figure 4 in the Appendix). Theorem 2 (which states that the quantization time decreases) essentially states that for a given memory usage, you can afford to use fewer centroids, and therefore the clustering will be faster. This is misleading: you should state complexities depending on the number of centroids -- if your method can achieve the same compression with fewer centroids, that is a separate point. More importantly, Theorem 3 (which states that dequantization time increases) as well as experiments on the (de)quantization time are hidden in the Appendix.\n\nEssentially, by spending more time on both quantization and dequantization, you are able to achieve better compression. However, most applications, including the motivating applications mentioned in the paper such as LLM weight quantization and vector databases, care greatly about the dequantization time. If you care about pure compression, you could afford to run a more comprehensive optimization method. The proposed method is much slower for dequantization and is also less suitable for implementation on hardware accelerators. 
Finally, the two proposed tweaks over vanilla QET are very straightforward and you can also apply them to PQ.\n\nIn the experiments, you should not compare against OPQ and LOPQ as they are not really suitable for this problem. Instead, you should consider comparing against e.g. the method proposed in the SqueezeLLM paper. It would also be ideal to demonstrate that your method is suitable for the problems you use as motivation, such as LLM weight quantization, in practice. However, what you'll likely find is that since e.g. the codebook should fit into the L1 cache on GPUs, methods based on PQ are not very suitable for the problem (which is why you don't see them used in practice). Finally, you should include the experiments on the dequantization time in the main paper.\n\nTo improve the presentation, you should present the novel components of the algorithm more clearly by writing the description of Algorithm 1 (and its pseudocode) such that it refers to PQ where appropriate or uses it as a subroutine. You should also include a related work section where you discuss the relationship of your method to the prior work more clearly. Further minor points on presentation:\n - $\\Delta$ in Theorem 2 is only explained in the Appendix.\n - It is confusing that LOPQ is in the legend of Figure 2 but does not appear in any of the figures.\n - Use \\citep instead of \\cite for proper parenthetical citations.\n - I would reconsider the name of the method, as the link to quantum entanglement is tenuous." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "There is no way that this paper can be revised to be published. It should be rewritten entirely." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "S1. The paper takes into account the size of the codebooks in their evaluation, and proposes to compress the codebooks as well (using scalar quantization)\n\nS2. The method can be seen as a way to replace the quantization of vector components with the compression of permutation matrices." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a way to compress sets of vectors (weights of linear layers or token embeddings). \nThe method relies on a re-ordering of vector dimensions prior to applying product quantization, possibly followed by vector quantization. \nThe authors demonstrate that the method improves the accuracy compared to other PQ variants." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1. The explanation of the method is imprecise and intuitive to a point that is impossible to follow. Section 3.1 does not describe in which dimension the \"elements\" are compared, the mulitple matrices are not introduced, the \"regularity of matrix elements\" is not defined. Section 3.2 re-states the algorithm in a way that is not much more intelligible -- what does \"based on their sizes\" mean? In algo 2, lines 5-13 why is i divided by 2, line 16 is a for loop, lines 18-19 are hand-waivy. In the \"theorems\" \\Delta is not introduced. \n\nW2. There is no justification of why the method would be working. 
Shuffling the vector dimensions independently means that elements of different dimensions are used to train the clustering, destroying the data distribution of a given PQ subvector. \n\nW3. The comparison with the SOTA uses relatively weak baselines -- PQ variants without residuals, while it is well known that quantization with residuals works better. \n\nW4. The comparison is in terms of MSE and MAE, not end metrics. For example, for the KV cache compression the resulting attention matrix could be compared in MSE with the exact attention result -- and ideally be tried end-to-end. The text analysis is not much clearer." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce Quantum Entanglement Trees, an algorithm that optimizes matrix quantization by reordering elements to exploit local orderliness, significantly enhancing quantization in the weights of LLMs and KV caches." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024quantum,\ntitle={Quantum Entanglement Trees: Optimizing Quantized Matrix Quantization via Element Replacement and Residual Clustering},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vM4CdVScT8},\nnote={under review}\n}" }, "abstract": { "value": "The matrix quantization entails representing matrix elements in a more space-efficient form to reduce storage usage, with dequantization restoring the original matrix for use. We formulate the Quantization Error Minimization (QEM) problem as minimizing the distance between a matrix before and after quantization, under the condition that the quantized matrix occupies the same memory space. Matrix quantization is crucial in various applications, including Large Language Models (LLMs) weight quantization, vector databases, KV cache quantization, graph compression, and image compression. 
Recent advancements in LLMs, such as GPT-4 and BERT, have highlighted the importance of matrix compression due to the large size of parameters and KV cache, which are stored as matrices. \n\nWe propose Quantum Entanglement Trees (QET) to address the QEM problem by leveraging the local orderliness of matrix elements, involving iterative element swapping to form a locally ordered matrix. This matrix is then grouped and quantized by columns. To enhance QET, we introduce two optimizations: Residual Quantization Optimization (RQO), which reduces MSE by quantizing the residuals between the original and dequantized matrices, and Codebook Quantization Optimization (CQO), which reduces storage requirements by compressing the codebook itself.\n\nExperimental results demonstrate that QET can effectively reduce MSE to 5.05\\%, 13.33\\%, and 11.89\\% of the current best method on the LLM dataset, K cache, and V cache, respectively.\nOur contributions include the abstraction of the QEM problem, the design of the QET algorithm, and the proposal of two optimizations to improve accuracy and speed." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Matrix quantization", "LLM Weight Quantization", "KV Cache Quantization", "Residual Quantization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/432bf440c0244eaf7933a8a662925c16e85e8c83.pdf" }, "presentation": null, "primary_area": { "value": "infrastructure, software libraries, hardware, systems, etc." }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Quantum Entanglement Trees: Optimizing Quantized Matrix Quantization via Element Replacement and Residual Clustering" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vM94dZiqx4
Long-tailed Adversarial Training with Self-Distillation
main
Active
Adversarial Robustness;Adversarial Training;Long-Tail Distribution Learning
alignment, fairness, safety, privacy, and societal considerations
5;6;6;6
4;4;3;4
3;3;3;3
2;3;3;3
2;2;4;3
5.75
3.75
3
2.75
2.75
-0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see above." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The observation that adversarial training further harms minority classes compared to normal training is interesting and may inspire further works. \n\n2. The performance gain under the specific experiment setting is good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper improves model robustness against adversarial attacks in long-tailed learning. \n\nThe authors observed that adversarial training, compared with normal training, further sacrifices minority class accuracy over majority classes. This observation is interesting and quite intuitive after some thoughts. It could be inspiring to further works. \n\nBased on this observation, the authors proposed to use stronger constrains than previous methods to balance the accuracy among different classes. Specifically, previous methods uses balance-aware loss functions to balance accuracy in long-tailed adversarial training. The authors added resampling (when training the teacher model), another common technique to handle data imbalance, on top of those previous works. The overall method is limited in technical novelty. 
\n\nExperimental results show good improvements over previous methods. The experimental settings raise some concerns to me. \n\nOverall, I think this is a marginal paper. I lean toward accepting primarily due to the good numerical results shown under the specific experiment setting used by this paper." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The technical novelty is limited. The two methods used to handle long-tail learning in this paper are resampling (when training the teacher model) and a balance-aware loss function proposed by previous works. \n\n2. The proposed method has a noticeable limitation: It requires to train a teacher model on balanced data. This greatly limits its practical application. In most cases, the powerful large model used as teacher models (e.g., foundation models downloaded from Hugging faces) are trained on real-world datasets which are typically unbalanced. It is very hard to get an off-the-shelf model that is \"balanced\". This requires the users to train the balanced model on their own. As a training algorithm, this is considerable complexity overhead. Moreover, it is not discussed in the paper how the teacher model could affect the distillation. Does the re-sampling method matter? Does the size of the teach model matter? Does the re-sampled dataset has to be perfectly balanced or it could have some imbalance? If it is the later case, how much imbalance it could have?\n\n3. Table 5 is not interpretable. Which part is CIFAR10 and which part is CIFAR100?\n\n4. The experiment setting is limited. The largest imbalance ratio is 50 for CIFAR10 and 10 for CIFAR100 and ImageNet. However, on both datasets, it is common to experiment with much stronger imbalance, with the ratio up to 100 [1]. Experimenting with stronger imbalance ratio is important since it is a more challenging case for re-sampling based methods. 
Under the stronger ratio, I assume the training of the teacher model would face difficulty because the minority classes have few samples and simple up-sampling might not help too much. \n\n[1] Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Please see weaknesses and clarify the imbalance ratio better, and conduct statistical tests for comparison with other methods.\n- It’s unclear why the authors report the AutoAttack accuracy on the whole dataset but not for the tailed classes. I would like to see the AutoAttack accuracy on the tailed classes for Figure 1 and all relevant tables." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The key strengths of the paper are:\n- The paper is mostly well written.\n- The authors conduct comprehensive experiments, as well as provide theoretical motivations.\n- While the improvement methods use standard techniques like balanced softmax and indirect gradient matching, it's interesting to see that a simple prior distillation step enhances performance." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors first provide a theoretical analysis explaining why adversarial training leads to robustness degradation in tail classes. Motivated by these findings, they propose a simple empirical solution to mitigate this issue. Specifically, they employ a distillation-based approach in which a balanced version of the long-tailed dataset is used to adversarially train a teacher model. This teacher model is then used to train a student model on the original long-tailed dataset. The authors conduct experiments across multiple datasets and various settings, such as with and without augmentations, different imbalance ratios, and more." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The concept of Imbalance Ratio in the paper is not well explained. While I understand how it applies to a binary setup, it’s unclear what an imbalance ratio of 50 means for CIFAR-10-LT compared to an imbalance ratio of 10. I suggest that the authors include the number of samples in the least populated class for clarity.\n- Related to this, the core assumption is that training on a balanced dataset will enhance feature learning for the tail classes. However, the proposed method may negatively impact overall performance if the long-tail ratio is extreme (e.g., 5 samples per class). This could lead to ineffective training of the teacher network, and the distillation loss may then degrade performance. I would be interested to hear the authors' thoughts on this.\n- Additionally, given the smaller sample size (especially for tail classes), it would be helpful to report the mean along with error metrics, such as standard deviation, and to conduct statistical tests to verify that the proposed method significantly improves performance compared to other methods." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The authors effectively justify the need for adversarial training methods tailored to long-tailed distributions through their theoretical analysis (Section 3.2) and empirical evaluations (Section 3.3).\n\n- An intriguing result is the observation that adversarial training unexpectedly reduces robust performance on tail classes compared to non-adversarial methods when addressing long-tailed distributions.\n\n- The experimental results demonstrate substantial improvements in both clean and robust accuracy on tail classes, outperforming existing adversarial training (AT) methods such as TRADES, MART, and AWP, as well as long-tailed AT methods like RoBal, REAT, and AT-BSL.\n\n- These performance gains are consistently observed across multiple datasets and architectural frameworks, highlighting the method's potential usability." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a method to improve adversarial robustness on long-tailed distributions. 
The primary contributions include a two-step adversarial training paradigm, which involves (i) adversarially training a teacher model on a balanced sub-dataset obtained from the original unbalanced dataset, and (ii) adversarially training a student model on the unbalanced original dataset using knowledge distillation from the teacher model and an additional balanced softmax loss. Another notable contribution is a theoretical analysis demonstrating how traditional adversarial training methods fail to improve the robust accuracy of tail classes even when compared to non-adversarially trained methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper does not sufficiently explain why the proposed method outperforms existing AT methods focused on long-tailed distributions. Particularly, why should Knowledge Distillation be a good choice for handling adversarial robustness on long-tailed datasets?\n\n- The incorporation of knowledge distillation (KD) and balanced softmax loss appears somewhat arbitrary. E.g., KD has already been explored for training on long-tailed datasets [1,2,3]; balanced softmax has been used in AT for long-tailed data [4,5]; KD for AT is explored in [6]. However, I agree that KD is not used for all AT and long-tailed datasets simultaneously. Hence, I would like to get a better justification of the proposed method for clarity.\n\n- The absence of a dedicated subsection that develops a theoretical or intuitive motivation for the proposed method diminishes the overall impact and comprehensibility of the paper. 
Providing a more thorough theoretical framework or intuitive explanations would significantly strengthen the paper’s contributions and clarity.\n\n- The proof of Lemma 1 in Appendix A needs to be a bit more rigorous and show how having equal weights minimizes the variance of $\\sum_{k=1}^n w_k x_k + b$ using Cauchy-Schwarz, and hence minimizes the natural error.\n\n- There is some inconsistency in the naming of the balanced dataset throughout the paper; specifically, on page 6, line 317 onwards, the notation shifts from D_b to D_B.\n\nOverall, the paper presents strong experimental results and provides sound theoretical analysis to motivate the original problem statement—adversarial training for long-tailed distributions. Therefore, I provide a score of 6 (Weak Accept). If the authors address the concerns outlined in the Weakness section, I would be willing to raise my score.\n\n[1] T. Li, L. Wang, and G. Wu, “Self supervision to distillation for long-tailed visual recognition,” in IEEE International Conference on Computer Vision, 2021, pp. 630–639.\n\n[2] Y.-Y. He, J. Wu, and X.-S. Wei, “Distilling virtual examples for long-tailed recognition,” in IEEE International Conference on Computer Vision, 2021, pp. 235–244.\n\n[3] H. Rangwani, P. Mondal, M. Mishra, A. R. Asokan, and R. V. Babu, “Deit-lt: Distillation strikes back for vision transformer training on long-tailed datasets,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 23396–23406.\n\n[4] Tong Wu, Ziwei Liu, Qingqiu Huang, Yu Wang, and Dahua Lin. Adversarial robustness under long-tailed distribution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8659–8668, June 2021.\n\n[5] Xinli Yue, Ningping Mou, Qian Wang, and Lingchen Zhao. Revisiting adversarial training under long-tailed distributions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 
24492–24501, June 2024.\n\n[6] Hongsin Lee, Seungju Cho, and Changick Kim. Indirect gradient matching for adversarial robust distillation. arXiv preprint arXiv:2312.03286, 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Mention which dataset is used for Figure 3. \n\nA comparison with balanced datasets to determine where adversarial training improves the performance of adversarial tests could be included in Figure 3. \n\nIn Figure 3, at IR = 1, explain why both natural and adversarial training produce the same test results on the adversarial test data. \n\nExplain what loss function is used for $L_{KD}$ in Algorithm 1." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The two-phase training methodology uses the \"self-distillation\" approach, where phase one is trained on balanced datasets, and this trained model is used for knowledge distillation in the second phase to make models robust on long-tailed datasets. The performance of this approach is shown to be better; that is, the reported results surpass the mentioned SOTA results on ResNet-18." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper's main contribution is the two-phase adversarial training of deep neural networks (ResNet-18) on CIFAR 10/100 and TynyImageNet datasets. Phase one is training on a Balanced dataset (e.g., 30 epochs), and Phase two is the Knowledge distillation on the long-tailed imbalanced dataset to improve the robustness of the model on a long-tailed imbalanced dataset. \n\nFor this contribution, the paper presents a theoretical analysis of Theorem 1, which says robustness training will further impact (worsen) the accuracy of models on long-tailed datasets. Most of the iterative analysis is in the part of the Appendix rather than the main part of the paper. The proof of Corollary indicates that it is trivial according to equations 4 and 5, which appears to indicate that the readers should figure it out themselves. \n\nThe main results show improvement compared to some SOTA algorithms. The author's results are shown to be the best in all cases. A typical training set was used for these results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Explain why the proof of the theorem is not generic for any iterative learning processes. Also, a step-by-step explanation of how the weight values affect robustness accuracy in the proof is required. \n\nInclude a visual or mathematical explanation connecting Figure 2 and Theorem 1 to clarify their relationship. \n\nThe definition of variables $\\Phi$ and $r$ in Equations 4 and 5 are required. \n\nFor reproducibility, the authors should indicate the availability of codes in the experimentation section." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024longtailed,\ntitle={Long-tailed Adversarial Training with Self-Distillation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vM94dZiqx4},\nnote={under review}\n}" }, "abstract": { "value": "Adversarial training significantly enhances adversarial robustness, yet superior performance is predominantly achieved on balanced datasets.\n Addressing adversarial robustness in the context of unbalanced or long-tailed distributions is considerably more challenging, mainly due to the scarcity of tail data instances. \n Previous research on adversarial robustness within long-tailed distributions has primarily focused on combining traditional long-tailed natural training with existing adversarial robustness methods.\n In this study, we provide an in-depth analysis for the challenge that adversarial training struggles to achieve high performance on tail classes in long-tailed distributions.\n Furthermore, we propose a simple yet effective solution to advance adversarial robustness on long-tailed distributions through a novel self-distillation technique.\n Specifically, this approach leverages a balanced self-teacher model, which is trained using a balanced dataset sampled from the original long-tailed dataset.\nOur extensive experiments demonstrate state-of-the-art performance in both clean and robust accuracy for long-tailed adversarial robustness, with significant improvements in tail class performance on various datasets.\nWe improve the accuracy against PGD attacks for tail classes by 20.3, 7.1, and 3.8 percentage points on CIFAR-10, CIFAR-100, and Tiny-ImageNet, respectively, while achieving the highest robust accuracy." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Adversarial Robustness", "Adversarial Training", "Long-Tail Distribution Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/f3bb647d3019245793e2be2b6859717a2ea46864.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/9def3400181d10da10412d49e94bff87d8d4c1e8.zip" }, "title": { "value": "Long-tailed Adversarial Training with Self-Distillation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vMA0ATykNU
LSTR: Long-Short Range Aggregation for Trajectory Prediction at Intersection Scenarios
main
Active
motion prediction;autonomous driving;path_planning
applications to robotics, autonomy, planning
3;3;6;6
5;4;4;3
2;2;3;3
1;1;3;3
2;2;3;3
4.5
4
2.5
2
2.5
-0.707107
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1、\tLBDM is a critical component of the LSTR model, it seems that it may not significantly deviate from existing prediction methods. Could the authors articulate the distinct innovative elements of the LBDM that set it apart from existing trajectory prediction methods?\n\n2、\tThe paper mentions an experimental condition where intersection map features were disabled in the Backbone of both the LSTR model and comparative models for fairness. Why were map features disabled in the experiments, and how does this affect the model's reliance on environmental information?\n\n3、\tWhat adaptations could be made to the LSTR model to improve its performance in scenarios with a high presence of Vulnerable Road Users (VRUs)? How might the model be adapted to improve its predictive accuracy in such environments, and what additional data or model components might be necessary to achieve this?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper introduces a novel approach to trajectory prediction at complex intersections and roundabouts, which is inherently an area ripe for innovation. 
The Long-short Range Aggregation for Trajectory Prediction in Intersections (LSTR) model distinguishes itself by focusing on the critical elements of trajectory entry and exit points within the Macroscopic Traffic Flow Generation module. This design choice, coupled with the sequential refinement of local path planning followed by global exit point-based trajectory tuning, presents a unique methodology that deviates from mainstream approaches. The model's architecture and the strategic integration of local and global intentions exhibit a creative synthesis of existing concepts, thereby fulfilling the broad definition of originality in research." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The Long-short Range Aggregation for Trajectory Prediction in Intersection (LSTR) model adeptly handles the macroscopic organization and microscopic disorder of urban intersection traffic flows. It enhances short-range motion pattern prediction through the Coherent Occupancy Prediction (COP) head, facilitating the parallel forecasting of future trajectories and effectively capturing local dynamics. The model also includes a Global Intention Inference Module (GIIM) for destination prediction and the integration of global intentions with local decisions. Emphasizing its superiority over map-prior algorithms through integrating local interaction modeling with global intention prediction, LSTR demonstrates significant performance improvements on datasets, with a notable 4.5 improvement in b-minFDE6 on inD and a 4.2 improvement in minADE6 on rounD over the next best method. These results underscore LSTR's enhanced accuracy in trajectory prediction for complex intersection scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) The approach of utilizing COP in conjunction with self-attention within the LBDM for trajectory forecasting may not be considered a significant innovation. 
The method's reliance on historical data to directly predict future states does not present a groundbreaking advancement in the field of trajectory prediction. Subsequent iterations of the model could incorporate more complex predictive algorithms or integrate additional data sources to enhance its innovativeness and predictive accuracy.\n\n2) The paper would benefit from a detailed disclosure of the model's parameter count, which is crucial for assessing the computational efficiency and practicality of the LSTR model. A comparative analysis of parameter volume with other models would provide a clearer understanding of how the model scales and performs relative to existing solutions in the domain.\n\n3) Table 3 appears misplaced within the Conclusion section, which may disrupt the logical flow of the paper. A more appropriate section for this table would be one dedicated to results or discussion to maintain the coherence of the manuscript. Furthermore, the appendices should maintain a consistent referencing style for tables and figures to uphold the scholarly standards of the paper. The use of specific table references, such as \"Table 1,\" should be adopted uniformly to improve the academic rigor and presentation quality of the work." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. See weaknesses.\n\n2. 
What are the parameter size and inference latency of LSTR compared to the baselines?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper highlights the two core challenges of trajectory prediction in intersection scenarios: local interaction and global intention.\n\n2. The proposed LSTR surpasses state-of-the-art methods in intersection scenarios. Ablations demonstrate that the major modules in LSTR: the Local Behavior Decision Module (LBDM), the Global Intention Inference Module (GIIM) and a final decoder are effective.\n\n3. The paper is easy to follow, with a clear problem definition, equations, and figures for enhancing the reader's understanding." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper highlights the two core challenges of intersection trajectory prediction: local interaction and global intention. Correspondingly, the paper proposes a framework, LSTR, which consists of the Local Behavior Decision Module (LBDM), the Global Intention Inference Module (GIIM), and a final decoder to overcome these challenges. The proposed LSTR surpasses state-of-the-art methods in intersection scenarios; ablations demonstrate the effectiveness of the proposed modules." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The motivation and contributions of this paper are unclear to me. The proposed LSTR framework integrates several existing techniques, such as encoders in MTR [1] and HiVT [2], and the offset predictor in QCNet [3], except for pattern re-featuring and map filtering in GIIM, which are described only briefly. Note that papers [1] and [2] aim to address challenges of local interaction and global intention in general scenarios. 
My major concerns are: (1) Why do intersection scenarios need a custom framework? (2) How do the differences between the proposed LSTR and existing methods contribute to addressing the aforementioned challenges, especially in intersection scenarios?\n\n2. The paper lacks implementation details of LSTR, such as hyperparameters like the hidden size, the number of layers in each module, and the radius used to collect information about agents and maps in scenarios.\n\n3. There are several errors in the paper: the caption for the legend in Fig. 1 is missing. There are typos, such as those on lines 71 and 282. There is also a question regarding the related works section: Does MmTransformer [4] belong to the MTR [1] series?\n\n[1] Shi et al. Motion Transformer with Global Intention Localization and Local Movement Refinement. NeurIPS 2022. \\\n[2] Zhou et al. HiVT: Hierarchical Vector Transformer for Multi-Agent Motion Prediction. CVPR 2022. \\\n[3] Zhou et al. Query-Centric Trajectory Prediction. CVPR 2023. \\\n[4] Liu et al. Multimodal Motion Prediction with Stacked Transformers. CVPR 2021." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Intro: The challenges and motivations in the introductory section are inadequately supported. 
While the authors claim that most trajectory prediction methods rely on high-definition maps, they fail to discuss numerous recent map-free approaches.\n\n2. The core challenges highlighted are somewhat ambiguous, lacking references to substantiate the claims. The authors seem to assert these points subjectively without justifying why prior work has not addressed these issues or why these challenges are critical to trajectory prediction. Moreover, terms like \"local interaction\", \"global interaction\", and \"route choice pattern\" lack standardized definitions in this field, leading to confusion and reducing readability.\n\n3. Terms such as \"short-range motion patterns\", \"long-term dynamics\", and \"anchors\" for global intentions lack clear definitions. While anchors are used in computer vision (e.g., DETR), the term is unusual in autonomous trajectory prediction, making it hard to interpret here. It appears the authors themselves may not fully understand these terms. Using complex terminology without clear explanations can hinder the paper’s readability.\n\n4. The paper’s contributions are not sufficiently novel, as evidenced by the limited number of 2024 references, with most citations being relatively outdated. Additionally, the related work section should include map-free methods to present a comprehensive overview of the field.\n\n5. There are several formatting errors, such as misplaced quotation marks (e.g., lines 41, 52, 80, and 89) and missing spaces after numbers (lines 85, 89, 92). These errors detract from the manuscript’s professionalism.\n\n6. In the Approach section, the authors do not specify the map information being used. Moreover, the authors initially criticize other methods for over-relying on map-based information, yet they use map information in their approach, leading to inconsistency in their argument.\n\n7. The proposed methodology seems like a case of \"fitting the solution to the problem,\" giving the impression of a model assembled without genuine innovation. 
The approach appears to be a combination of ideas from HiVT (CVPR ’22) and QCNet (CVPR ’23), with no substantial distinction from previous works.\n\n8. The authors should clarify whether the rounD dataset includes detailed map information. As far as I recall, no such detailed maps are provided, given that data collection was done via drones.\n\n9. The experimental section states, “We use the official Argoverse competition metrics,” which is confusing since the experiments were conducted on three different datasets (inD, rounD, WOMD). Why use metrics from an unrelated dataset? Are there no suitable metrics for the chosen datasets?\n\n10. Certain acronyms and technical terms lack definition, such as \"VRUs\" (Vulnerable Road Users, line 304) and formatting artefacts like the bolding of “b-minFDEk” (line 320). These should be properly introduced and formatted.\n\n11. The paper appears to be heavily reliant on LLMs for drafting. While such tools can enhance readability, phrases often seem vague or verbose, indicating a lack of manual refinement. This results in an awkward, overly complex writing style.\n\n12. The contributions claim that the model is capable of making interpretable predictions, but there is no substantial evidence or detailed explanation to support this claim, which is a significant shortcoming.\n\n13. Given my familiarity with HiVT, QCNet, and HOME, transferring these architectures to datasets like inD and rounD seems challenging. I am sceptical of the validity of the results and suggest conducting experiments on benchmark datasets like Argoverse or NuScenes to further substantiate the model’s performance.\n\n14. The baseline models used in the comparison are relatively old. More recent baselines should be included to provide a robust evaluation of the model’s performance.\n\n15.
The paper lacks ablation studies and a Discussion section on the model's limitations and potential future work, both of which are essential to understanding the model’s robustness and areas for improvement.\n\n16. The qualitative analysis figures (e.g., Fig. 3) are unclear, making it difficult to distinguish ground truth from model predictions. Additionally, visualizing model performance using WOMD’s official API could provide clearer insights into performance in complex scenarios.\n\nOverall, the manuscript does not meet publication standards. Significant revisions are required to address the issues highlighted above, particularly regarding conceptual clarity, experimental thoroughness, and the novelty of contributions." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "By integrating local behavior decision-making with global intention inference, LSTR effectively captures both short-range vehicle dynamics and long-range traffic patterns, improving prediction accuracy in chaotic intersection environments. Extensive experiments on datasets like inD, rounD, and WOMD confirm LSTR’s superior performance in accurately predicting diverse motion patterns in real-world intersection scenarios." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces LSTR, a model that enhances vehicle trajectory prediction in complex urban intersections. It combines local interaction modelling with long-range intention prediction to handle chaotic intersection traffic. It adapts predictions to individual vehicle dynamics and overarching traffic flow using modules for local behaviour decisions and global intention inference." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Please refer to the Questions section." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* There are some minor typos in the paper. Please correct it e.g., \"flows Zhao et al. (2023).Microscopic traffic flow\" with extra '.' and also should add extra space after it. \n* What is the criteria for selecting subset of WOMD?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* Overall the paper is well-written and easy to follow. \n* The proposed method study an interesting while relatively less explored problem for how to model the global/local intents under no or sparsely annotated lane information. \n* The author benchmarked the performance on multiple datasets including the inD,rounD, and WOMD." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces LSTR, a trajectory prediction model designed for complex urban intersections. . LSTR combines global intention inference with local interaction modeling, capturing diverse motion patterns." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The main benchmark on WOMD only compared with the MTRv3. 
More comparisons with other SOTA methods in WOMD are needed to justify the technical improvement, especially given that MTRv3's result is based on the authors' reimplementation. More open-sourced methods can be used for a fairer comparison. \n* In the motivation, the author claimed that existing methods would fail in intersections without detailed annotations. However, in the experiment section, it seems all the selected intersections/datasets have detailed map annotations, as shown in Figure 3, and the proposed method also consumes a lane graph, as shown in Figure 2. Key questions, like whether the proposed method would outperform other SOTA methods under scenarios with no detailed map annotations, are still not fully answered." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a novel Long-short Range aggregation for trajectory prediction at intersection scenarios" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024lstr,\ntitle={{LSTR}: Long-Short Range Aggregation for Trajectory Prediction at Intersection Scenarios},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vMA0ATykNU},\nnote={under review}\n}" }, "abstract": { "value": "Trajectory prediction is crucial for practical applications, encompassing navigation for autonomous vehicles and the implementation of safety systems based on the Internet of Vehicles (IoV). Most existing methods significantly rely on comprehensive map information, employing robust rule constraints to incrementally predict trajectories within the driver's local decision-making context.
However, in environments characterized by weak rule enforcement, such as urban intersections, these approaches neglect the disparity between the driver's overarching intentions and current behaviors. Recognizing the characteristics of intersection traffic flow—macroscopically organized yet microscopically disordered, exhibiting highly heterogeneous conditions—this paper presents a novel model termed Long-short Range Aggregation for Trajectory Prediction in Intersections (LSTR). This model anchors the vehicle's local decision-making process to long-range intentions. Specifically, LSTR predicts the vehicle's destination via a global intention inference module and models its long-range driving intentions through clustering to extract macroscopic traffic flow patterns. This long-range intention subsequently informs the short-range local interaction behaviors captured by the local behavior decision module. Ultimately, the fused features from these two modules are analyzed using a multi-modal decoder to interpret the various motion patterns, resulting in the trajectory prediction outcomes. We rigorously validate the proposed framework across multiple intersection scenarios utilizing real-world datasets, including inD, rounD, and a subset of WOMD. Experimental results demonstrate that our model outperforms numerous benchmarks without relying on additional information such as HD maps of intersections." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "motion prediction", "autonomous driving", "path_planning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/5dd0a185407324b384522aebcd842007ee4704bb.pdf" }, "presentation": null, "primary_area": { "value": "applications to robotics, autonomy, planning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "LSTR: Long-Short Range Aggregation for Trajectory Prediction at Intersection Scenarios" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vMIVqlEWRw
Robin: a Suite of Multi-Scale Vision-Language Models and the CHIRP Evaluation Benchmark
main
Active
Vision-Language Models;Benchmarks;Scalling Suites
datasets and benchmarks
3;3;5
4;3;2
2;1;3
2;1;3
2;2;3
3.666667
3
2
2
2.333333
-0.866025
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See Weaknesses.\n\nIn order to improve the score, I would like to see a convincing argument (probably supported by experimental results) indicating that there are no major issues with the training of the VLLM scaling ladder." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The paper is rife with interesting remarks and observations on benchmarking and evaluation of VLLMs. In a time of explosion of VLLM-related research, these observation could have a particularly significant impact on the research community.\n* Benchmark-related aspects of the work appear sound.\n* The work is well-presented; the authors take the reader of a bit of journey through their thought process, introducing various hypotheses and explaining how they tested them.\n* The presented work paves the road towards better benchmarking of VLLMs" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the correlation between VLLM log-likelihood, popular benchmark performance and the human-perceived model quality. 
By studying the scaling ladder of VLLMs (changing LLM and vision encoder sizes) the authors show that some trends (e.g model quality improvement with larger vision encoders) are not clearly discernible using existing benchmarks, and propose a new benchmark called CHIRP along with a pair-wise model comparison questioner that is able to highlight these trends better." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Major**\n* My main concern about the paper lies with its experimental (model training) part. The graphs in Figure 1 are a little bit suspicious as they do not demonstrate a monotonic increase in likelihood as model size grows. This hints at some training instability or other issue (this is also my interpretation of some of the outliers in Figure 10 in the Appendix). Upon reviewing the supplement I see that the authors used the same hyper-parameters for all model (Section A2). This is likely suboptimal. Depending on how these hyper-parameters were chosen (i.e. for which VE + LLM pair size?), certain model quality trends may be masked or exacerbated.\n\nI would like to see the authors demonstrate that their observations are robust to this choice of hyper-parameters. This can take the form of performing a larger hyper-parameter sweep for some of the outlier models (e.g. ViT-B and ViT-g vision encoders and various LLMs) and showing that the log-likelihood curves still demonstrate the above non-monotonic trends.\n\n* Related to this, I was also surprised by the observation that there exists \"ideal\" VE-LLM size combinations (line 495), and suspect that this too could be a consequence of how model hyper-parameters were chosen.\n\n* At a high-level, while find the paper rich with valuable observation and analyses, I missed the bigger picture. What is the authors' recommendation for VLLM benchmarking given all that they've found? 
Should we be training the largest models if we aim to improve model quality (as perceived by human evaluators)? Can we rely on other existing VLMs when computing Elo rankings on the CHIRP benchmark? Is CHIRP the best benchmark we have right now for measuring model quality improvements? Etc.\n\n**Minor**\n* Figure 5 does not show ELO curve samples with low transparency as the caption suggests.\n* As I understood, the authors plan to release their trained VLLMs. In this case, it would be great to also see model cards detailing the data used for training the said models, as well as bias / toxicity analyses. While I do find this to be a major point, it's listed under \"Minor\" as I do not believe it should block the paper's publication but rather be a prerequisite for responsible model release." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. What is the main scientific finding that this manuscript wants to communicate to the community? What added value does the scale of the model and of the visual encoder bring?\n2. Why not perform a comparison with existing networks / on existing benchmarks?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. There is some work behind this manuscript.
I believe that the authors have contributed a non-trivial effort in this paper, since many parts of the proposed analysis require effort, such as the training of the VLMs and the design of the benchmark questions. \n\n2. The theme treated is important. It is challenging to find the perfect way to evaluate a VLM, and I believe that the focus on the effectiveness of different ways of prompting the model is interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes one major contribution: a dataset for the evaluation of VLMs, named CHIRP, focused on open-ended Q&A. The main goal of CHIRP is to solve the inconsistencies emerging in the analysis proposed between the long-form and short-form answers produced by a VLM in a Q&A task. More in detail, in the paper it is stated that VLMs have significant differences if 1) prompted to reply briefly to an input question requiring visual interpretation, or 2) left free to reason and provide an answer as long as they want. A long preliminary part of the paper focuses on the presentation of Robin, a collection of VLMs based on LLAVA that focuses on the ablation of the model sizes and the visual encoder. The paper's organization is as follows: first, Robin is presented and results are evaluated on existing benchmarks. Then, shortcomings of existing benchmarks are proposed. Finally, CHIRP is introduced and used for the evaluation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. My biggest problem with this manuscript is that I believe that it is not general enough in the observations that led to the analysis and to the introduction of the benchmark, which makes the entire analysis limited in scope. To clarify, the results are only on the proposed suite of VLMs and motivated only by the performance on the proposed suite of VLMs.
Although it is based on existing models, we have no guarantees that what is going on in the paper is an actual problem for VLMs or just a consequence of the design choices of the authors. Instead of introducing a suite of foundation models for highlighting a problem, and then evaluating the proposed dataset on the trained models, I think it would have been more scientifically solid to test how existing VLMs (such as LLAVA, Chameleon, etc.) would perform with LvSR and then see if the problem stands. Right now, the entire work seems designed for a problem that we do not know exists for widely used networks, and it could be constrained to the authors' setup.\n2. Similarly, how do existing models perform on CHIRP? We cannot judge a benchmark if adequate benchmarking of existing VLMs is not performed and does not reveal novel insights on the capabilities and limitations of existing models.\n3. The motivation behind the benchmark is not very clear to me. While the authors claim that there are shortcomings with the evaluation with short answers or single words, which I agree with, there is an extensive literature on LLM evaluators [1], partially cited in the paper, which takes into account other aspects rather than simply using exact string matching. Why is the CHIRP dataset different from just using LLM evaluators on the replies of the LLM for existing questions in specific benchmarks for some of the categories introduced, such as [2]? Additionally, why should this help in solving the problems with LvSR?\n4. The presentation of the manuscript is very convoluted. Presenting first the suite of language models, then the problems of existing benchmarks, and finally the benchmark, makes the reading quite challenging, and it is not clear what main point the authors want to make. Is it related to the problems in the ground truth in existing benchmarks? Is it related to LvSR?
Is it saying that the vision encoder does not have much impact on the learning of the network?\n\n[1] Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena, NeurIPS 2023 \n[2] VIVA: A Benchmark for Vision-Grounded Decision-Making with Human Values, EMNLP 2024" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Questions and detailed comments:\n\n* The paper reports per-dataset results which are quite far from SoTA. For example, compare Table 5 of this paper with the published LLaVa results (see Table 3 and Table 4). This suggests that the models in Figure 1 are not optimally trained. Hence it is quite difficult to trust the results from Figure 1.\n* The paper states that roughly 9% of GQA and 5% of TextVQA are incorrect and says that a 3% difference in accuracy is noise. But all models are likely to get these questions wrong, if the metric in consideration is a perfect match between the generation output and ground truth. Thus this only tells us the upper bound of accuracy a model can achieve. \n* Figure 2 shows that bigger vision encoders do not lead to better performance on benchmarks. Based on this result, the paper hypothesizes that there are differences between the different vision encoder sizes but the benchmarks are unable to capture them. But Figure 1 shows that this property is also seen during pretraining with the pretraining loss.
So based on Figure 1, one cannot attribute the observation that “bigger vision encoders do not lead to better performance” solely to benchmarks.\n* From Figure 4 left column, even though the automated evaluation is much lower than the human evaluation, they follow the same trend, that is, they are correlated. If anything, this plot suggests that automated evaluation is as good as human evaluation in assessing model trends. Similar things can be said while comparing LLM-based evaluation and automated evaluation in the second plot from the left.\n* The human grading plot is omitted in Figure 4, except for the first column. So one cannot assess how the human evaluations compare to automated evaluations when encoder size is varied.\n* Figure 4 right shows that VLMs predict close to 100% accuracy, while according to humans, the upper bound is more like 50%. This discrepancy indicates that the VLMs, at least within the context of the authors' methodology, are not suitable for use as an oracle for this particular task.\n* The authors present a new benchmark called CHIRP. But I was not able to find concrete recommendations on how they plan to allow researchers to evaluate new models using the benchmark since all the reported results are ELO scores and require pairwise comparisons.\nGiven the results from Figure 4 right, the reader also becomes unsure about the efficacy of the GPT-4V and LLaVa based results in this section.\n* I was not able to find a concrete explanation on why the CHIRP benchmark is better than other benchmarks. Comparing Fig 6 left Survey and Fig 4 left Human grading, both capture scaling properties. Similarly, Fig 6 second column and Fig 4 second column seem to also capture scaling properties with vision encoder size. Thus more justification is required for the CHIRP benchmark.\n* Table 2 shows that GPT-4V decision making is the most correlated with humans.
However, Table 3 shows that GPT-4V is the most different from humans while comparing model size agreement by method. How is this the case?\n* L500: Ultimately, both human and AI evaluations show that performance on CHIRP correlates with loss more than other evaluation tasks. The paper states this but unfortunately no evidence is provided.\n\nMinor comments\n\n* Are the plots in Figure 1 from the first stage or the second stage of LLaVa pretraining?\n* Please decompose Figure 3 into separate plots. It is super difficult to read." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The overall idea of careful comparisons of VLMs and introducing benchmarks that can capture VLM nuances that current models cannot is quite interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper is of two parts. The first part trains a suite of Visual Language models combining LLM (Pythia) with a vision encoder (CLIP) across different scales. The major claim is that there is a slight relationship between LLM size and model performance while there is no clear relationship between vision encoder size and model performance. The paper then looks at different nuisance factors in terms of model answering: long form vs short form, wrong formatting and multiple possibilities for ground truth.\n\nThe second part introduces a new VLM benchmark (CHIRP) based on human + LLM captions and DALLe-3 based images. They perform pairwise comparisons of different models based on human preferences, GPT-4v and LLaVa. They notice that only human evaluators correlate performance with VE size but not GPT-4v and LLaVa. The paper then studies properties such as agreement and contradictions between human raters and models."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The reviewer was not able to find enough justification based on the experiments that current downstream benchmarks cannot capture scaling behaviour. This was based on both the pretraining loss plots (Fig 1) and experiments retaining the same datasets but with different metrics (Fig 4). \nFinally, the last part of the paper feels more like a study evaluating multiple Robin models on the CHIRP benchmark. There are, however, no concrete recommendations on how a researcher can easily evaluate newly trained VLM models on these benchmarks and what nuances these benchmarks can capture as opposed to others.\nSee below for detailed feedback." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We trained a suite of models in order to analyze the robustness of current benchmarks over scale and created a benchmark to fill a major gap we identified." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024robin,\ntitle={Robin: a Suite of Multi-Scale Vision-Language Models and the {CHIRP} Evaluation Benchmark},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vMIVqlEWRw},\nnote={under review}\n}" }, "abstract": { "value": "The proliferation of Vision-Language Models (VLMs) in the past several years calls for rigorous and comprehensive evaluation methods and benchmarks. This work analyzes existing VLM evaluation techniques, including automated metrics, AI-based assessments, and human evaluations across diverse tasks. We first introduce Robin - a novel suite of VLMs that we built by combining Large Language Models (LLMs) and Vision Encoders (VEs) at multiple scales, and use Robin to identify shortcomings of current evaluation approaches across scales.
Next, to overcome the identified limitations, we introduce CHIRP - a new long form response benchmark we developed for more robust and complete VLM evaluation. We provide open access to the Robin training code, model suite, and CHIRP benchmark to promote reproducibility and advance VLM research." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Vision-Language Models", "Benchmarks", "Scalling Suites" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e162a10ada6635b57414107981affff2ff79fc3a.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Robin: a Suite of Multi-Scale Vision-Language Models and the CHIRP Evaluation Benchmark" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vNATZfmY6R
KiVA: Kid-inspired Visual Analogies for Testing Large Multimodal Models
main
Active
large multimodal models;analogical reasoning;cognition;developmental psychology
foundation or frontier models, including LLMs
5;5;8;8
3;4;4;4
2;3;3;4
2;2;4;4
3;2;4;4
6.5
3.75
3
3
3.25
0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Did the authors experiment with other visual analogy domains? If so, what were the results?\n\n2. Could LLMs perform better on these tasks if the image contents were translated into text descriptions? Would textual encoding support LMMs in achieving higher accuracy in detecting “how” changes occurred and extrapolating rules?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The dataset, inspired by developmental psychology, is unique in its simplicity, enabling assessments that even young children can complete. Its three-stage structure offers a clear breakdown of different analogical reasoning abilities in LMMs versus humans.\n \n2. Extensive experimentation demonstrates specific strengths and weaknesses of LMMs, providing critical insights. For example, while models can recognize \"what\" changed in an image, they struggle to quantify \"how\" it changed and to generalize this rule to new objects (e.g., recognizing transformations in rotation or reflection)." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "KiVA is a new benchmark for assessing visual analogical reasoning in large multimodal models (LMMs) by comparing their performance to human adults and children. Inspired by developmental psychology, KiVA includes 1,400 visual transformations of everyday objects and tests models on identifying changes, quantifying them, and applying inferred rules to new scenarios. Experiments with models like GPT-4V, LLaVA-1.5, and MANTIS reveal that while LMMs can recognize \"what\" has changed, they struggle with quantifying \"how\" it changed and generalizing these rules, especially for complex tasks like rotation and reflection, highlighting a significant gap between LMMs and human reasoning abilities." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The selection of visual analogy domains, while simple and fundamental, lacks sufficient justification regarding why these specific transformations were chosen over others. Intuitively, additional characteristics—such as edibility, danger, sharpness, and liveliness—are also essential features humans consider. For more complex natural scenes, it’s unclear whether the selected features are more significant than others. The authors can provide further rationale for choosing these five factors or discuss the broader context of feature selection in visual analogy.\n\n2. The discussion on how to improve LMM performance on these tasks is limited. It’s challenging to determine whether the low performance is due to limitations in analogical reasoning or to information loss during the initial perception stage. I’m curious whether translating the visual information into text would improve LMM performance on the task, as models might process textual representations more effectively. 
Such an experiment could help clarify whether the perceptual or the reasoning stage primarily limits LMM performance.\n\n3. **Typographical errors**: Line 39 is missing a period after \"reasoning\"; text in Figure 1 is obscured by images, particularly in the percentage labels; Figure 8 is not referenced in the corresponding paragraph." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- I am not sure that the descriptions of the instructions / prompts in the appendix are accurate. Specifically, the instructions in A2 (prompting models and human adults) read the same as the instructions in A4 (prompting models through reflection and self-critique).\n- line 168 \"three=year=old\" should be hyphenated with \"-\"\n- \"No Change\" was given as an option. Were no-change trials also included?\n- \"If they fail to specify the change, any extrapolation would likely be incorrect\". --> is this true? conditional data?\n- line 279 \"handpicked by Developmental Psychology experts...\" what does this mean? How could we evaluate the veracity of this statement?\n- did human participants practice the tasks (with feedback)? I could imagine this could particularly matter for children.\n- related work: A recent relevant paper on Bongard problems may also point towards possible ways to improve analogical reasoning: Depeweg, S., Rothkopf, C. A., & Jäkel, F. 
(2024). Solving Bongard Problems With a Visual Language and Pragmatic Constraints. Cognitive Science, 48(5), e13432. https://doi.org/10.1111/cogs.13432" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- Breaking down visual analogies into these reasoning steps helps to highlight exactly where humans and models fail\n- Presenting both adult and child data on the benchmark questions is valuable. The human studies appear well conducted.\n- Various additional common steps are evaluated to improve model performance, which interestingly do not seem to change model performance greatly.\n- Examination of model response consistency helps to unpack where model decisions go wrong." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Overall this is a strong paper. The contribution of the benchmark is interesting and well-designed, and both adult and child data is provided as baseline for comparison. The paper is well written." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The rotation task does not appear to assess 3D rotation, which is the main focus of studies of mental rotation from cognitive psychology. As far as I can tell, these rotation tasks could in principle be solved by rotating the image plane (e.g. pixels, monitor, or the participant's head). Since you have 3D objects, why not add real 3D rotations (where a \"hidden\" part of the object due to self-occlusion is revealed)? This task would further strengthen the challenge of the benchmark.\n2. Additional analysis of conditional performance would add further understanding to model performance. For example, line 262 states \"If they fail to identify the specific change, any attempt at extrapolation would likely be incorrect\". Is that true? 
I think the authors have data to assess this: is the accuracy on extrapolation different depending on whether you condition on specification correctness?\n3. Similarly to point (2), unless I have missed it, the authors do not appear to evaluate model performance on specification and extrapolation if the model is first given the answer to the previous step. For example: tell the model that the object increased in size, then instruct the model to apply the same transformation to a new image. Can the models at least do this? If not, this indicates an even more fundamental problem." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "Studies involved human adults and children. Adults were paid $12 an hour plus a small amount for correct responses on a test. Children ages 4-7 took a small test (10 multiple questions) and were rewarded with stickers." }, "flag_for_ethics_review": { "value": [ "Yes, Responsible research practice (e.g., human subjects, data release)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Were there any noticeable patterns in tasks that LMMs failed but humans succeeded, outside of category (e.g. any specific objects?) when the models are consistent? Likewise for humans?\n\nHow were outliers handled in your human studies, if at all?\n\nThe child participants span a wide age range that covers important developmental milestones relevant to this task. Despite the small number of subjects, do you see correlations with age, especially for your specific test categories?" 
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The writing is clear and well-structured. Figures also clearly demonstrate the test tasks and their results. The authors introduce a novel and well-motivated benchmark for studying LMM capabilities. The test is grounded by using real-world objects, and draws inspiration from developmental psychology. The three stages introduced by the authors help clarify where LMMs have shortcomings. The experimental design is rigorous, and validated with human studies of both children and adults. The analysis is detailed and includes both error and consistency patterns. The results reveal important limitations in LMM visual reasoning capabilities that have been previously underexplored." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors describe KiVA, a benchmark for evaluating visual analogical reasoning in LMMs, using image pairs of common household objects before and after several different transformations. They test several LMM models as well as children and adults on three different tasks: what changed, how it changed, and applying the rule to a new object. Five transformations are used: color changes, size changes, rotation, reflection, and number changes. They find that each progressive task is more difficult and less consistent for LMMs, and strategies like prompt engineering do not help." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The discussion could be expanded with discussion of why models tend to fail at certain transformations, outside of investigating their consistency. While the paper mentions objects were \"handpicked by developmental psychology experts\", it doesn't detail the selection criteria or validation process. 
There's no reported validation that the transformations are equally discriminable across categories. For instance, an example image shows a die face with five dots - almost completely symmetric under 90-degree rotations, one of the allowed transformations. Would the dataset include that difficult a question, or nearly-as-difficult ones with minor visual changes across transformations? Lacking space in the 10-page limit, several experiments are described extraordinarily briefly along with their results, with real methods left to the appendix. One such description - “verbal questions facilitate” - lacks useful data, only reporting p values without actual quantities of the results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. I still have questions about how, in the 3-stage evaluation procedure, the third question can be answered correctly even when the first question is answered incorrectly. The authors did not explain how this could occur or what it could mean." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.KiVA introduces realistic, grounded visual tasks inspired by developmental psychology, aimed at evaluating analogical reasoning in ways similar to early human cognitive development.\n2.The authors broke down their evaluation and proposed a 3-stage evaluation procedure to examine the different steps involved in analogical reasoning to determine which steps a model can perform and where it may fail.\n3.Results from KiVA demonstrate that state-of-the-art large multimodal models, i.e., GPT-4V OpenAI (2023a), LLaVA-1.5 (Liu et al., 2024) and MANTIS (Jiang et al., 2024), cannot solve visual analogies like humans can. The authors discovered that While LMMs can detect some object transformations, they cannot make extrapolations about those transformations to new objects." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work explores visual analogical reasoning in large multimodal models (LMMs) and compares their performance to that of human adults and children. Inspired by developmental psychology, the authors developed a benchmark, KiVA, with 1,400 visual transformations of everyday objects to test and assess models' capabilities in visual analogical reasoning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.The KiVA test only include changes like orientation, changes in numbers, changes in size of the objects, and reflection while neglecting other basic transforms common in daily life like stretching(which can also be solved by young children).\n2.LMMs today still rely heavily on the language model backbone. In other words their ability to assess the images heavily rely on what information the image encoder can provide to the backbone. 
Some encoders tend to lose information like size or rotation. Blaming the poor performance solely on the models' analogical reasoning ability might not be fair." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We present a benchmark that closes a critical gap in current benchmarks for foundational models - visual analogical reasoning, which even young children can do but models perform poorly in." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024kiva,\ntitle={Ki{VA}: Kid-inspired Visual Analogies for Testing Large Multimodal Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vNATZfmY6R},\nnote={under review}\n}" }, "abstract": { "value": "This paper investigates visual analogical reasoning in large multimodal models (LMMs) compared to human adults and children. A “visual analogy” is an abstract rule inferred from one image and applied to another. \nWhile benchmarks exist for testing visual reasoning in LMMs, they require advanced skills and omit basic visual analogies that even young children can make. Inspired by developmental psychology, we propose a new benchmark of 1,400 visual transformations of everyday objects to test LMMs on visual analogical reasoning and compare them to children and adults. We structure the evaluation into three stages: identifying what changed (e.g., color, number, etc.), how it changed (e.g., added one object), and applying the rule to new scenarios. Our findings show that while models like GPT-4V, LLaVA-1.5, and MANTIS identify the “what” effectively, they struggle with quantifying the “how” and extrapolating this rule to new objects. In contrast, children and adults exhibit much stronger analogical reasoning at all three stages. 
Additionally, the strongest tested model, GPT-4V, performs better in tasks involving simple surface-level visual attributes like color and size, correlating with quicker human adult response times. Conversely, more complex tasks such as number, rotation, and reflection, which necessitate extensive cognitive processing and understanding of extrinsic spatial properties in the physical world, present more significant challenges. Altogether, these findings highlight the limitations of training models on data that primarily consists of 2D images and text." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "large multimodal models", "analogical reasoning", "cognition", "developmental psychology" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/0dee10c823b60cdf91847660448a55bfe246d94e.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "KiVA: Kid-inspired Visual Analogies for Testing Large Multimodal Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vNGv3dJATp
Towards Understanding Memory buffer based Continual Learning
main
Active
continual learning;memory;catastrophic forgetting;generalization
transfer learning, meta learning, and lifelong learning
3;3;3;6
4;4;2;3
2;2;1;3
2;2;1;3
1;2;1;2
3.75
3.25
2
2
1.5
-0.174078
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. How is a “task” being mathematically defined? Why was this specific definition chosen?\n2. How is “task similarity” being defined? Why was this specific definition chosen?\n3. It would be helpful to see experimental results similar to those with a deep neural network from Lin et al. to enhance the theoretical findings in the present paper.\n4. It would be helpful to have plain English explanations of the “Remarks” throughout the paper to better understand the findings.\n5. It would be helpful to have more context about the study from Lin et al. so that the reader does not have to read excerpts from Lin et al.’s paper to understand the present paper.\n6. In the Introduction, it is mentioned that the challenges for the present study consist of “the coupling of the data matrix and label vector”. What exactly does this mean?\n7. What exactly was the setup used to produce the plots in Figure 1? Were Gaussians sampled multiple times to create different “tasks” and then the results averaged for the plots?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "**Originality:** Theoretically analyzing the error bounds for forgetting and generalization in continual learning (overparameterized linear) models that use a memory buffer is novel. To the best of my knowledge, there do not currently exist any studies that examine this problem. The most related work is the ICML 2023 paper by Lin et al.\n\n**Significance:** It is useful to study these error bounds for continual learning models that use memory buffers since this has become a predominant strategy for mitigating catastrophic forgetting in experimental works in the field. Theoretical bounds pave the way for a better understanding of experimental findings in the field. Although the error bounds are only derived for overparameterized linear models, future work could explore these bounds for other models like deep neural networks or non-linear models more broadly.\n\n**Quality & Clarity:** While providing theoretical error bounds for the forgetting and generalization of memory-based continual learning methods is useful, the paper is lacking clarity and is difficult to follow. For example, there are several design choices that do not have clear justifications and the mathematical notation is inconsistent, which is expanded upon in the “Weaknesses” section below." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper seeks to theoretically analyze the impact of memory on continual learning in overparameterized linear models. To do this, the paper starts from error terms for forgetting and generalization defined in the ICML 2023 paper by Lin et al. for continual learning without a memory buffer. 
The present paper then extends these error terms for continual learning methods that use one of two memory buffers: 1) a buffer created using reservoir sampling (a partial rehearsal buffer) or 2) a buffer that stores all previous samples (a full rehearsal buffer).\nThe paper claims that these error terms lead to three conclusions:\n1) “A larger memory buffer must be paired with a larger model to reduce forgetting effectively”;\n2) “A sufficiently large model can lead to zero forgetting”;\n3) “A larger memory buffer may improve generalization when tasks are highly similar but can degrade generalization when tasks are highly dissimilar.”" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My biggest issue with the paper is the lack of clarity, which makes it difficult to evaluate the correctness and impact of the presented results. I will expand on several points related to this next.\n\n- It is unclear what the meaning of a “task” is in the paper. The data vectors are said to be sampled from Normal(0,1), but then what is changing from one task to another? This is important for understanding the results. Moreover, the notion of “task similarity” is mentioned several times, including in the concluding findings from the theoretical analysis. However, “task similarity” is never mathematically defined. Are we assuming some bound on the difference between two matrices as their “similarity”? If so, this is not mentioned anywhere in the paper and remains unclear.\n\n- The paper could benefit from more consistent mathematical notation. For example, in Section 3.1, W_T is defined as the set of linear ground truth vectors of all T tasks, but then the vectors of W_T are used to produce the output y=X^T w_t. However, y appears to be a prediction, not a ground truth, so this is unclear. Moreover, in Section 3.2.1, it is stated that “the memory buffer stores m_{t-1} samples” and also that we have “\\hat{m}_{t-1}” tasks. 
The inconsistency of variables makes the math extremely difficult to follow. These are just a few examples.\n\n- Another benefit could come from plain English explanations of theoretical results. The Remarks in the paper are useful for understanding the theoretical results, however, they are written in terms of math and it is unclear what their implications are for the study. These plain English explanations could help the reader better understand the lemmas, theorems, and remarks in the context of the broader continual learning field.\n\n- There are several justifications for design choices missing. For example, there is an assumption that the number of samples for each task is equal (Assumption 1). Is this just to simplify notation? How do we know these results will hold for the more general case when each task does not contain the same number of samples? In Section 3.2.1, it is stated that “When t \\geq 2, the memory buffer stores m_{t-1} samples for each of the previous t-1 tasks and 1 sample for each of the \\hat{m}_{t-1} distinct tasks from the previous t-1 tasks.” Does this mean the buffer only stores 1 sample per task? If so, this is a very strong assumption and it is unclear why this was chosen.\n\nIn the study by Lin et al., there were empirical results with deep neural networks that demonstrated similar findings to the theoretical analyses performed for the overparameterized linear model. This was powerful to demonstrate that their theoretical results might hold for more complex models (like non-linear neural networks). It would be helpful to see a similar type of experimental analysis in the present paper to strengthen the theoretical findings." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Besides the weakness above, I have some further questions:\n\n1. In the replay buffer, why not put samples for the same old task together? This may be convenient for analysis, but seems not standard.\n\n2. How did you handle the correlation between the replay samples and the model? For example, data stored in the replay buffer for task $t-1$ have already been seen during the training of $w_{t-1}$. When using this data to update the model at task $t$, the model $w$ should be corrected with $\\hat{D}_t$. Address this challenge is important to analyze replay-based CL, but I didn't see how this was particularly addressed in the paper.\n\n3. In section 4.1, the claims about the impact of memory size on forgetting are made only based on $F_2^2$, which seems not very rigorous. The reason is that, when changing the memory, from Theorem 1, most time the coefficient of $F_2^1$ will also change. And unlink the second that depends on task similarity, this first term cannot be very small." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. 
The theory for replay-based CL is very limited, and this paper provides the first explicit forms of forgetting and generalization error for two replay strategies.\n\n2. Analyses are provided to understand the impact of memory." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper provides a theoretical investigation of replay-based CL where each task is an overparameterized linear regression task, under two different scenarios, i.e., reservoir sampling and a full replay where all previous data are stored. By deriving explicit forms of expected forgetting and generalization errors, the authors analyze the impact of memory size and model size on forgetting and generalization error, under the coupling with task similarity." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. No experimental studies are provided to justify the theoretical results. In particular, it is not clear if similar phenomena can be observed in practice with neural networks and real datasets, which challenges the usefulness and importance of the theoretical results in this paper. And it is not clear how the theoretical results can help in practice.\n\n2. It is not convincing that analyzing the full replay case is important here, as this replay strategy is barely used in practice.\n\n3. The presentation needs to be improved. For example, it is not clear what the distinct tasks are in line 172. Figure 1 also needs to be more clear. The last sentence in Remark 6 also seems questionable, as these two figures are talking about generalization error instead of forgetting." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "See weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The motivation of proposing a theoretical framework to give a deeper understanding of memory-based methods is appreciated\n- Some limitations have been rightfully addressed" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a theoretical analysis of memory-based over-parameterized linear continual learners. More specifically, the authors explicitly express the forgetting and generalization errosr when training a linear model with a MSE and memory mechanism. The authors derive various relation between memory size, dimension of the input and overall forgetting and generalization. Based on this analysis, the authors make various claim regarding the choice of model and memory size to minimize forgetting and maximum generalization." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "# Weaknesses\n- The introduction is very hard to read and introduces many undefined variables such as $T, s, M_{max}, d$. 
The introduced challenges l.48 are unclear; therefore, the context introduction of this theoretical analysis is extremely confusing. I would advise the authors to give a more intuitive understanding of such challenges (i) and (ii). The bullet point contributions at the end of the introduction are appreciated.\n- In 3.1, I do not understand the definition of the ground truth vectors $w_i$. Ground truth vectors should be $y_t$ but here they are vectors $w_i$ multiplied to the input to give the ground truth. I imagine that what the authors actually mean is that $w_i$ is the linear learned classifier? However, in that case $y_t$ is the prediction, not the ground truth.\n- The definition of reservoir sampling is confusing. Traditionally, the probability of selecting a sample to be put in memory is $\\frac{|M|}{k}$ with $|M|$ the memory size and $k$ the stream index. Therefore when the authors write l166 \"the probability of any example from the previous $t − 1$ tasks being in the buffer is equal\", it does not correspond to reservoir sampling. The probability of being selected decreases over time.\n- In 3.1 the diag operator is not defined\n- According to 3.1, $X_t^T w_t^*$ is a scalar, however, in equation (1) it is multiplied by $M_{M_{max}}^l$ and the output is a scalar. How is that? Same remark for eq (2)\n- Table 1 is never used in the text\n- in 3.3, I believe the assumption corresponds to the features. Therefore, each element of $\\hat{X}_t$ follows an isotropic Gaussian of mean 0 and variance 1. In that sense, each element of future tasks follows the exact same distribution as previous tasks, but the label is different. This seems unrealistic as in Continual Learning the distribution very likely changes over time from one task to the other.\n- I believe the authors consider a classification problem, however the training loss in (3) is most certainly more suited for a regression problem. 
The authors should also discuss the limitation of studying only one specific loss function.\n- In Theorem (1) the authors assume that $d > s + M_{max} + 2$. But the value of $M_{max}$ can be very large.\n- $d$ is sometimes the dimension of the input (section 3.1), sometimes the number of parameters (Table 1). Even if a larger dimension implies more parameters, it would be more interesting to study the impact of over-parameterization directly, which this paper does not seem to do.\n- How important is the overparameterized assumption here? Could the same analysis apply otherwise? I believe this analysis makes a lot of sense when fine-tuning linear layers of pre-trained models; however, this parallel is lacking in the current writing of the paper.\n- An experimental validation of the proposed analysis seems necessary to me given the large number of assumptions made throughout the entire paper.\n\n# Typos\n- l.149 lacks the dimension of $x_t$\n- In Assumption 1, it should be $t \\in [1, T]$" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. In Section 3.2, why does the memory buffer not store the labels, with the labels generated instead? Meanwhile, it is not clear why, for the reservoir sampling-based memory buffer, the labels are generated by Eq. (1).\n2. The paper should have a notation table to sum up all the notations.
It is hard to follow the paper since many notations are similar. \n3. It is not clear how the paper arrives at the following conclusion, as stated in the abstract:\n> ... (1) a larger memory buffer must be paired with a larger model to effectively reduce forgetting;" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Analyzing the effect of memory buffers in continual learning is an interesting and important problem. \n2. The theoretical analysis of how the memory buffer size affects forgetting and generalization is clear. Meanwhile, analyzing simple cases like $T=2$ helps readers easily understand the relations among them. \n3. The paper is well structured." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the effect of the memory buffer for linear models in continual learning. It considers two memory buffer settings, and theoretically derives the forgetting and generalization errors under linear models. Based on the theoretical results, the paper argues that a memory buffer can reduce forgetting and improve generalization under certain conditions, while it may have the opposite effect under specific model sizes, task similarities, and other factors." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper is not self-contained. The authors should introduce important previous works like [1], for instance, in the appendix or the main paper. \n2. For the full-rehearsal memory buffer setting, the optimization problem is the same as joint (multi-task) training. This is because in the last task $T$, the model is trained on all data and the problem is convex. The paper should discuss the relation between the results in Section 4.2 and joint training.\n3. 
The paper should define \"similarity\" between tasks formally, which I believe is related to $||w_j^*-w_i^*||_2$. However, in **Remark 1**, it is unclear why the following statement holds true:\n> ... This suggests that with a smaller memory buffer or a larger number of parameters, the model forgets less than without memory when all tasks are highly similar.\n4. The connection between theories and memory selection is unclear. The paper only discusses one memory selection method: reservoir sampling-based memory buffer, which limits the generalization of findings to other settings.\n\n### Reference \n[1] Theory on forgetting and generalization of continual learning. ICML 2023" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024towards,\ntitle={Towards Understanding Memory buffer based Continual Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vNGv3dJATp},\nnote={under review}\n}" }, "abstract": { "value": "Continual learning (CL) is a paradigm that adapts to and retains knowledge from a stream of tasks. Despite the growing number of experimental methods in CL, there is a lack of rigorous theoretical analysis, particularly in memory-based continual learning (MCL), which remains an open research area. In this paper, we theoretically analyze the impact of memory in CL and derive explicit expressions for expected forgetting and generalization errors under overparameterized linear models. We propose a detailed matrix decomposition of the data to distinguish between current and previous datasets, effectively decoupling the coupled data for different tasks. 
Additionally, we conduct a comprehensive mathematical analysis for scenarios with a small number of tasks and employ numerical analysis for larger task scenarios to evaluate the overall properties of expected forgetting and generalization errors. Compared with CL, our theoretical analysis suggests that (1) a larger memory buffer must be paired with a larger model to effectively reduce forgetting; (2) training with a larger memory buffer generalizes better when tasks are similar but may perform worse when tasks are dissimilar, while training with a large model can help mitigate this negative effect. Ultimately, our findings shed new light on how memory can assist CL in mitigating catastrophic forgetting and improving generalization." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "continual learning", "memory", "catastrophic forgetting", "generalization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/416341b2be0b00b016ded8dcd3945fe906d2aa32.pdf" }, "presentation": null, "primary_area": { "value": "transfer learning, meta learning, and lifelong learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Towards Understanding Memory buffer based Continual Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vNQLKY7nFM
Learn2Mix: Training Neural Networks Using Adaptive Data Integration
main
Active
adaptive training;deep learning;optimization
optimization
3;3;6
3;4;3
2;1;3
2;1;3
3;3;3
4
3.333333
2
2
3
-0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Could the evaluation be strengthened by testing more complex class imbalance patterns on CIFAR-100, rather than simply assigning 0.1% to the first 50 classes and 1.9% to the remaining ones?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-structured and highly readable. Its motivation stems from a key observation: different classes have varying learning difficulties, an aspect that traditional class imbalance algorithms have largely overlooked.\n3. The paper proposes a novel approach where class proportions are dynamically adjusted based on training loss. The idea is both innovative and intuitive.\n3. The experimental validation is comprehensive and convincing, encompassing six datasets including CIFAR-100 with its 100 classes, and spanning three different tasks: classification, regression, and image reconstruction.\n4. While I haven't thoroughly examined the convergence analysis proofs, the paper provides theoretical guarantees for its approach." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the limitation of classical training paradigms that maintain fixed class proportions within batches, which fails to account for varying levels of difficulty across different classes and can hinder optimal convergence rates. The authors propose Learn2Mix, a training strategy that dynamically adjusts the proportion of classes within batches based on real-time class-wise error rates, directing more training emphasis towards challenging or underperforming classes, thereby accelerating model convergence and improving performance across classification, regression, and reconstruction tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The class imbalance setup for CIFAR-100 appears oversimplified, where the first 50 classes each comprise 0.1% of the data while the latter 50 classes each comprise 1.9%. A more rigorous evaluation should explore diverse imbalance patterns, such as exponentially decreasing ratios, step-wise distributions, or even real-world imbalance scenarios, to better validate the algorithm's robustness under complex class distributions." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How does learn2mix, which modifies the composition of the training data, compare with other regimes such as [3], which modify the loss function? This method seems slightly more expensive because it involves the extra step of dynamically composing the training data.\n\n[3] Sagawa et al, Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The method is presented in a clear manner and ensures each example within the class-specific dataset will be chosen uniformly at random through training, even with the reweighting. \n- The experiment details are presented clearly between the main text and the appendix, and each experiment seems replicable. \n- learn2mix outperforms classical training across various classification and regression tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors present learn2mix, a training method which dynamically adapts the proportion of classes during training using class-wise error rates. They provide theoretical justifications for the performance of learn2mix and also empirically demonstrate accelerated convergence on balanced and imbalanced datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The empirical results for both classical and learn2mix are concerningly underwhelming. 
For example, the performance of both methods is cut off at only <40% accuracy on CIFAR-10, even though most image classification models can easily achieve 90% accuracy, with modern models reporting >99% test accuracy. Even when considering the LeNet-5 architecture that the authors benchmarked, the original LeNet-5 and MNIST paper [1] achieves 99% accuracy in 20 epochs, while this paper reports <80% accuracy in 60 epochs. This significant gap in performance makes it unclear if the faster convergence of learn2mix holds in practice with more careful training setups and realistic models.\n- The method was only compared against classical empirical risk minimization. Classical training is known to be ineffective for class imbalance, so it would be helpful to see how this method compares to other methods that adjust the training data, such as oversampling. \n- The theoretical results require that the class-wise loss is strongly convex in $\\theta$; however, the loss landscape for neural networks is known to be highly non-convex [2], which challenges the relevance of their findings.\n\n[1] LeCun et al, Gradient Based Learning Applied to Document Recognition\n\n[2] Li et al, Visualizing the Loss Landscape of Neural Nets" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No ethics concerns" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. 
How does the proposed method perform compared with stronger baselines for class imbalance, such as using the Focal loss?\n\n2. How should a user choose the mixing rate? Is the proposed method sensitive to the hyper-parameter?\n\n3. Since the proposed method doesn't use a fixed class weight, I wonder whether this approach is more robust to distributional shifts in the class imbalance levels." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is well-written and easy to follow. The topic is related to class imbalance and neural network training, which matches ICLR well. The proposed approach is simple and effective.\n\n2. The paper provides both theoretical and empirical justification." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces \"learn2mix\", a training strategy for neural networks that dynamically adjusts the weight of each class during training according to the class-wise error, which is different from traditional methods that use a static class proportion. Using a dynamic class proportion accelerates the convergence rate and improves performance, especially when the classes are imbalanced." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. My biggest concern about this paper is the evaluation. The only baseline that the authors compare against is standard training, although curriculum learning has already been widely studied in the literature. The contribution would be much more convincing with stronger baselines, such as using Focal loss or any curriculum learning methods.\n\n2. The results shown in Figures 1 and 2 don't seem to be fully converged."
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "This work introduces learn2mix, a new training strategy that adaptively adjusts class proportions in batches to accelerate neural network convergence in resource-constrained environments." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024learnmix,\ntitle={Learn2Mix: Training Neural Networks Using Adaptive Data Integration},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vNQLKY7nFM},\nnote={under review}\n}" }, "abstract": { "value": "Accelerating model convergence in resource-constrained environments is essential for fast and efficient neural network training. This work presents learn2mix, a new training strategy that adaptively adjusts class proportions within batches, focusing on classes with higher error rates. Unlike classical training methods that use static class proportions, learn2mix continually adapts class proportions during training, leading to faster convergence. Empirical evaluations on benchmark datasets show that neural networks trained with learn2mix converge faster than those trained with classical approaches, achieving improved results for classification, regression, and reconstruction tasks under limited training resources and with imbalanced classes. Our empirical findings are supported by theoretical analysis." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "adaptive training", "deep learning", "optimization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/f0d7e85108e2057ce39e149af1a297af4d461976.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/01ed4e247d36f937fda7f97e7d44d2dd457a4086.zip" }, "title": { "value": "Learn2Mix: Training Neural Networks Using Adaptive Data Integration" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vNZIePda08
Sparse-to-Sparse Training of Diffusion Models
main
Active
Diffusion Models;Sparse-to-Sparse Training;Static Sparse Training;Dynamic Sparse Training
generative models
3;3;3;6
3;3;3;3
2;3;2;3
1;1;1;2
3;2;3;4
3.75
3
2.5
1.25
3
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Why are the authors specifically interested in these sparsity methods compared to other existing techniques in the literature that can *actually* reduce FLOP count and properly utilize hardware? The fixation on these specific sparse-to-sparse methods seems very poorly motivated, but I would welcome clarification on this." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper is well-written and is easy to follow.\n- The results presented improve over the dense baselines in the majority of datasets/models chosen for experiments.\n- Important explorations are included, such as studying the effect of different percentages of network sparsity and different numbers of denoising steps for inference.\n- Experiments are conducted on various models and datasets, improving confidence on the results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes the use of sparse-to-sparse pretraining for diffusion models. 
These techniques (specifically those known as _unstructured_ sparsity, where the vertices remain fixed but only edges/connections/weights between neurons are taken to be a subset of a dense network) have been shown in prior work to boost the performance of a wide variety of deep learning models while theoretically resulting in fewer FLOPs for both training and inference. This paper applies three different sparse-to-sparse pretraining methods to various diffusion models, showing a slight improvement in FID scores on various image datasets while reducing the number of FLOPs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The biggest weakness of this paper is that there is virtually nothing new happening. As the paper itself observes in its literature review, prior work has already shown that the sparsity methods explored achieve similar results in generative models, so the results are not surprising either. The contribution in this paper therefore feels very limited: it is showing that using this on diffusion models can result in a small quality boost and (theoretical / hardware-dependent) FLOP reduction. The techniques explored are all from prior work, with seemingly no additional technical challenges in applying them to diffusion models. Please correct me if I am wrong on this (and if so, this would definitely be an important discussion to include in the paper).\n\nIt should also be noted that other methods exist where the goal is also FLOP reduction without compromising quality. For example, masked autoencoders (MAE), and more recent work like MicroDiT applying the ideas from MAE to diffusion models, explore dropping out sequence elements entirely from transformer architectures, which can result in immense computational savings in practice _with current hardware_. 
The paper needs to better motivate why exploring these specific methods is important, given that the motivation and goals are the same as other methods that can better take advantage of, and live within, the constraints of modern hardware. In particular, sequence dropout has proven to virtually sacrifice no quality with very drastic dropout rates on image and video domains.\n\nWhile the improvement of FID scores is certainly a strength of the work given that connections are being pruned, this is insufficient to demonstrate the effectiveness of any method: qualitative comparisons are key, given that the connection between FID and sample quality is not a guarantee (especially when differences are very small). This is an easy fix; the authors can provide many more samples, side-by-side with the baseline models. It is even possible to obtain extremely similar samples, simply via deterministic training and sampling with the same random seed to study the actual results more carefully. \n\nFinally, while the quantity of experiments, datasets and models is appreciated by the reviewer, a less fatal but nonetheless notable weakness is that the datasets utilized are of very narrow domain and results may not transfer to larger settings, which are of key interest to the community. One potential way to improve this would be to show positive results on a traditional dataset that is much more diverse and challenging, such as ImageNet (as opposed to the much smaller Imagenette used in the paper). It is more typical for positive findings on challenging benchmarks like ImageNet to transfer to larger-scale tasks and models, while it is very common for results in small, narrow datasets like CIFAR10 and the datasets used in this work not to carry over to more interesting settings.
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Why choose these two DMs? AFAIK, ChiroDiff is not a well-known model. \n2. Have experiments been done on other DMs (backbones) to test the generalization? What affects the combination of the two may not be different datasets or different generative tasks, but different network backbone architecture (e.g., U-Net v.s. Bidirectional GRU encoder). More analysis into this?\n3. For Tables 1, 2, and 4, are there criteria or reasons for choosing these specific sparsity ratios $S$? It may be necessary to supplement the ablation study of sparsity ratios $S$ and pruning rate $p$ of the three methods." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The combination of Sparse Training and DMs is proven to be effective, which can be used in the future efficient training of regular DMs without affecting other components of training and inference.\n2. Experiments are conducted on many datasets together with extensive analysis, making the methods convincing.\n3. The writing logic is great from my point of view, making readers easy to follow.\n4. The content is rigorous, e.g., good to point out the hardware limitation for sparse matrix operation (Line 56)." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper integrated 2 Diffusion Models with 3 Sparse Training methods respectively, with experiments on many datasets to verify the combination of these two things is OK (reducing FLOPs while maintaining good performance, some even outperforming the dense models). This may be helpful for training time, memory, and computational savings of DMs in the future." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Majors:\n1. The biggest issue is that sparse training and DMs seem not to be coupled: there's no strong desire for me to think the combination of these two is fantastic or compatible naturally, and I also didn't see any apparent problems that would prevent the two from combining easily. It seems like this paper simply uses \"Sparse Training + DMs = Sparse DMs\", in which both Sparse Training and DMs are ready-made without innovation and without extra tricks in the combination process. As a result, although the paper has some contributions (of experiments and verification), it has NO core novelty.\n2. I don't think the methods take advantage of the unique characteristics of DMs itself. After all, the denoising phase of DMs parameterizes a neural network $p_{\\theta}$ to approximate the denoising process $q(x_{t-1} | x_t)$, so the DMs can be regarded as \"noising process + network backbone (for fitting denoising process)\". The paper uses Sparse Training in denoising backbones, however the backbones may have been verified of the combination with Sparse Training or pruning [1] [2].\n\nMinors:\n1. Refs (hyperlinks) can be changed to a different color or use a box, just like most other articles did. It's hard for me to follow the real contents with all the black letters.\n2. It seems that the page number of the first page can be incorrectly hyperlinked.\n3. 
More introduction should be made to Latent Diffusion and ChiroDiff.\n\n[1] Narang, S., Elsen, E., Diamos, G., & Sengupta, S. (2017). Exploring Sparsity in Recurrent Neural Networks. ArXiv. https://arxiv.org/abs/1704.05119\n\n[2] Rao, Kiran & Chatterjee, Subarna & Sharma, Sreedhar. (2022). Weight Pruning-UNet: Weight Pruning UNet with Depth-wise Separable Convolutions for Semantic Segmentation of Kidney Tumors. Journal of medical signals and sensors. 12. 108-113. 10.4103/jmss.jmss_108_21." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. Is there any explanation regarding the design of the sparsity rates for training diffusion models, which appear to be predefined without any intuitive understanding based on specific concepts related to the models? Is it possible to design an adaptive sparsity schedule?\n\n2. How many GPUs were used in the training process for different DMs? Providing more details about the training settings would greatly enhance the confidence in the proposed framework.\n\n3. For a given DM, how should the decision be made regarding the training strategy—whether to use static sparsity or dynamic sparsity?\n\n4. Can you provide some experimental results on the text-to-image task, which is one of the most important practical applications of diffusion models?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper investigates a challenging problem in the diffusion framework, as current state-of-the-art methods all require large model backbones to maintain significant generative performance. Therefore, using a lightweight model to achieve comparable modeling ability is meaningful for the generative community.\n\n2. The experimental results are compelling, as the proposed framework employs a model with small capacity parameters to achieve slightly better performance, highlighting its great potential for reducing sampling latency.\n\n3. The proposed two training strategies are effective for training the lightweight model backbone, as models optimized with these strategies can match or even surpass the performance of their dense counterparts.\n\n4. This paper is easy to follow and the concept idea for the main framework is clearly presented." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "To enhance the efficiency of both training and sampling of DMs, the paper employs a sparse-to-sparse training technique to develop a lightweight model backbone that can achieve performance comparable to its denser counterpart. Since previous methods primarily focus on the efficiency of sampling in DMs, this paper demonstrates significant advantages by optimizing the diffusion framework for both fast training and sampling speeds. To achieve this goal, this paper proposes two strategies—static and dynamic training—to optimize two state-of-the-art models. Experimental results demonstrate the effectiveness of the proposed sparse-to-sparse training method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
The proposed framework appears to be an incremental application with limited novelty. Furthermore, this paper seems to rely on established sparse-to-sparse strategies for optimization without any careful design.\n\n2. The proposed method lacks theoretical guarantees, which may result in performance variability.\n\n3. The ablation studies are lacking. The validity of the model would be better established with additional experimental results.\n\n4. It is suggested that the format of the references be made uniform, as there are discrepancies between different sections." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Is sparse training effective on larger datasets such as the full LSUN-Bedrooms dataset or ImageNet1k, which are larger than CelebA-HQ?\n- Is there a specific reason for using only the FID score as the evaluation metric? If not, it would be helpful to also include the Inception Score (IS).\n- Could you explain why performance is strong only for QuickDraw in Table 2, but not for KanjiVG and VMNIST? Is there a particular characteristic of the datasets that leads to this?\n- Have you tried using structured sparsity, which removes entire layers, to reduce inference time?\n- In Section 4.3, line 515, could you clarify whether GPU inference speed actually improves by 0.57x as mentioned? 
Could you provide papers or resources that demonstrate that reducing FLOPs leads to improved inference speed on hardware?\n- For Table 1, would it be possible to conduct experiments that reduce the standard deviation to below 3.0 through hyperparameter tuning on the Bedrooms and Imagenette datasets? The mean + standard deviation for sparse training (for example, 28.79 + 12.65 = 41.44 for Bedrooms Static-DM) is consistently higher than the mean for dense training (31.09 for Bedrooms Dense).\n- Do you have any insights on how to effectively tune hyperparameters such as network sparsity, exploration frequency, pruning rate, and sparse method, beyond random search or grid search?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The motivation for the paper is very clear: the high computational cost of training DMs, which drives the proposal of a prune-based DM training method.\n2. Sparse training is applied across a diverse range of datasets and models, showcasing its versatility.\n3. The experimental results are clearly presented, showing how network sparsity and pruning ratio affect performance, providing valuable insights into hyperparameter tuning." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a weight pruning-based sparse-to-sparse Diffusion Model (DM) training method using both **static** and **dynamic** sparse pruning techniques. Through experiments on Latent Diffusion and ChiroDiff models, the paper demonstrates that sparse training can achieve similar or improved performance compared to dense training methods while reducing the number of parameters and FLOPs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
Lack of experiments on full dataset training\n - This paper only uses a portion of the CelebA-HQ and LSUN-Bedrooms datasets for the experiments. However, I believe that the performance of sparse training may decrease on larger datasets due to the reduced expressive power of the model caused by pruning. To fully evaluate the effectiveness of sparse training methods, experiments on larger datasets such as the full ImageNet or the entire LSUN-Bedrooms dataset are needed. Although Appendix C presents results from training on the full CelebA-HQ dataset, with only 30,000 images in total, CelebA-HQ is not large enough to alleviate these concerns.\n2. Lack of evaluation metrics\n - The authors have presented the FID score as the evaluation metric, but relying solely on the FID score to evaluate a Diffusion Model (DM) seems risky. It would be better to additionally present metrics such as the Inception Score (IS) reported in the Latent Diffusion Model paper.\n3. Dependence on dataset, method, and hyperparameters\n - In Figure 2, only a few methods and sparse rates outperform dense training in CelebA-HQ and Imagenette. Due to the long search time for optimal settings, the reduction in training time mentioned by the authors seems insignificant.\n - Although ChiroDiff shows performance improvements with the QuickDraw dataset, it is hard to say that there are meaningful improvements in performance for KanjiVG and VMNIST. Sparse training lacks robustness across different datasets.\n4. Lack of analysis\n - In Section 4.1, line 376, it is mentioned that, unlike existing supervised learning and GAN models, the DM using the SST method outperforms the DST method. Additional analysis is needed to explain why this different trend is observed.\n - In Table 2, performance is good for QuickDraw but poor for KanjiVG and VMNIST. An analysis of the reasons behind this discrepancy would be useful.\n5. 
Lack of novelty\n - Without introducing new concepts or ideas, the paper applies the existing sparse-to-sparse training method from supervised learning to Diffusion Models. It would be better to propose a new method optimized for Diffusion Models.\n - The variance of FID scores in Table 1 is overall too large, and the reduction in FLOPs is not significant for Bedrooms and Imagenette.\n - The efficiency gained from improving inference speed via FLOPs reduction is dependent on hardware.\n - Overall, the time taken to search for methods and hyperparameters seems too long compared to the performance improvements. Proposing methods to reduce the search time would be helpful.\n - In Section 4.3, line 515, a speed-up of 0.57x is mentioned, but it is unclear whether GPU inference time is improved." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce sparse-to-sparse training to Diffusion Models, and obtain sparse DMs that are able to match and sometimes outperform the dense versions." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024sparsetosparse,\ntitle={Sparse-to-Sparse Training of Diffusion Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vNZIePda08},\nnote={under review}\n}" }, "abstract": { "value": "Diffusion models (DMs) are a powerful class of generative models that have achieved state-of-the-art results in various image synthesis tasks and have shown potential in other domains, such as natural language processing and temporal data modeling. Despite their stable training dynamics and ability to produce diverse high-quality samples, DMs are notorious for requiring significant computational resources, both in the training and inference stages. Previous work has focused mostly on increasing the efficiency of model inference. 
This paper introduces, for the first time, the paradigm of sparse-to-sparse training to DMs, with the aim of improving both training and inference efficiency. We train sparse DMs from scratch (Latent Diffusion and ChiroDiff) using three different methods (Static-DM, RigL-DM, and MagRan-DM) to study the effect of sparsity on model performance. Our experiments show that sparse DMs are able to match and sometimes outperform their Dense counterparts, while substantially reducing the number of trainable parameters and FLOPs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Diffusion Models", "Sparse-to-Sparse Training", "Static Sparse Training", "Dynamic Sparse Training" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/15de4c4ba3347a65b2731867b26416ab57d452b8.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Sparse-to-Sparse Training of Diffusion Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vNdOHr7mn5
Deep Weight Factorization: Sparse Learning Through the Lens of Artificial Symmetries
main
Active
Sparsity;Regularization;Neural Networks;Overparametrization
unsupervised, self-supervised, semi-supervised, and supervised representation learning
1;6;6;8;8
4;4;4;3;4
3;3;3;3;3
1;3;3;3;3
2;3;3;4;3
5.8
3.8
3
2.6
3
-0.429478
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "1. We agree that studying DWF in very large models would be an interesting avenue worth exploring, but our work instead focuses on the analysis of the sparsification and training dynamics on a range of different architectures. Moreover, the scope of our experimental evaluation is in line with many other works on sparse learning (e.g., Lee et al., 2019; Frankle et al., 2021; Lu et al., 2022; Dhahri et al., 2024).\n2. We argue that random pruning constitutes a meaningful baseline, providing a measure of the intrinsic prunability of a specific network architecture conditional on a learning task (Liu et al., 2022). Regarding iterative pruning methods, these approaches have been found to be impractical and computationally exceedingly expensive and were thus excluded from our comparison. For example, Glandorf et al. (2023) state that iterative magnitude pruning for, e.g., a VGG-19 on CIFAR10/100 requires 860 epochs in their experiments, whereas DWF training is feasible without repeatedly re-training the model.\n\n**References**:\n\nLee, N., T. Ajanthan, and P. Torr. \"SNIP: single-shot network pruning based on connection sensitivity.\" International Conference on Learning Representations. 2019.\n\nTanaka, Hidenori, et al. \"Pruning neural networks without any data by iteratively conserving synaptic flow.\" Advances in neural information processing systems 33 (2020): 6377-6389.\n\nFrankle, Jonathan, et al. \"Pruning Neural Networks at Initialization: Why Are We Missing the Mark?.\" International Conference on Learning Representations. 2021.\n\nLu, Miao, et al. \"Learning pruning-friendly networks via frank-wolfe: One-shot, any-sparsity, and no retraining.\" International Conference on Learning Representations. 2022.\n\nLiu, Shiwei, et al. 
\"The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training.\" International Conference on Learning Representations. 2022.\n\nGlandorf, Patrick, Timo Kaiser, and Bodo Rosenhahn. \"HyperSparse Neural Networks: Shifting Exploration to Exploitation through Adaptive Regularization.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\n\nDhahri, Rayen, et al. \"Shaving Weights with Occam's Razor: Bayesian Sparsification for Neural Networks using the Marginal Likelihood.\" Sixth Symposium on Advances in Approximate Bayesian Inference-Non Archival Track. 2024." }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": { "value": "Response to minor weaknesses" }, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "Dear reviewer dKGU,\n\nWe would like to thank you for your comments and for bringing references [1] and [2] to our attention. We appreciate the opportunity to clarify the differences between our work and both references. 
We would like to mention that the ICLR reviewer guidelines explicitly state that in the case of unpublished papers, including arXiv papers (such as [2]), the authors may be excused for not discussing these references. As detailed below, we politely disagree with the review’s conclusion and maintain that both references differ significantly from our work in important aspects. However, as related studies, we will include them in the further related literature section. A response to the minor weaknesses is included in a separate comment.\n\nDifferences to [1]:\n\n- Analysis of the representation cost (squared $L_2$ norm of the weights) of a **linear predictor** under **specific linear neural network parametrizations** is a related but significantly different topic of study. Our approach, instead, aims to induce differentiable sparsity regularization in arbitrary non-linear networks.\n- Representation cost analysis is based on the equivalence of global minima. Our results, however, extend to the equivalence of **all (local) minima**, which is an important optimization property ensuring no spurious solutions are created.\n- [1] conducts a **purely theoretical analysis** for a restricted problem class of overparametrized linear models. In contrast, our work focuses not only on the theoretical equivalence of minima under Deep Weight Factorization (DWF) of arbitrary learning models but, importantly, also **investigates how to successfully train factorized networks** by providing a novel initialization scheme and a detailed analysis of the training dynamics. The relevant result in [1] states that the representation cost of a linear predictor using a diagonal linear network (essentially a factorized linear model) is equal to the $L_{2/D}$ quasi-norm of the predictor and is obtained as a special case of our Theorem 1 applied to a linear model. Apart from that result, our work is fundamentally different from [1] in both scope and analysis. 
In particular, [1] does not discuss how to use the representation cost analysis for differentiable sparse learning.\n- Our focus on details such as initialization and learning rates in addition to our more general theoretical results ensures that they translate into a practical method that is readily usable, in contrast to the purely theoretical focus of [1]. \n\nDifferences to [2]:\n- The pWD method proposed in [2] uses a different variational expression of non-convex $L_{2/D}$ regularizers based on the so-called Eta trick (Bach, 2012). While this approach is closely related to the variational expression we use in our work (cf. Poon and Peyré, 2021), it causes divergent gradients of the auxiliary variables for vanishing weights and requires proximal methods for stable optimization.\n- [2] proposes optimization of their overparametrized problem using Alternating Convex Search, whereas our goal is a method that is amenable to straightforward stochastic gradient descent (SGD). In particular, our motivation is to provide a fully differentiable formulation of $L_{2/D}$ regularized objectives that facilitates seamless integration with other methods and usability by the research community **without deviating from standard SGD optimization**. \n- In contrast to [2], we analyze 1) the phased training dynamics, 2) the evolution of model weight norms, 3) layer-wise sparsity, and 4) the onset of sparsity, which indeed is one major focus of our submission.\nThe alternative variational expression of the $L_{2/D}$ regularizer in pWD and its proximal optimization induce different gradients and therefore learning dynamics, and thus constitute a distinct approach despite its similarities to DWF at first sight.\n\n**Summary**:\nWhile both [1] and [2] are indeed related to our work, we believe there are major differences and disagree with the claim that the main contributions of our work are already encapsulated in the two references. 
In particular, the scope and focus of [1] are only loosely connected to our study, differing in theoretical versus practical emphasis and the restriction of [1] to linear models and global minima. On the other hand, [2] employs proximal alternating optimization, whereas we explicitly focus on differentiability and use straightforward SGD. \n\nWe hope this clarifies the distinctions and highlights the novelty of our work. We would like to thank you again for pointing out these references. We will meticulously incorporate them into the related literature and highlight their importance while also pointing out differences.\n\nReferences:\n\nBach, Francis, et al. \"Optimization with sparsity-inducing penalties.\" Foundations and Trends® in Machine Learning (2012): 1-106.\n\nPoon, Clarice, and Gabriel Peyré. \"Smooth bilevel programming for sparse regularization.\" NeurIPS 34 (2021): 1543-1555." }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": { "value": "Response to major weakness" }, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, 
"details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "As stated in the weaknesses section, I would suggest the authors reframe the work as an exploratory study of the dynamics of sparsification in deep networks and learned representations, and submit to another venue, after conducting a more thorough literature survey." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper has some strengths:\n\n1) The topic of sparsification during training is of great interest with growing model sizes.\n2) The paper is well-written and presented.\n3) The equivalence between the different regularization schemes is clearly explained and shown.\n4) Section 4 is particularly interesting to me, as I am not aware of other studies which attempt to analyze the sparsification dynamics, or attempt to interpret unstructured sparsification algorithms.\n5) The ideas presented in the paper are very good, but unfortunately suffer from a lack of novelty as I discuss in the weaknesses section." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces deep weight factorization, an extension of shallow weight factorization for neural network sparsification. The key idea is to decompose network weights into D factors (D≥2), allowing the use of standard $L_2$ regularization to achieve a similar optimization problem to that of the regularized $L_p$ loss, while avoiding smoothness problems. 
The authors study the performance of this algorithm as well as analyze the dynamics and representations learned during sparsification." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weakness of this paper is that, unfortunately, the method proposed has already been implemented before with minimal difference, both in its current form based on Hadamard products [1] for linear networks and in a simpler completely equivalent form [2] incorporated directly into weight decay for any $p$-norm, including $2/d$. \n\nTherefore, the paper offers no real novelty besides the analysis of the onset of sparsity, and since that is not the main focus of the paper it does not justify acceptance. I would suggest the authors reframe the work as an exploratory study of the dynamics of sparsification in deep networks, and submit to another venue.\n\nMinor weaknesses:\n\n1) The datasets and tasks are not studied at large scale, which is the most interesting setting in which sparsification should be performed. In particular, focusing either on small models or overly simple data yields untrustworthy results when scaling up. Since this is not the first time this method is proposed, a potential avenue would be studying these types of $L_p$ methods in very large models, or for fine-tuning.\n2) Random pruning is not a useful baseline; basic magnitude pruning during training should at least be explored.\n\n**References:**\n\n[1] *Representation Costs of Linear Neural Networks: Analysis and Design*, Zhen Dai, Mina Karzand, Nathan Srebro, NeurIPS 2021 (https://openreview.net/forum?id=3oQyjABdbC8)\n\n[2] *Decoupled Weight Decay for Any p Norm*, N. Outmezguine, N. 
Levi (https://doi.org/10.48550/arXiv.2404.10824)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The equivalence of DWF to the $L_p$ regularization largely relies on the fact that balanced factorization (or zero factor misalignment) is optimal. Could you provide some intuitive explanation of why regularization with the balanced DWF would imply (or encourage) sparsity of the weights?\n\n2. Can the training of DWF achieve a global minimum, given that the weight factorization approach is non-convex?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The authors present a generalization of the shallow weight factorization to overcome the non-differentiability of $L_1$ regularization and prove its equivalence to the $L_\\frac{2}{D}$ regularization.\n2. Strategies such as the initialization scheme and large learning rates make DWF practical to apply.\n3. The experiments show that DWF outperforms the original weight factorization and several pruning methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper extends existing (shallow) weight factorization techniques to deep weight factorization (DWF), and demonstrates the equivalence of DWF with the $L_\\frac{2}{D}$ (where $D$ denotes the depth of the factorization) regularization, thus potentially achieving higher sparsity compared to its shallow counterpart. The authors also provide enhanced strategies for initialization and learning rates to ensure the performance of DWF. Additionally, they analyze the impact of DWF on training dynamics and optimization through empirical experiments, which show that DWF outperforms the shallow weight factorization and several existing pruning approaches." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The motivation of the paper is somewhat vague. Why would extending the shallow weight factorization in [1] to the deep one potentially result in a performance gain? One suggestion: building on the equivalence of shallow weight factorization with $L_1$ regularization, and taking into account that $L_p (p<1)$ regularization would potentially yield higher sparsity since it is closer to $L_0$ regularization than $L_1$ regularization, it is natural to conjecture that deep weight factorization is equivalent to the $L_\\frac{2}{D}$ (where $D$ denotes the depth of the factorization) regularization.\n\n2. 
In Figure 1, do the different compression ratios of vanilla $L_1$ regularization correspond to different values of $\\lambda$? To my knowledge, achieving the highest test accuracy at various compression ratios often requires selecting different $\\lambda$ values. If the value of $\\lambda$ is fixed across different compression ratios, it would be beneficial to search $\\lambda$ at different compression ratios.\n\n3. Considering DWF is a pruning-before-training method, it is not entirely fair to compare only with pruning-before-training methods, such as SNIP and SynFlow. It is suggested to add comparisons with some high-performance pruning-after-training methods, such as [2] and [3] (pruning with $L_1$ regularization).\n\n[1] Liu Ziyin and Zihao Wang. spred: Solving l1 penalty with sgd. In International Conference on Machine Learning, pp. 43407–43422. PMLR, 2023.\n\n[2] Renda, A., Frankle, J., and Carbin, M. Comparing rewinding and fine-tuning in neural network pruning. arXiv preprint arXiv:2003.02389, 2020.\n\n[3] Zhang, Q., Zhang, R., Sun, J., & Liu, Y. (2023). How Sparse Can We Prune A Deep Network: A Fundamental Limit Viewpoint. arXiv preprint arXiv:2306.05857." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The questions marked with * are most important.\n\n*Q1. 
I think the contributions of the work are sufficient for publication, but I would like to know the authors’ position on W1.\n\n*Q2. In Fig 7 (and related plots in App E), what is the step-size selection protocol? Is it tuned individually for each depth?\n\nQ3. For Sec 5.1, why is the magnitude pruning without fine-tuning protocol justified? Especially at high compression levels.\n\n*Q4. The paper mentions explicitly (L151) “in principle [...] the factorization can also be selectively applied to arbitrary subsets of the parameters w” in the context of unstructured sparsity (in implicit contrast with structured sparsity). Orthogonal to the structured vs unstructured discussion, what is the relationship between DWF and the type of layers involved? Do certain layers benefit or suffer disparately with different factorization depths?\n\nQ5A. I was pleasantly surprised to read the conclusions of the runtime analysis in Sec 5.2 and App I.2. Given (1) the claimed low computational overhead induced by DWF and (2) the change in optimization dynamics caused by DWF, what do the authors think could be the effects of this factorization idea on the training of non-regularized objectives?\n\n*Q5B. As I stated in W1, the paper focuses on relatively small-scale experiments. How do the authors think the experimental conclusions of their runtime analysis would transform at extremely high parameter counts (>1B)?\n\nQ5C. Along the same line of high parameter counts as in Q5B, do we need to “persistently hold” a (multi-factor) reparametrization during training? During the forward phase, it makes no difference to use the collapsed or not-collapsed parametrization, the only difference is for the gradient application. Would it be possible to hold an “ephemeral reparametrization”, sampled on demand during the parameter update? The idea would be to locally simulate the reparameterized GD dynamics, without requiring the persistent footprint of multiple model copies.\n\nQ6. 
I noticed the absence of optimization “momentum” (as in Nesterov/Polyak) in the experimental details (as presented in App G.2). I understand that this could be a consequence of the specific architectures and tasks considered in the submission. However, I would like the authors to expand on any potential consequences/challenges of using DWF with momentum-based optimizers.\n\n*Q7. The paper is initially concerned almost exclusively with the case of the Lasso regularizer (q=1). Eventually, the theoretical connection presented in Thm 1 is established with a fractional regularizer (q=2/D). Is this departure from the Lasso case “necessary”, or merely an artifact of the analysis/proof? In other words, if the problem I truly care about is the Lasso, can I still take advantage of the DWF-induced dynamics?\n\nMinor comments/suggestions (No response expected)\n* In Fig 7 (and more generally App E), the case “Depth=1” would be a useful baseline to understand how much of the improvement is afforded by DWF.\n* There are several instances in the paper where the authors mention that “Appendix X contains information on the relationship between Y and Z”, but do not highlight any core insight in the main paper. I believe the current appendices are quite valuable, and the submission could benefit from (very!) briefly summarizing said key discoveries in the main body.\n* Given the many appendices present, the authors may consider including a table of contents for the appendix to aid navigation." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "S1. The paper is well structured, and the presentation of the ideas is easy to follow.\n\nS2. 
While the idea of generalizing “weight factorization” to multiple factors is quite natural, the authors do a good job at motivating it, and providing theoretical and empirical insights on their proposed DWF technique.\n\nS3. The authors transparently explore the empirical uncertainties stemming from the change in parametrization. In particular, I appreciated the section on weight initialization: it identifies challenges with traditional initializations, discusses a potential explanation (based on the kurtosis of the distributions) and designs/prescribes a new initialization scheme that is more suitable for use in conjunction with DWF.\n\nS4. The problem of neural network sparsity is very relevant in the current large model context. The simple core principles of the DWF approach, along with the theoretical insights presented by the authors, make it an appealing technique for the deep learning community." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper is concerned with sparsity-inducing training of neural networks. The fundamental (pre-existing) technical tool is the transformation of a (non-differentiable) L1-penalized problem, into an “equivalent” (differentiable) L2-penalized problem via a Hadamard-product reparametrization of the weights. This paper introduces Deep Weight Factorization (DFW), an extension of the previous idea to reparametrizations including D \\ge 3 Hadamard factors, and shows an equivalence with non-convex sparse regularization. The authors present an analysis of the interplay between DFW and common optimization aspects such as the weight initialization and the choice of step size. Experimentally, the paper includes several comparisons against existing sparsity methods on vision benchmarks, favorably to DWF." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1. The paper exclusively focuses on vision-related tasks, of relatively small scale. 
\n * The submission does not provide results on ImageNet-scale vision tasks.\n * The submission does not include any experiments on language or image generation.\n\nW2. Theorem 1 presents an equivalence between the DWF and “original” non-convex penalized problem, claiming that they “have the same global and local minima”. However, the current discussion does not explore whether the relative quality of a local DWF minimum is or not comparable to the quality of its related minimum for the original problem. In other words, let $\\hat{\\omega}_1 \\odot \\ldots \\odot \\hat{\\omega}_D = \\hat{w}$ be a local minimum of the DWF problem. How good of a local minimum is $\\hat{w}$ for the original problem?\n\nW3. I found the visual presentation of experimental results –in particular, Fig 6, and Fig 7 bottom– too crowded, making it difficult for the reader to extract the relevant information." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- On lines 475-6, the authors claim that “For VGG-19 on CIFAR100 at 10% tolerance, DWF (D = 4) achieves a compression ratio of 1014, surpassing the runner-up (SynFlow at 218) by a factor of almost 5.” Perhaps I’m misreading the bottom-left plot in Figure 9, but don’t $D=2$ and D=3 both outperform $D=4$ at the 10% tolerance threshold (i.e. ~60% accuracy)? 
Furthermore, $D=2$ may be the best-performing method in this regime (though it is very close to $D=3$) – and this corresponds to shallow weight factorization, which is not an original contribution of this paper.\n- Why use LeNet architectures for the MNIST experiments? To my knowledge, these are essentially never used in practice. Is there a particular scientific reason for using a very simple architecture in these experiments?\n- The plots in Figures 6 and 7 are quite cluttered. Would the authors consider depicting curves for fewer values of $\\lambda$ in Figure 7? They could also be more selective with the information they present in Figure 6." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper develops a natural extension of existing shallow weight factorization methods to construct differentiable regularizers whose corresponding optima are equivalent to those obtained by non-convex and non-differentiable $L^{(2/D)}$ regularization. This is a valuable – and to my knowledge original – theoretical contribution to the literature.\n- The authors present extensive empirical results to study the impact of choices such as initializations and learning rates on their method’s performance. Doing this legwork to resolve critical pitfalls in the implementation of their method enables other researchers to use their work in practice, and substantially increases their method’s potential for impact.\n- The paper is generally well-written, and I was able to follow its exposition without too much trouble." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper extends earlier results on weight factorization and proposes a differentiable (though non-convex) regularized training objective for neural networks whose global and local optima coincide with those of a problem with the same training loss and with $\\frac{2}{D}$-regularization on the network weights. For $D \\geq 2$, the naive regularizer is sparsity-inducing but non-differentiable, and solutions to the latter problem fail to achieve competitive results at high compression ratios. The proposed solution is to factorize the network weights into a product of $D$ matrices and apply $L_2$ regularization to each factor independently; the authors call their factorization “deep weight factorization” (DWF). The paper’s key result, Theorem 1, conclusively supports the correctness of their method in theory – though there is no guarantee that one reaches equally desirable local minima in practice while solving each problem.\n\nThe authors then note that standard neural network weight initializations yield poor results for DWF and propose an alternative initialization that fixes the variance of the weights while avoiding weights that are too close or far from 0. They also study the impact of learning rate schedules on the trained networks’ sparsity-accuracy tradeoffs. They finally present empirical results to show that DWF outperforms shallow weight factorizations (albeit marginally for datasets such as CIFAR10/100 and Tiny Imagenet) and post-training pruning methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My primary critique of this paper is that DWF’s performance improvements over shallow weight factorization and Synflow (in terms of compression ratio achieved at acceptable cost to accuracy) seem marginal for CIFAR10/100 and Tiny Imagenet. 
DWF does improve over competing methods by large margins on toy problems such as MNIST and its variants – but the gap between its relative performance on MNIST and on somewhat more complex datasets such as CIFAR and Tiny Imagenet makes me wonder whether the claimed improvements would become even narrower once one leaves the realm of standard ML benchmarks and tackles real-world problems. In light of this and my first question below, the authors may consider tempering their claims regarding the empirical benefits of their method over shallow weight factorization and pruning." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "Not applicable." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Please refer to the Weaknesses section." 
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The presentation is very clear, with a nice balance of theoretical investigations coupled with empirical evaluations.\nI enjoyed reading this paper.\n\nI find Figure 2 to be especially clear and informative, offering a nice intuitive overview of the approach.\n\nI would also like to highlight the thoroughness of the appendix which provides appreciated details and references to related literature.\n\nOverall, factorizing the weights of a network to arbitrary depths to ensure the sparsity of the trained model upon collapsing the weights is an interesting idea that I find worth exploring." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Building upon the fact that regularizing neural networks with an L1 penalty is not adequate due to its non-smoothness and its limited compatibility with SGD, the authors investigate the application of the weight factorization framework to induce sparse weights upon training of deep learning models.\n\nThey extend the ideas of [Hoffn, 2017] and [Ziyin & Wang, 2023] to arbitrary factorization depths of neural network weights, and provide theoretically grounded equivalence of training deep factorized neural nets and sparse optimization problems (Theorem 1).\n\nThe authors then address the difficulties in optimizing their model with standard optimization practices and suggest more suited strategies. Furthermore, they analyze the training dynamics of these networks (namely the accuracy-sparsity interactions during training) and provide an empirical evaluation of the performances of their approach against other sparsification frameworks." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I will provide here some suggestions on the presentation as well as technical questions.\n\n* Figure 1 displays accuracy as function of compression ratio (CR), but we don't know what that refers to at that point (it is introduced later in section 2.1). I would suggest defining CR in the caption of the figure or at least point to its definition.\n\n* I am not sure to understand what Figure 3 aims to show. Are the curves' colors referring to the value of $c$ in Definition 1 ?\n\n* I do understand the inevitability of a sparsity-accuracy tradeoff, and acknowledge the link made by the authors regarding the use of large LRs and generalization performance. However, I still fail to wrap my head around why the LR has such a significant impact on the sparsity of the collapsed weights, given that the authors previously show the equivalence between DWF and sparse regularization. At convergence, shouldn't sparsity be present no matter what ? I would greatly appreciate if you could perhaps provide me some more intuition on this.\n\n* I acknowledge the memory and computational advantages in using sparser neural networks, but I was wondering how relevant these advantages would be if one could instead just directly train a dense but smaller (with less parameters) network to begin with. If a large but sparse network can achieve competitive performance with its dense counterpart, wouldn't it be fair to assume that a small but dense network could also be competitive ? If so, what would be the point in sparsifying the large network ?\n\n* This is more of a research question, but there seems to be intricate interactions between accuracy and sparsity in DWF models. Do you plan on investigating these from a theoretical standpoint ? Perhaps it could be interesting to study the drops in accuracy observed in Figure 1 after a certain compression ratio is reached ?" 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a deep weight factorization for sparse neural networks that enables smooth optimization of non-convex sparse regularization." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024deep,\ntitle={Deep Weight Factorization: Sparse Learning Through the Lens of Artificial Symmetries},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vNdOHr7mn5},\nnote={under review}\n}" }, "abstract": { "value": "Sparse regularization techniques are well-established in machine learning, yet their application in neural networks remains challenging due to the non-differentiability of penalties like the $L_1$ norm, which is incompatible with stochastic gradient descent. A promising alternative is shallow weight factorization, where weights are decomposed into two factors, allowing for smooth optimization of $L_1$-penalized neural networks by adding differentiable $L_2$ regularization to the factors. \nIn this work, we introduce deep weight factorization, extending previous shallow approaches to more than two factors. We theoretically establish equivalence of our deep factorization with non-convex sparse regularization and analyze its impact on training dynamics and optimization. Due to the limitations posed by standard training practices, we propose a tailored initialization scheme and identify important learning rate requirements necessary for training factorized networks.\nWe demonstrate the effectiveness of our deep weight factorization through experiments on various architectures and datasets, consistently outperforming its shallow counterpart and widely used pruning methods." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Sparsity", "Regularization", "Neural Networks", "Overparametrization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2d753d1d77d83ea71e7fdf236ba8c9926b16eabf.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/788731bb9eec0a9f9136743fb0e4561af29a37f1.zip" }, "title": { "value": "Deep Weight Factorization: Sparse Learning Through the Lens of Artificial Symmetries" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vOFx8HDcvF
Stochastic Bandits Robust to Adversarial Attacks
main
Active
Robust Algorithms;Multi-armed Bandits;Adversarial Attacks
learning theory
6;6;6
4;2;4
3;3;3
2;2;3
2;3;3
6
3.333333
3
2.333333
2.666667
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See 4 and 6 in the Weaknesses section." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- This paper addresses a gap in the literature, recognizing that adversarial attacks have not been thoroughly explored within the classical multi-armed bandit (MAB) framework and effectively filling this gap.\n- The authors examine both additive and multiplicative bounds, providing a clear comparison that shows which approach performs better based on the attack budget C.\n- Figures 1 and, especially, Figure 2 nicely illustrate the results of attack-based multiplicative and additive bounds, offering a well-structured presentation that I haven't seen in comparable works with this level of detail.\n- I also like seeing the clear separation between corruption and attack results/settings in one place.\n- The paper presents novel findings and situates them within the existing literature, demonstrating that the derived upper bounds are tight (known C case)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates the classical MAB problem in the adversarial attack setting. 
The authors provide several tight results covering the case when the attack budget is known/unknown, multiplicative and additive bounds, as well as lower bounds." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Algorithm Design: I didn’t notice any novel or original elements in terms of algorithm design. The PE algorithm has been applied in this context in prior work (cited below), and the idea of using CORRAL has already been explored in similar settings, such as in Misspecified Gaussian Process Bandit Optimization. However, I only find this to be a minor weakness of the paper. \n\n2. Terminology: I like the terminology of “attacks” to distinguish it from the classical “corrupted” setting. However, if the authors intend to introduce this terminology shift, they should properly credit the original paper that first explored this setting and provided robust algorithms: “Corruption-Tolerant Gaussian Process Bandit Optimization.” To my knowledge, this was the first work to present robust algorithms for scenarios in which the attacker can observe the learner's decisions.\n\n3. Literature Review: The literature review in this paper can be improved. The reference section is also too brief and lacks organization. For example, Bogunovic et al. (2020), as cited, do not address the linear setting; this is covered in other relevant papers that are not cited, such as Stochastic Linear Bandits Robust to Adversarial Attacks and A Robust Phased Elimination Algorithm for Corruption-Tolerant Gaussian Process Bandits.\n\n4. Lower Bound Claim: The paper claims that the lower bound result of Ω(KC) is new; however, this result is already established in Stochastic Linear Bandits Robust to Adversarial Attacks (see Appendix C.3). The proof and exposition provided here are quite similar to those in the mentioned paper.\n\n5. 
Venue Suitability: I’m not entirely sure this paper is a strong fit for ICLR, as I’m not aware of similar works published at this venue previously. This is a consideration for the authors, as they might find broader reach at an alternative venue.\n\n6. Clarity of Comparison (Lines 417-422): The comparison in this section is unclear, and I would appreciate a clearer exposition/steps, especially since the reference provided here is incorrect." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "None" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper advances the state of the art on algorithms robust to adversarial attacks. The paper is well-written and the relationship/improvement relative to previous work is well described." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the design of stochastic bandits algorithms robust to adversarial attacks. In particular, the paper considers an easier setting in which the learner is aware of the attacker budget, and a harder setting in which the learner is not aware of the attacker budget. These results are complemented by lower bounds. 
Finally, the authors provide an experimental analysis that shows the effectiveness of the proposed approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The technical contribution is quite weak. For instance, the algorithmic approaches follow previous work and the analysis is not very involved." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Can you please mention how the lower bounds change or are implied from the corruptions setting to the attack setting in case of unknown horizons?\n\n2. Can you please explain if the gap-dependent results in the unknown corruption case can be obtained for the algorithms under consideration? \n\n3. Can you please explain why the algorithms potentially perform worse in low corruption settings in the experiments?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper differentiates between attack and corruption models of manipulating multi-armed bandits. It provides insights into the difference between corruption and attacks in terms of the required corruption/attack budget and thus the increased difficulty in preventing attacks compared to corruption.\n\n2. 
For the successive elimination algorithm SE-WR with increased confidence, also used in Lykouris et al. (2018), the paper shows a tighter regret bound, via a better analysis of the concentration results, which leads to an $O(KC)$ term instead of a gap-dependent term in Lykouris et al. (2018).\n\n3. The authors also give a gap-independent bound for SE-WR and extend the SE-WR algorithm to work in the unknown attack budget setting. They also provide an analysis of the resulting algorithms. \n\n4. The paper also provides experimental evidence showing the effectiveness of their algorithms against the attack strategies developed by Jun et al. 2018 in comparison with multiple MAB defense strategies proposed in the literature." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies stochastic bandit algorithms which are robust to adversarial attacks under a strong adversary that can see the observed arm before attacking. \nThe paper considers settings with unknown budget cost or known budget cost $C$.\nIn the known budget case, they provide a gap-dependent $O((K/\\Delta) \\log T + KC)$ upper bound that matches the lower bound. They also give gap-independent extensions with upper bounds of $\\tilde{O}(\\sqrt{KTC})$ or $\\tilde{O}(\\sqrt{KT} + KC)$.\n\nFor the unknown case, they show two stopping criteria-based algorithms, one with an additive dependence in $C$: $O(\\sqrt{KT} + KC^2)$. They show that an algorithm that gets $O(T^\\alpha)$ regret without corruptions must have at least $O(T^\\alpha + C^\\beta)$ regret for $\\beta \\geq \\frac{1}{\\alpha}$, thus this upper bound matches the lower bound in the exponents of $C$ (given that it has $\\sqrt{T}$ dependence without $C$). 
Similarly, they give algorithms with multiplicative dependence on $C$ for the regret, that is, $\\tilde{O}(\\sqrt{KC}T^{\\frac{2}{3}})$ or $\\tilde{O}(KC\\sqrt{T})$.\n\nThe paper also provides experimental evidence showing the effectiveness of their algorithms against the attack strategies developed by Jun et al. 2018, comparing them with other corruption-robust MAB algorithms studied in the literature." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The results or the discussion do not clarify whether gap-dependent results can be obtained for the unknown horizon setting. \n\n\n2. Since this paper focuses on making the distinction between attacks and corruption, it seems the main difference is in the inability to use randomization to reduce the scale of the attack, resulting in the need for deterministic algorithms where potentially any arm suffers from all of the corruptions. Thus, in the known corruption level case, the results for the setting are directly implied by earlier work. (Although this work does a tighter analysis in terms of gaps.) A thorough analysis of the lower bounds comparing the settings, rather than just comparing the dependence on $K$, could benefit the reader. \n\n3. Experimental results don't have confidence bars, and in the case of no corruptions with known budgets, the STOP algorithms perform worse than other methods. Some discussion on the performance in the absence of corruption is warranted.\n\nNit:\n1. In general, the writing of the paper is very focused on presenting as many results as possible and is very dense in terms of results. The paper could have been formatted better with more discussion around interpreting the results rather than having so many results in the main paper.\n\n 2. There are some typos and inconsistencies in the theorem statements and proofs. 
E.g., in the proof of Lemma 14:\n i) the constants are changed from 36 to 64 in $N_k$.\n ii) Line 798: it should be 'triggered' instead of 'trigger'.\n iii) Similarly, lines 804 to 807 on page 15 in the same proof have $\\delta$ with subscripts." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A comprehensive study of bandit algorithms robust to adversarial attacks." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024stochastic,\ntitle={Stochastic Bandits Robust to Adversarial Attacks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vOFx8HDcvF},\nnote={under review}\n}" }, "abstract": { "value": "This paper investigates stochastic multi-armed bandit algorithms that are robust to adversarial attacks, where an attacker can first observe the learner's action and *then* alter their reward observation.\nWe study two cases of this model, with or without the knowledge of an attack budget $C$, defined as an upper bound of the summation of the difference between the actual and altered rewards. 
For both cases, we devise two types of algorithms with regret bounds having additive or multiplicative $C$ dependence terms.\nFor the known attack budget case, we prove our algorithms achieve the regret bound of ${O}((K/\\Delta)\\log T + KC)$ and $\\tilde{O}(\\sqrt{KTC})$ for the additive and multiplicative $C$ terms, respectively, where $K$ is the number of arms, $T$ is the time horizon, $\\Delta$ is the gap between the expected rewards of the optimal arm and the second-best arm, and $\\tilde{O}$ hides the logarithmic factors.\nFor the unknown case, we prove our algorithms achieve the regret bound of $\\tilde{O}(\\sqrt{KT} + KC^2)$ and $\\tilde{O}(KC\\sqrt{T})$ for the additive and multiplicative $C$ terms, respectively.\nIn addition to these upper bound results, we provide several lower bounds showing the tightness of our bounds and the optimality of our algorithms.\nThese results delineate an intrinsic separation between the bandits with attacks and corruption models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Robust Algorithms", "Multi-armed Bandits", "Adversarial Attacks" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/30184ad9513780ea70c8e2149a9fbad9b28e59c2.pdf" }, "presentation": null, "primary_area": { "value": "learning theory" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Stochastic Bandits Robust to Adversarial Attacks" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vOSwtXGSA2
An Adaptive Defense Against Adversarial Patch Attacks For Vision Transformers
main
Active
vision transformer; adversarial patch attack; adaptive defense
alignment, fairness, safety, privacy, and societal considerations
3;3;5;5
3;4;3;4
3;2;2;3
2;2;2;2
3;2;3;3
4
3.5
2.5
2
2.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see the \"Weakness\" part for my questions." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The framework of NeighborViT is presented in detail.\n2. The idea of distinguishing different types of adversarial patch attacks and adopting corresponding defense methods is pretty interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents NeighborViT, a novel adaptive defense framework designed to counter adversarial patch attacks for ViTs. NeighborViT stands out by detecting and categorizing different types of attacks on inputs and applying adaptive, tailored defense mechanisms for each type of attack. Experimental results demonstrate that NeighborViT significantly enhances robust accuracy without compromising clean accuracy." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The attacks in the experiments include only one patch (e.g., Figure 5), while the method adopts corresponding defense methods for various attacks. This makes me quite confused. Therefore, I don't think the results can demonstrate the idea of adopting corresponding defense methods.
For instance, experiments on adversarial examples with more adversarial patches should be performed to validate the performance of the proposed and baseline methods.\n2. Since the attack detector and area detector are extra modules, the additional time cost (e.g., seconds per epoch), computational cost (e.g., FLOPs), and GPU memory usage of the framework should be carefully illustrated. In practice, training cost plays an important role in real applications. Without extra experiments, it is unclear whether the improvement in adversarial robustness justifies the training overhead. For example, the authors can perform a detailed ablation study of each module and other baselines in their default settings.\n3. The framework is an input-processing defense method, which aims at filtering out the adversarial perturbations for better adversarial robustness. In the manuscript, the authors compare their method with some representative robust ViT frameworks. However, some classical input-processing defense methods are not considered in this paper, leading to thin arguments. For example, classical input-processing defense methods like smoothing, quantization, JPEG compression, and the recent work \"Diffusion Models for Adversarial Purification\" should be compared in their setting to figure out the effectiveness of their method.\n4. The writing of the paper should be improved, as it is unclear to me. For example, the illustration of \"catastrophic\" and \"non-catastrophic\" attacks is confusing. The distinction is first introduced in the \"Background & Related Works\" section without any explanation or reference. Only a simple introduction in L151-153 says \"The catastrophic attacks represent the attacks occurred in the essential areas and non-catastrophic attacks represent the attacks located in the non-essential parts\", with almost no information beyond the two new words \"essential\" and \"non-essential\" parts.
In subsection 3.2, the authors demonstrate that the \"essential\" parts contain essential features for model classification while the \"non-essential\" parts do not. However, there seems to be no detailed explanation of essential features, except for a plain description in L291-292. On the contrary, excessive text focuses on how to get essential features via their area detector, leading to a reversed order. Papers can reveal their novelty through clever wording, but should not rely on unintelligible sentences to cover confused ideas." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. It seems that using the Sobel operator is more of an empirical design; have you ever tried different operators?\n2. I am curious about the performance of the proposed method when the attack patch has an $L_p$ constraint.\n3. Is it possible to scan the patches dynamically and use the most similar patch for replacement, rather than just neighbouring patches?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The method analysis is clear and understandable.\n2. The visualization is helpful.\n3. The proposed method is simple but works well and maintains high clean accuracy with minimal reduction."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a novel adaptive defense framework -- NeighborViT -- which detects and categorizes different types of attacks, applies tailored defense mechanisms for each attack type, and leverages information from neighboring patches for effective detection and defense." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Lack of an ablation study on different operators.\n2. Lack of experiments on different attack strengths.\n3. Although the results show the proposed method does not bring much extra computation cost, the detection and defense mechanisms can be further improved.\n4. Bar graphs can be further improved (color, layout)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Since the proposed ENED relies on the detection results of AD, if naturalistic adversarial patches and other non-noise forms of adversarial patches are introduced for attacks on ViT later on, would the entire method become ineffective? In other words, does the low pixel continuity assumption hold true for these patches (e.g., TnT Attacks!
Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems) and (e.g., Generating transferable adversarial examples against vision transformers)?\n\nIn Table 1, the performance improvement of this method seems to be only around 2% compared to the secondary defense methods under most settings. Furthermore, Jedi can support multitasking (such as object detection) and various attacks (like naturalistic adversarial patches), suggesting that this method may have more scenario limitations.\n\nSection 4.4 devotes a significant amount of space to discussing the values of hyperparameters. I am concerned that the impact of these hyperparameters may be too substantial. Further, do the hyperparameters need to be adjusted for different ViT architectures and datasets, and if so, how should this be approached?\n\nThe experiment does not specify the ratio of catastrophic attacks to non-catastrophic attacks, and it does not clearly differentiate which category each attack method belongs to.\n\nFor other suggestions or questions, please refer to weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper introduces an adaptive defense strategy that distinguishes between different types of adversarial attacks, providing tailored solutions, which successfully enhances robust accuracy while preserving clean accuracy, addressing a common trade-off in existing methods.\nThe authors provide extensive experimental results across multiple ViT models and attack approaches, demonstrating the framework's robustness.\nThe authors propose a lightweight adversarial patch detection method that doesn’t require auxiliary models or multiple queries, reducing computational costs."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces NeighborViT, an adaptive defense framework for Vision Transformers (ViTs) against adversarial patch attacks. Existing defense methods sacrifice clean accuracy or achieve suboptimal robustness. NeighborViT detects and categorizes different types of attacks, applying tailored defense mechanisms. The framework leverages information from neighboring patches to enhance robust accuracy without compromising clean accuracy. Experimental results demonstrate its effectiveness on various ViT models using the ImageNet dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The experiments focus on specific attack types. Including a broader range of attacks could enhance the robustness of the evaluation.\nSome technical aspects, such as the definition of similarity in Section 3.2, parameter cl in table 10, could be explained in greater detail. \nThe proposed strategy may require fine-tuning for different datasets or attack scenarios, potentially affecting generalizability.\nThe paper lacks discussions of the limitations of the proposed method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The authors should consider adaptively tuning hyperparameters to enhance the method’s generalization.\n\nAttack types are classified based on the location of adversarial patches, but could a more fine-grained classification further enhance detection and defense effectiveness?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The authors observe that different types of attacks require tailored defense approaches, classifying them as non-catastrophic and catastrophic based on the location of adversarial patches. This observation, validated by Table 1, demonstrates a logical approach that improves defense performance.\n2. The authors conduct extensive experiments, which show better performance compared to existing defenses." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates defense strategies against adversarial patch attacks aimed at Vision Transformers. The proposed method, NeighborViT, encompasses attack detection, classification of various attack types, and corresponding mitigation strategies. Specifically, the authors employ the Sobel operator with dynamic windows to pinpoint the locations of adversarial patches and use average predictions from the reconstruction of these patches to classify the type of attack. When adversarial patches are located in non-essential areas, they select a neighboring patch to fill the masked adversarial patches. In contrast, for essential areas, they reweight the attention weights for adversarial tokens.
They conduct experiments on several ViTs to demonstrate the superior performance of their method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The authors delve into adversarial defense within the field of ViTs. The proposed defense methods present challenges when applied to defending CNNs. This limitation restricts the technological contribution and general application of the paper.\n\n2. The proposed method includes three hyperparameters. While the authors present a wide range of values for these parameters, allowing the method to surpass other defenses, there remains some limitation, as shown in Table 6. The optimal value of $\\gamma$ varies with different attack patch sizes.\n\nThese two weaknesses pose challenges to the practical application of the proposed adaptive method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024an,\ntitle={An Adaptive Defense Against Adversarial Patch Attacks For Vision Transformers},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vOSwtXGSA2},\nnote={under review}\n}" }, "abstract": { "value": "Vision Transformers (ViTs) have become the prominent architecture for various computer vision tasks due to their superior ability to capture long-range dependencies through the self-attention mechanism. However, recent research indicates that ViTs are highly susceptible to carefully crafted adversarial patch attacks, presenting a significant challenge for practical deployment, particularly in security-critical applications. Existing approaches towards robust ViT frameworks often sacrifice clean accuracy and/or achieve suboptimal robustness, likely due to their uniform handling of diverse input samples.
In this paper, we present NeighborViT, a novel adaptive defense framework specifically designed to counter adversarial patch attacks for ViTs. NeighborViT stands out by detecting and categorizing different types of attacks on inputs and applying adaptive, tailored defense mechanisms for each type of attack. To realize effective attack detection, categorization, and mitigation, NeighborViT explores the information in neighbor patches of the target patch and strategically employs them for defense. Our experimental results on the ImageNet dataset using various state-of-the-art ViT models demonstrate that NeighborViT significantly enhances robust accuracy without compromising clean accuracy. Our code is available at https://anonymous.4open.science/r/NeighborViT-8255." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "vision transformer; adversarial patch attack; adaptive defense" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/307fe3e7d6dfdd4f3b04bb346f9d7563c3e145ee.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers.
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "An Adaptive Defense Against Adversarial Patch Attacks For Vision Transformers" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vOfDGYGVyj
Sparse Mamba: Reinforcing Controllability In Structural State Space Models
main
Active
Mamba;state space models;natural language processing
foundation or frontier models, including LLMs
1;3;3;3
5;5;3;5
1;1;3;2
1;2;2;1
2;1;3;1
2.5
4.5
1.75
1.5
1.75
-0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. Could you elaborate on why controllability and observability are beneficial properties for a linear SSM layer with input-varying dynamics in a neural network? Specifically, how could these properties improve the model's ability to learn and generalize?\n\n2. Given that the scan operation becomes more computationally expensive with non-diagonal matrices, how does your implementation handle this trade-off? Could you provide a complexity analysis comparing your approach to vanilla Mamba/Mamba2?\n\n3. The parameter reduction achieved is relatively small compared to the total parameter count. Could you explain why this reduction is meaningful and how it affects the model's practical performance?\n\n4. Could you provide statistical significance tests for the reported improvements in perplexity and training time? How consistent are these improvements across different random seeds and training runs?\n\n5. How does your approach ensure stability of the learned SSM, and what impact does the canonical form constraint have on the model's expressivity?"
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper makes an interesting connection between classical control theory concepts and modern state space models for machine learning, attempting to bring established theoretical frameworks to bear on neural architecture design.\n\n* The experimental evaluation is conducted across multiple datasets of varying sizes and domains, providing some evidence for the generalizability of the approach.\n* The reduction in parameter count while maintaining or improving performance is a potentially valuable contribution, as it could lead to more efficient models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Sparse Mamba (S-Mamba), a modification of the Mamba architecture that incorporates controllability and observability structures to the pre input-varying discretization of the SSM. The authors propose two variants: Sparse Controllable Mamba (SC-Mamba) and Sparse Observable Mamba (SO-Mamba), which modify the structure of the state space matrices $A$, $B$, $C$ of the underlying continuous-time system to enforce specific canonical forms. The authors claim improvements in perplexity (5%), training time (3%), and parameter count compared to vanilla Mamba, demonstrating these results across four datasets: CodeParrot, OpenWebText, ArXiv, and Cosmopedia." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Technical Rigor and Motivation:\n- The paper fails to justify why controllability and observability are desirable properties for an SSM layer in a neural network context. 
The authors state these properties make analysis easier, but don't explain why such analysis is necessary or beneficial for the learning task.\n- The claim of \"less complexity\" is repeated without proper theoretical or empirical justification.\n- There is no discussion of how BIBO (*bounded-input, bounded-output*) stability is maintained in the proposed architecture. This is well known to require explicit enforcement.\n\n2. Novelty and Literature Review:\n- The paper does not acknowledge or cite previous work on using companion canonical forms in SSMs [1, 2], making it difficult to assess the novelty of the contribution.\n- The motivation for making the system \"sparse\" is not well-explained, particularly given that the reduction in parameters (∼100K out of 64M) is relatively minor.\n\n3. Methodology and Results:\n- The paper doesn't discuss the computational implications of using non-diagonal state matrices with scan operations, which could significantly impact practical performance.\n- There is no analysis of the statistical significance of the reported improvements in perplexity and training time.\n- The experimental section lacks details about the inference procedure and comparison of different inference algorithms.\n\n4. Presentation:\n- The paper contains numerous issues with notation consistency (e.g., use of bold letters).\n- The writing quality needs significant improvement, with unclear explanations and imprecise language throughout.\n- Referenced materials are not properly used (e.g., where does the Krylov function appear in Krylov 1931?).\n\n[1] Zhang, Michael, et al. \"Effectively modeling time series with simple discrete state spaces.\" (2023).\n\n[2] Parnichkun, Rom N., et al. \"State-Free Inference of State-Space Models: The Transfer Function Approach.\" (2024).
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Where does the parameter reduction come from?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper does a good job outlining the proposed changes, where they come from, and the mathematical foundations of why these changes might be considered." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper aims to add controllability and observability from control theory into the Mamba SSM architecture with the hope of improving model efficiency. The model applies this to the state space matrices of the model and demonstrates these across several different corpora." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper's results are extremely modest. The improvements of 3% and 5% are within the margin of error of implementations, slight differences in training settings, etc., and confidence intervals are not presented, so it's not even clear this brings a benefit. The reduction in parameter count is also extremely small, 100k parameters out of 64M.
It's possible the controllability and observability changes have other benefits, but there are no experiments to demonstrate this. For example, if the observability makes things more explainable, that needs to be demonstrated qualitatively and quantitatively. Additionally, the choice of datasets for evaluation is somewhat non-standard, so it's unclear why these in particular were chosen; perhaps they were cherry-picked.\n\nThe paper needs more experimental results, more qualitative evaluation, and needs to demonstrate a stronger reason for this proposed change beyond the marginal speed improvements. As a final remark, I'm not even quite sure where the reduction in parameters comes from, as the original state matrix is sparse and also only has N parameter values, even though its shape is NxN." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "The entire method description is unfortunately so vague that I cannot evaluate the technical content of the submission."
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- Borrowing classical ideas from prior statistical or control work on state-based models is always a great direction to continue understanding and improving them.\n- The particular form of structured matrices used and the ideas of controllability and observability make sense." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a modification to the recent state space model (SSM) Mamba. Instead of the diagonal transition matrices of previous structured SSMs such as S4 and Mamba, this paper borrows ideas from classical control theory, in particular that of controllability and observability, to propose a different class of structured transition matrices $A$: more precisely, where it has the form of a companion matrix or transposed companion matrix." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Method**\n\nThe method details are not clear. For example, one crucial detail in Mamba is that the $B$, $C$, and $A$ matrices all depend on the input. In this method, I cannot figure out if $A$ still depends on the input (i.e. if the vector $(a_0, \\dots, a_n)$ of coefficients can vary per timestep of the input sequence). Additionally, no algorithm is provided to compute the model. The exact distinctions from Mamba(-2) should be made clearer.\n\n\n**Writing**\n- Background reads more like a survey; there are 4 full pages of background out of an 8 page paper\n- Numerous typos and formatting issues throughout\n- Main figure (Figure 1) is very hard to understand and there is little description\n\nAdditionally, SpaceTime [1] should be mentioned as they were the first to use companion matrices in structured SSMs.\n\n[1] Zhang et al.
\"Effectively Modeling Time Series with Simple Discrete State Spaces\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The statement in the abstract, \"However, current Mamba models lack reinforcement of controllability in state-space equations for computing the A, B, C, and D matrices at each time step, leading to increased complexity and computational costs,\" is not substantiated within the paper. It would be helpful if the authors provided further explanation or justification for this claim to avoid overstatement. Clarifying this point would strengthen the argument and align the abstract more closely with the content of the paper." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The paper provides a detailed introduction to HiPPO, LSSL, S4, and Mamba. However, it would be more appropriate to move this detailed background information to the appendix, as readers/reviewers are likely already familiar with these works. A single paragraph summarizing the evolution of state-space models would suffice in the main text. Additionally, these sections seem to have limited relevance to the core contribution (if any) of the paper. 
Shifting them to the appendix would allow for greater emphasis on the key ideas and novel aspects." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a study of Sparse Mamba but spends an excessive amount of space summarizing related works, such as HiPPO, LSSL, S4, and Mamba. The actual contribution of the paper seems limited to less than four pages, while the first four pages largely repeat existing material rather than introducing new insights. This raises concerns about the novelty and depth of the contribution.\n\nFurthermore, while the paper claims to focus on sparsity, it also discusses controllability and observability, which seems tangential to the central topic. It is unusual to assess the performance of a sparse Mamba model by showing improvements in perplexity. If perplexity has indeed improved, it is unlikely to be a result of the model’s sparsity, as reducing model space typically does not enhance predictive performance. This disconnect needs clarification, as it raises questions about the actual source of the performance gains." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The improvement demonstrated in the paper is not significant, and it does not seem to stem from the theoretical concepts related to controllability, reachability, or observability. In fact, if the model used is Mamba, the element-wise gating mechanism should already ensure controllability. Therefore, it remains unclear what specifically accounts for the performance improvement of Sparse Mamba over the vanilla Mamba. A deeper explanation or analysis of the source of this improvement would be necessary to clarify its contribution.\n2. This paper spends an excessive amount of space summarizing related works, such as HiPPO, LSSL, S4, and Mamba. 
The actual contribution of the paper seems limited to less than four pages, while the first four pages largely repeat existing material rather than introducing new insights. This raises concerns about the novelty and depth of the contribution. I would recommend condensing the background into a single paragraph summarizing the evolution of state-space models in the main text, while moving the detailed explanations to the appendix." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Sparse Mamba" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024sparse,\ntitle={Sparse Mamba: Reinforcing Controllability In Structural State Space Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vOfDGYGVyj},\nnote={under review}\n}" }, "abstract": { "value": "In this work, we introduce the concept of controllability and observability to the Mamba SSM's architecture in our Sparse-Mamba (S-Mamba) for natural language processing (NLP) applications. The structured state space model (SSM) development in recent studies, such as Mamba and Mamba2, outperformed and solved the computational inefficiency of transformers and large language models at small to medium scale. The Mamba SSMs architecture drops the need for attention layers or multilayer perception blocks in transformers. However, current Mamba models lack reinforcement of controllability in state-space equations for computing the $A$, $B$, $C$, and $D$ matrices at each time step, leading to increased complexity and computational costs. In this paper, we demonstrate a reduction of parameters in comparison to the first published Mamba and Mamba2. We showcase an improvement in perplexity by 5\\% and a decrease in training time by 3\\% after reinforcing controllability and observability on the original Mamba architecture in our proposed S-Mamba. 
The controllable $n \\times n$ state matrix $A$ is sparse and it has only $n$ free parameters. Our novel approach will ensure a controllable system which will be the gate key for Mamba3." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Mamba", "state space models", "natural language processing" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/56eae743dc6d789a0b97e323821a349c3bf415ff.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Sparse Mamba: Reinforcing Controllability In Structural State Space Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vPOMTkmSiu
Scaling Laws for Downstream Task Performance in Machine Translation
main
Active
scaling laws;transfer learning;machine translation;large language models;data valuation
foundation or frontier models, including LLMs
3;6;8;8;8
4;4;4;3;3
2;3;4;3;3
2;3;3;3;2
1;2;3;4;3
6.6
3.6
3
2.6
2.6
-0.583333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper conducts extensive experiments and present some interesting findings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes scaling laws for pre-training data and downstream data in the context of machine translation. The authors demonstrate that when the pre-training data is sufficiently aligned with the downstream task, both the downstream cross-entropy (CE) loss and translation quality adhere to the proposed scaling laws. However, if the pre-training data is not well-aligned, the downstream CE may still align with the scaling laws, but other metrics of translation quality may exhibit misalignment." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. \"Inaccurate\" statement: The field of machine translation is increasingly adopting decoder-only architectures, whereas this paper focuses exclusively on encoder-decoder models. This focus leads to findings that conflict with those from SoTA decoder-only translation models. 
For example, in lines 355-359, the authors claim that \"there is no need to pretrain the models when the fine-tuning data is large enough.\" This assertion totally contradicts the findings in [1], which states that \"you only need a large amount of data for pre-training and a small number of data for continued fine-tuning.\" The key difference is that [1] employs a decoder-only model but it achieves top performance among other SoTA translation models. I am inclined to agree with the latter explanation because your training and test data are from the same domain (WMT). Training on WMT data usually yields better results on the WMT test set, which might give the illusion that using only fine-tuning data is sufficient. However, this approach may lead to poor generalization abilities. A large training dataset may have less impact on in-domain test data, but it's essential to ensure that the model generalizes well to out-of-domain data. Therefore, the authors should also investigate the model's performance on other out-of-domain test datasets to provide a more comprehensive evaluation before making this statement.\n\n2. Out-of-date metric: Most of the findings in this paper rely on BLEU scores. However, BLEU is increasingly regarded as an inaccurate metric that does not align well with human judgments, especially when compared to other metrics like COMET. The inclusion of ROUGE also seems weird, as ROUGE is primarily used for summarization evaluation rather than translation. While it might be applicable in certain cases, it is not a mainstream choice for assessing translation quality. The authors should consider using more suitable metrics, such as BLEURT, to provide a more accurate evaluation of their models.\n\n3. Unclear writing: While this is a minor issue, the paper's writing feels disjointed and contains redundant information. 
There are several instances where the same points are reiterated, such as \"when the distribution is well-aligned, both metrics and CE align with the scaling laws.\" Moreover, the frequent references to figures that are located several pages away disrupt the reading flow and make it difficult to follow the arguments being presented.\n\n4. Lack of definition: I struggled to understand what the authors mean by the \"distribution of pre-training and downstream task.\" It wasn't until midway through the paper that I realized this likely refers to the amount of monolingual target language data included in the pre-training dataset—where more target language data equates to a more \"aligned distribution.\" One question here: if the authors were to conduct experiments on another low-resource language like Icelandic, where the fine-tuning data consists of a small amount of news content but the available pre-training monolingual data is primarily from games, the \"distribution/alignment\" becomes ambiguous. Alternatively, one might employ cross-lingual learning using a high-resource language like German, for which abundant news data is available. In this scenario, which dataset constitutes a \"more aligned distribution\"—the Icelandic data from a different domain or the German data from the same domain? I believe that the term \"distribution\" is too abstract and makes the paper challenging to understand. Providing a more precise definition or elaborating on this concept would greatly enhance the clarity of the paper.\n\n5. A narrow scope in high-resource languages: The focus on high-resource languages is not particularly compelling, as SoTA models already perform exceptionally well in these languages, regardless of scaling laws. People can simply amass large amounts of data for training to achieve excellent results. 
The paper's emphasis on utilizing billions of tokens for pre-training and millions of parallel sentences for fine-tuning is not practical for many mid- or low-resource languages, which lack such data. A more interesting and valuable direction would be to explore scaling laws that examine how high-resource languages can be used to support low-resource languages. Investigating how data from other languages can enhance translation quality in languages with limited resources would address a critical need in the field of MT and contribute to more inclusive and effective translation models." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "see above" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper seeks empirical answers to important questions -- scaling laws (even rough ones) for downstream transfer can provide useful guidance: When should one seek out different pretraining data vs more pretraining data? How much finetuning data is sufficient for good downstream performance?\n\nThe paper focuses specifically on machine translation -- an important NLP task that has not been the main focus of prior transfer learning scaling law work. This is a strength. 
The application focus lets the paper go into more depth in empirical analysis and leads to clearer takeaways. \n\nThe paper concretely shows that downstream cross-entropy can be misleading. The paper finds that downstream cross-entropy monotonically improves with pretrain size, even when other machine translation metrics like BLEU do not. This is a useful result for future researchers." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Past work on scaling laws has mostly focused on pretraining loss rather than downstream transfer learning task performance. This paper attempts to establish scaling laws for transfer learning for machine translation. The paper specifically investigates how pretraining data size affects downstream translation performance, measured in terms of downstream cross-entropy loss as well as in terms of downstream machine translation metrics like BLEU. Results show that the size of the finetuning data, as well as the similarity between the pretraining and finetuning datasets, has a large impact on downstream results and scaling behavior. Most interestingly, the results indicate that when pretrain and finetune are well enough aligned, there is a clear log scaling law for pretraining size on downstream machine translation metric performance. However, when pretrain and finetune are dissimilar, more pretraining data can sometimes even hurt performance on downstream machine translation metrics. This stands in contrast with downstream cross-entropy, which improves monotonically with pretrain size regardless of pretrain/finetune mismatch." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Aspects of the presentation could be improved. Specifically, as a reader, I found it confusing that the paper essentially starts with a results / takeaways section before discussing experimental details. 
Then an experimental details section comes next, followed by a results / takeaways section. I think a more traditional ordering could be more effective here -- the intro could be used to discuss high-level takeaways up front, leaving the rest of the detailed results to be described after experimental setup is clear. Without having a clear understanding of experimental setup, I was left wondering in many cases how to interpret the early results discussion. \n\nThe one experimental weakness that stood out to me: It's not entirely clear how \"alignment\" between pretrain and finetune is being defined. For the most part, it seems to mean in the context of this paper whether and to what extent the datasets share languages. This could be further clarified and formalized -- but, further, more nuanced measures of alignment could easily be reported. These might help clarify experimental takeaways having to do with alignment, which represent some of the more interesting results in this paper. \n\nMinor:\n\n-Discussion of results on other tasks is odd given that the paper is focused on MT.\n\n-I couldn't follow remark 2 -- what was the main takeaway / hypothesis? Consider rephrasing this section for improved clarity. \n\n-\"3.4 A Guide for ...\" section is kind of confusing -- especially part 2." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "To what extent do you expect your findings to carry over to decoder-only models more commonly found in current LLMs?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- well written paper\n- addresses a problem that is relevant for a large community\n- thourough empirical analysis" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper analyses to what extent scaling laws apply if the pre-training and downstream data are well or less well aligned. They find that iff the data is less aligned, task-specific metrics can fluctuate when increasing the pre-training data, while increasing more monotonically if the data is well aligned. Their findings are mostly based on machine translation experiments but also generalize to more general tasks such as the tasks in the SuperGlue benchmark set." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Although the concept of alignement is central to this work and many times the authors refer to \"degree of\" or \"not sufficient\" alignment, no definition of alignment is given. I understand that this is not trivial but I would appreciate at least an attempt to provide a---ideally formal---definition that could also be used to quantify alignment. Currently, it seems the only way is to determine the degree in hindsight (see lines 227--228) by failing to fit the scaling law." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "The alignment between the pre-training and downstream task is crucial to understanding the scaling law. I am not entirely clear on what “align” means in this context. Could you please explain in detail the meaning of “align” in this work?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper's main contribution is its in-depth investigation of the impact of pre-training data on machine translation tasks and the proposal of a new logarithmic law to describe the change in downstream performance as the size of the pre-training data increases. This research is helpful to understand the performance of large-scale language models in specific downstream tasks and provides a new method for evaluating the value of pre-training data. From the perspectives of innovation and importance, the paper is highly valuable. Additionally, the quality and clarity of the paper are good, with detailed descriptions of experimental design and result analysis, and abundant charts and empirical evidence to support conclusions." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates the scaling behavior of downstream performance metrics, including translation scores and cross-entropy loss, as a function of the size of the pretraining dataset in machine translation tasks. The authors conduct experiments using T5 models trained on different subsets of the Multilingual C4 dataset and fine-tuned on various translation tasks. They find that when the pretraining and downstream tasks have well-aligned distributions, both translation scores and cross-entropy loss improve monotonically with more pretraining data. However, when the distributions are not well-aligned, translation scores can exhibit non-monotonic behavior, while cross-entropy loss still improves monotonically. The authors propose a practical guide for evaluating the value of pretraining data for translation tasks based on downstream translation scores and demonstrate the effectiveness of their approach on multiple natural language processing tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "One potential weakness of the paper is it is focusing solely on downstream performance changes in machine translation tasks. This limits the comprehensive evaluation of the applicability of the logarithmic law to a wider range of natural language processing tasks. \nFor NLP tasks that use precision and recall as metrics, can these be effectively aligned with the pre-training procedure? Additionally, can cross-entropy be generalized to observe the scaling law?\n\nIt is recommended to include a reference to the inference scaling law from OpenAI’s o1 model and to discuss the scaling law within a broader framework." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- This is in not a critique but I'm curious about your thoughts. You study pre-training with up to 2 languages. How do you think your findings translate to the more realistic setting of pre-training on a large number of languages, with severe misalignments in data size between the languages?\n\n- Similarly, do you have any findings related to language similarity?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- Exceptionally well-grounded in literature, with relevant prior work effectively woven throughout the paper's narrative.\n- Strong experimental methodology with fully controlled experiments that isolate different variables' effects. In contrast to a lot of recent LLM works, the experiments are fully controlled which I appreciate a lot. The paper demonstrates comprehensive validation across multiple metrics and scenarios, with clear empirical support for the proposed log-law.\n- The writing is clear and the structure effective. 
Clear and systematic presentation of results with thorough analysis.\n- The proposed log-law for translation quality metrics is well-motivated and comprehensively validated.\n- Makes important observations challenging common assumptions about cross-entropy and translation quality metrics." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates scaling laws for downstream task performance in machine translation, specifically examining how pretraining dataset size affects translation quality after finetuning. The authors propose a novel log-law to describe the scaling behavior of translation quality metrics (BLEU, ROUGE, COMET) and demonstrate that the relationship between pretraining data size and downstream performance heavily depends on distribution alignment and finetuning dataset size. A key finding is that while cross-entropy consistently follows power-law scaling, translation quality metrics may deviate from expected scaling patterns when pretraining and downstream distributions are not well-aligned." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Limited to pairwise language combinations in pretraining - would benefit from discussion about generalization to realistic multilingual pretraining scenarios.\n\n- It would be a great addition to have a short discussion that connects these findings to recent successes in LLM-based MT. For instance, some works like https://aclanthology.org/2024.tacl-1.32/ and https://aclanthology.org/2024.acl-long.336/ fine-tuned directly on parallel data, but these LLMs are heavily English-centric and therefore there is a low distribution alignment. 
More recent work like ALMA https://openreview.net/forum?id=farT6XXntP and TowerLLM https://openreview.net/forum?id=EHPns3hVkj#discussion circumvent this issue by continuing pre-training on multilingual data before fine-tuning on parallel data.\n\n- Minor issues\n - Lines 142-143: \"Tran et al., 2019\" is repeated 3 times\n - Lines 264-265: it'd be nice to include fine-tuning dataset sizes here" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We study the scaling behavior of the downstream translation metrics as the pretraining data grows and propose scaling laws for COMET and BLEU scores, and downstream cross-entropy." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024scaling,\ntitle={Scaling Laws for Downstream Task Performance in Machine Translation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vPOMTkmSiu},\nnote={under review}\n}" }, "abstract": { "value": "Scaling laws provide important insights that can guide the design of large language models (LLMs). Existing work has primarily focused on studying scaling laws for pretraining (upstream) loss. However, in transfer learning settings, in which LLMs are pretrained on an unsupervised dataset and then finetuned on a downstream task, we often also care about the downstream performance. In this work, we study the scaling behavior in a transfer learning setting, where LLMs are finetuned for machine translation tasks. Specifically, we investigate how the choice of the \\emph{pretraining} data and its size affect downstream performance (translation quality) as judged by: downstream cross-entropy and translation quality metrics such as BLEU and COMET scores. Our experiments indicate that the size of the finetuning dataset and the distribution alignment between the pretraining and downstream data significantly influence the scaling behavior. 
With sufficient alignment, both downstream cross-entropy and translation quality scores improve monotonically with more pretraining data. In such cases, we show that it is possible to predict the downstream translation quality metrics with good accuracy using a log-law. However, there are cases where moderate misalignment causes the downstream translation scores to fluctuate or get worse with more pretraining, whereas downstream cross-entropy monotonically improves. By analyzing these, we provide new practical insights for choosing appropriate pretraining data." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "scaling laws", "transfer learning", "machine translation", "large language models", "data valuation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9448dad00185335bfa9d846dc8514bbca030e99a.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. 
To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Scaling Laws for Downstream Task Performance in Machine Translation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vQ0zFYJaMo
Your Task May Vary: A Systematic Understanding of Alignment and Safety Degradation when Fine-tuning LLMs
main
Active
safety alignment;task similarity;guardrail durability
alignment, fairness, safety, privacy, and societal considerations
3;5;5
3;4;4
2;3;2
2;2;3
3;3;3
4.333333
3.666667
2.333333
2.333333
3
1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Could the authors consider using data contamination detection procedures in order to reduce the conjecturality about section 3.1?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Originality: the paper studies the impact of dataset similarity across multiple stages of training as a possible aspect that could impact the effectiveness of safety guardrails in fine-tuning. I don't know other papers focusing on this aspect.\n- Clarity: the paper is clear and well written, with nice diagrams that illustrate the concepts easily. \n- Quality: several empirical aspects limit the scope and validity of the analysis\n- Significance: the paper is extremely relevant for the community. \n\n- The proposed procedure is interesting and could potentially be very useful if the authors demonstrated the generalisability of its assumptions (see weaknesses)" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors study the relationships between data diversity across different training stages of LLMs and its impact on the ability to weaken safety guardrails. 
The authors perform experiments on the Llama 3 architecture and conclude that 1) keeping the training data private may reduce the ability of an attacker to jailbreak the models, 2) higher diversity may also reduce the attacker's ability to jailbreak the models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The conclusion that better safety can be achieved by obscurity about the training data, although very practical, is not a typical recommendation the security community would make. \n- The generalizability of the paper's conclusions is unclear, since the work focuses on very restricted model architectures. The claims need further empirical evidence; it would be good if the authors could include this in the rebuttal, both to study the phenomenon for different architectures and for different model sizes. \n- It is unclear if the conclusions drawn depend on the choice of the jailbreaks. Indeed, an extensive suite of multiple jailbreak benchmarks should be used in order to draw solid conclusions. Furthermore, one would also need to present some adaptive forms of jailbreak generation that do account for the differences in training between models. While even this would not guarantee the absolute generalizability of the conclusions (which would remain a core limitation), it would make a much stronger empirical point. \n\nI am happy to increase my score if the points above are well addressed with experiments and evidence."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In table 1, the clustering algorithm is k-means. As we know k-means contains some randomness, are the results in table 1 the average of several runs?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper provides a new insight about the fragility of safety guardrails, which stems from the high similarity between upstream and downstream tasks.\n2. Instead of using constraint loss functions, this paper proposes a new direction: data-based approach for fine-tuning attacks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates how to make LLMs' safety guardrails more durable after downstream fine-tuning. The authors find that safety guardrails are more fragile when the upstream and downstream datasets are similar but more durable when they are dissimilar and diverse." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The real-world applicability of the findings is questionable. In the experiments, the authors have access to the complete downstream data, yet the safety improvement is modest. 
According to Table 2, the GPT score is about 0.1 - 0.4 lower than random selection, and the GPT ASR is around 5% lower. This gap is likely to be smaller in real-world scenarios, where downstream data from attackers are not accessible.\n2. The paper concludes that low-similarity datasets are more diverse than high-similarity ones, which is intuitively understandable and not a particularly novel or interesting insight. Moreover, the paper does not provide new methods or insights on how to obtain a more diverse safety-alignment dataset.\n3. The authors should include constraint loss based baselines in their experiments to better demonstrate the effectiveness of data similarity in defending against fine-tuning attacks." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "n/a" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In Section 4.2, the paper shows that low-similarity data is more diverse than high-similarity data. Why is this always the case? The diversity only concerns the upstream data, while the similarity measure involves both the upstream and downstream datasets." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper shows that a malicious actor can exploit the similarity of upstream and downstream datasets to weaken the safety guardrails. Hence, it is crucial to protect the privacy and improve the diversity of the upstream dataset. This is a meaningful observation with important real-world implications for improving LLM safety." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies how upstream datasets affect the durability of safety guardrails during downstream fine-tuning in large language models. The authors conjecture that safety guardrails are more durable when the upstream dataset is more diverse and less similar than the downstream dataset, which is verified using the LLAMA2-7B-CHAT model under various datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The observation regarding how improving the privacy and diversity of the upstream dataset can help with safety is somewhat expected and has already been recognized in the literature to a great extent. The metrics of similarity and diversity used in the paper are both adapted from recent work. For example, it was observed in He et al. (2024) that selecting benign examples that are most similar to known harmful data can improve the attack success rate significantly. The idea of protecting the privacy of training data to hinder adversarial manipulation is also well known in security and adversarial machine learning communities. \n\nThe evaluation is far from being comprehensive. 
In addition to the several limitations already mentioned in the paper, including using a single LLM architecture of fixed size, one important weakness is that when evaluating the impact of similarity between upstream and downstream tasks, only the cluster of list-format data is considered. Further, the paper does not consider any defense against downstream fine-tuning attacks. It is unclear if similar results or new insights can be obtained when considering other clusters of similar data or downstream tasks with some defense applied." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024your,\ntitle={Your Task May Vary: A Systematic Understanding of Alignment and Safety Degradation when Fine-tuning {LLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vQ0zFYJaMo},\nnote={under review}\n}" }, "abstract": { "value": "Through supervised fine-tuning or reinforcement learning with human feedback, large language models can achieve a certain level of safety alignment during instruction fine-tuning. However, these *safety guardrails* are often fragile, as models can easily generate harmful content after downstream fine-tuning. Although various methods have been proposed to mitigate this, our paper shifts focus to the durability of safety guardrails, beginning with their formation in the upstream alignment stages. The central question we explore is: *Can we construct more durable safety guardrails for specific downstream tasks to ensure models remain safe after fine-tuning?* Our experiments demonstrate that the durability of these safety guardrails is closely tied to the similarity between upstream and downstream datasets: higher similarity results in more fragile guardrails after fine-tuning, whereas lower similarity results in more durable guardrails. 
This finding highlights the importance of dataset diversity and privacy in upstream alignment data. Ensuring the diversity of the alignment dataset, which allows downstream datasets to be less similar to it, enhances the guardrail durability for fine-tuning. Maintaining its privacy prevents the exposure of alignment data that adversaries could exploit. Thus, we advocate for a dual strategy: prioritizing both the privacy and diversity of upstream alignment datasets to fortify safety guardrails against potential threats, ensuring long-term model robustness in real-world applications." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "safety alignment", "task similarity", "guardrail durability" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/4b6b4a7eee3fe6210dcfbe9a0b66b89b2dc5aa1d.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. 
To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Your Task May Vary: A Systematic Understanding of Alignment and Safety Degradation when Fine-tuning LLMs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vQ1y086Kn2
UnrealCV Zoo: Enriching Photo-realistic Virtual Worlds for Embodied AI Agents
main
Active
Virtual worlds; Embodied AI; Embodied Tracking and Navigation; Visual RL;
datasets and benchmarks
3;5;5;5
4;2;4;4
2;2;3;3
3;2;2;2
1;3;3;3
4.5
3.5
2.5
2.25
2.5
-0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How hard are the given tasks for current RL algorithms? With some limited compute, is it possible for RL algorithms to achieve the same performance as a human expert today? These environments can drive downstream RL innovation by just simply being impossible to become experts in today.\n\n2. How long does it take for an RL agent to become an expert in these environments? Another interesting problem to explore in RL is becoming an expert fast. If it takes unreasonably long to become an expert in an environment, then these environments become interesting to share with the community." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Originality: The paper demonstrates originality by contributing large scale diverse environments that can't be found for training RL agents today. Performance of RL agents on such environments from different playable entities hasn't been studied in detail before and having such environments would allow for that.\nQuality: Inclusion of images, algorithmic benchmarks, and comparison tables make the paper comprehensive \nClarity: Paper is well written and clear. 
Details make it clear that the environment will be easy for the community to leverage for training and benchmarking.\nSignificance: With the integration of OpenAI Gym and ability to use this without expertise in Unreal, this work could allow for significant downstream RL benchmarking and explorations that could lead to interesting insights." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces UnrealZoo, a collection of 100 3D environments based on top of the UnrealCV engine. This collection of environments is novel since it spans a variety of scales, agent types, has a rich navigation system and is multi agent." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the types of environments included in UnrealZoo are diverse, there is little diversity in the types of interactions and actions the agents can have in these environments. This limits how complicated these environments can be and how much novelty these environments will drive in terms of RL algorithms explored for learning and becoming experts. \n\nIt is a little hard to understand what the \"ceiling\" for each task is in terms of performance. How hard are the given tasks for current RL algorithms? How long does it take for them to learn and become experts? \n\nInclusion of results for only DowntownWest feels lacking. Would like to see what performance looks like for other environments too." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The visuals from the proposed simulator look great.\n\nThere are a variety of tasks and environments. Environments also have a wide variety of scales.\n\nThe authors improved the performance of the rendering pipeline.\n\nThe environment supports multiple agents." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes UnrealZoo, a photorealistic and large-scale environment for embodied AI agents. In UnrealZoo, agents can perform much more complicated actions than just traditional navigation, such as jumping and climbing. Further, agents can control vehicles, humanoids, animals, etc., allowing experimentation with different embodiments.\n\nThe authors propose and instantiate various tasks in their proposed simulator. They evaluate both VLMs and RL-trained agents on a subset of their proposed tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My biggest concern with this paper is that the proposed tasks and simulator lack a defined direction.\n\nThere has been a considerable amount of interest centered around indoor navigation with the goal of sim2real transfer. Indoor navigation has been the subject of focus because real robots exist that can serve as deployment targets, albeit with far from perfect hardware. 
The reviewer is unaware of any existing hardware platforms that would be sensible deployment targets.\n\nThere is also considerable interest in environments to evaluate new reinforcement learning methods and algorithms. While there are multiple criteria for these environments, one key one is speed -- the environment itself must be very performant as the ability to quickly iterate on new ideas is key. UnrealZoo is unfortunately very slow by these standards.\n\nWhile I do find the proposed simulator, tasks, and environments interesting, I am concerned about the value of the contribution." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- **Q1:** The paper includes experiments on the cross-embodiment generalization capabilities of some offline RL, however, it does not motivate cross-embodiment generalization per se, which I find non-trivial. What is the motivation behind cross-embodiment? Why is cross-embodiment generalization an interesting capability for embodied agents? Is there any practical or real-life scenario where cross-embodiment generalization is relevant? \n\n- **Q2:** What happens when the agent's implementation can not handle the control frequency specified by the environment? Are the actions repeated or no action is taken (something like no-op in some RL environments)? Is it possible to freeze the environment to wait for the action of the agent?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper introduces realistic, highly complex open-world scenarios that resemble real-life scenarios embodied agents would face in the real world. UnrealZoo fills the gap between open-world environments that are far from the complexity of real-life scenarios (e.g., MineDojo, NetHack Learning Environment, or Craftax) and realistic environments that are usually far from the vast scenarios typically encountered in real life (e.g., \nThreeDWorld or Habitat 3).\n\n- UnrealZoo provides 100 scenes of very different natures and many playable entities (humans, animals, cars, robots, etc.). Although realistic virtual environments exist for embodied AI research, these often focus on some specific domain (e.g., CARLA and autonomous driving). The variety of the UnrealZoo environments makes the proposed framework a very versatile tool for embodied AI research.\n\n- Although tools exist for creating environments in the Unreal game engine and libraries for computer vision (UnrealCV), the authors modify UnrealCV to suit the needs of embodied agent research. Moreover, they make some modifications to the agent-environment communication and the rendering pipeline improving the performance (frames per second) of the environments significantly. Finally, the authors provide an easy-to-use but versatile API based on OpenAI Gym." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces UnrealZoo, a collection of photorealistic environments based on the Unreal game engine. These environments serve as a platform for training and analyzing embodied agents in highly realistic virtual environments. 
Moreover, unlike previous works that limit to small scenarios (e.g., a kitchen or a room), UnrealZoo provides vast virtual environments with carefully designed assets that resemble the challenges that agents operating in the real world would face. UnrealZoo is based on UnrealCV, which authors have modified to suit the needs of embodied agent research, improving its usability and performance (FPS). Experiments demonstrate the usability of UnrealZoo for embodied agent research, showcasing applications such as visual navigation, social tracking, and more. Results highlight the importance of diversity in the training environments for offline RL methods, how these fail to generalize across embodiments, and show the limitations of VLM models in highly complex realistic environments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Although I think that the main contribution of this work (UnrealZoo) could be of great relevance to the embodied AI field, I have two major concerns that I strongly believe should be addressed before:\n\n**Concern 1:** In lines 185-187 authors state that the environments are sourced (and paid) from the Unreal Engine Marketplace (now renamed as Fab, see https://www.fab.com/), and in lines 236-238 they mention: *\"[...] we will package the projects and release binaries for the community.\"*. However, the standard license of Fab states that (verbatim from section 5.a of https://www.fab.com/eula): *\"Under a Standard License, you may not Distribute Content on a standalone basis to third parties [...]\"*. Please **make sure** that distributing the binaries as mentioned in the paper is legal under the terms in which the assets were purchased. \n\nMoreover, if the binary distribution of the environments is legal, the fact that this has to be shipped in a binary package greatly limits the open-source nature of the project. 
For example, if a contributor or developer wants to modify existing UnrealZoo environments, they would be greatly limited by the binary format of the Unreal environment. I'm open to discussion and willing to read the responses of the authors in this regard. \n\n**Concern 2:** I strongly believe that the overall presentation and writing quality of the paper should be improved. Examples:\n\n- Lines 123 to 142 discuss previous literature on virtual environments. This text uses the \"virtual environments\" term which is very generic, but only discusses environments based on realistic simulators. Please modify the text to explicitly refer to **only** realistic simulator-based environments, or include the vast literature on environments that don't simulate the real world (e.g., NetHack, MineDojo, MineRL, Craftax, Atari, VizDoom,...). \n\n- Missing details in the experimentation (Section 4). The authors do not provide any detail (or reference) on the specific methods used for the experiments. For example, the online and offline RL methods are unknown, there is no mention of the NN architectures employed. Moreover, encourage the authors to include the training curves of the methods (at least in the appendix). Currently, there is no evidence of the level of convergence of the methods. Furthermore, there is no information on how the human benchmark was collected: how many humans have been used? Were the humans informed on the task to solve or only had access to reward values? Were the humans experienced in video games? \n\n- In L402 authors define the \"Average Accumulated Reward\" metric. I believe the authors refer to the average episodic return typically employed in RL. If this is the case, I think that common terms are much preferable instead of introducing new terminology.\n\n- Section in the appendix should be listed with letters, not with numbers. For example, the first section of the appendix should not be \"7 Data\", but \"A Data\". 
I believe this could be fixed by including the \"\\appendix\" command in LaTeX. Moreover, the appendix sections include many of the minor issues from the main text (some are pointed out below). Please consider also proofreading the appendix sections. \n\n- The references section should be carefully checked and improved. These are some of the issues, but please check all the references:\n + Some papers referenced as preprints have been published in conferences or journals. For example, the CARLA paper was published in CoRL 2017 but is listed as an arXiv preprint.\n + Incorrect capitalization in many titles. Example: CARLA in Dosovitskiy et al. 2017 should be all uppercase. \n + Some names have been abbreviated, for example: in Gaidon et al. 2016. The same reference is missing the full name of the conference. \n + Gupta et al. 2017 and many other references have inconsistent use of capitalization.\n + Some references include the acronym of the conference and others don't.\n + Inconsistent format of the proceedings name, for example: some include the year, some don't.\n + Some references are included as @article when they should be @inproceedings. Example: Yuan et al. 2020. \n\n**Minor issues:**\n\nI found many minor writing and presentation issues in the paper; in the following, I list some of them. The issues are individually minor, but many minor issues add up to a major issue. Please consider an exhaustive revision of the full paper. \n\n- When listing the contributions (last part of the intro) the numbers jump from 1) to 3), missing 2).\n- The caption of Table 1 is missing the final dot. Moreover, the employed icons should be explained in detail somewhere in the main text or the appendix.\n- L197: \"[...]Project Website [...]\" should be lowercase.\n- L309: \"To be user-friendly [...]\"; L311: \"In this way, the beginner can easily use and customize [...]\", reformulate to maintain formalism. 
\n- L349: \"visual Navigation tasks [...]\" fix capitalization.\n- L350 and L351 have missing whitespace before and after parenthesis.\n- L368: \"In Table. 4.1\". Extra dot after Table and there's no Table 4.1; I believe the authors refer to Table 3.\n- Table 3 does not define what the numbers on the table represent. I believe these are EL and SR, but these should be explicitly mentioned.\n- L413: missing whitespace before parenthesis.\n- L414: extra dot inserted after \"Figure\".\n- L428: \"[...] are provided in the appendix.\", please specify the appendix section.\n- L431: extra dot and whitespace inserted after \"Figure\". \n\nI want to emphasize that I believe that the contribution of the work is relevant to the field; thus, I'm willing to update my score if the authors address the mentioned concerns and issues." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "There may be a copyright problem with using scenes purchased from the marketplace." }, "flag_for_ethics_review": { "value": [ "Yes, Legal compliance (e.g., GDPR, copyright, terms of use)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- About the choice of base engine, is UE the only engine that can render photo-realistic images?\n- There are a lot of agent bodies mentioned in Table 1. Are they driven by pre-defined policies? What are their action spaces?\n- Cannot find Table 3.3 in line 308 and Table 4.1 in line 368. `GPT4-o` in table 5 should be `GPT-4o`.\n- Why does GPT-4o perform so poorly? Did you do an error breakdown analysis? 
Can you use other information like position or third-person view to increase the success rate?\n- In the social tracking experiment, as the training set grows, the growth of SR is intuitive. Can you conduct the experiment with same-scale training data (compared to the less diverse data) to give a fair evaluation?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The environment is photo-realistic. \n- Scene quality is good. Entities are diverse. Simulation efficiency is improved.\n- There are a lot of experiments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a photo-realistic simulation environment for embodied training and a scene dataset of 100 diverse scenes.\nThe simulator is based on UnrealCV, develops interfaces for RL training, and supports multi-agent settings.\nThe simulator highlights its simulation speed and the diversity of its scenes and entities. Necessary experiments are conducted to show its usability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- There are only 100 scenes and they are not scalable. \n- There is no comparison with more recent related works like [1][2].\n- The motivation is not clear in the introduction. What are the disadvantages of previous simulators? How does UNREALZOO address them?\n- The interaction between agents and the world is not discussed, which is important for embodied training.\n- There is no language-modal input or communication among agents. The multi-agent feature is not highlighted.\n\n\n[1] Cheng, Zhili et al. “LEGENT: Open Platform for Embodied Agents.” ArXiv abs/2404.18243 (2024): n. pag.\n\n[2] Yang, J., Ding, R., Brown, E., Qi, X., & Xie, S. (2024). V-irl: Grounding virtual intelligence in real life. arXiv preprint arXiv:2402.03310." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "A collection of photo-realistic 3D environments for benchmarking embodied AI agents." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024unrealcv,\ntitle={Unreal{CV} Zoo: Enriching Photo-realistic Virtual Worlds for Embodied {AI} Agents},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vQ1y086Kn2},\nnote={under review}\n}" }, "abstract": { "value": "The embodied artificial intelligence agents should be capable of sensing, reasoning, planning, and acting in complex open worlds, which are unstructured, high-dynamic, and uncertain. To apply agents in the real world, the realism of the simulated worlds is important for training and evaluating the built agents. This paper introduces UnrealZoo, a rich collection of photo-realistic 3D environments that mimic the complexity and variability of the real world based on Unreal Engine. For embodied AI, we provide a diverse array of playable entities in the environments and a suite of tools, based on UnrealCV, for data collection, reinforcement learning, and evaluation. In the experiments, we benchmark the agent on visual navigation and tracking, two fundamental tasks for embodied vision agents, in complex open worlds. The results provide valuable insights into the strengths of enriching the diversity of the training environments and the challenges to current embodied vision agents in the open worlds, e.g., the latency in the closed-loop control to interact with the dynamic objects, reason the accordance of the spatial structure in the complex scenes." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Virtual worlds; Embodied AI; Embodied Tracking and Navigation; Visual RL;" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9247c3d195e6669207405eceecc1e1b8531a7b3a.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "UnrealCV Zoo: Enriching Photo-realistic Virtual Worlds for Embodied AI Agents" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vQFw9ryKyK
ImagineNav: Prompting Vision-Language Models as Embodied Navigator through Scene Imagination
main
Active
Robotics;Visual Navigation;Vision-Language Model;Scene Imagination
applications to robotics, autonomy, planning
3;5;5;6
4;4;4;3
2;3;3;3
2;3;2;3
2;3;2;2
4.75
3.75
2.75
2.5
2.25
-0.662266
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Some details are very unclear, especially on the novel view synthesis (i.e. future view imagination). What data is used to train the diffusion model? Is it in or out of distribution for indoor navigation? How about resolution and the speed? \n- More qualitative visualizations of the environments, imagined views, waypoint distribution would be nice.\n- The structure of the VLM analysis output in Fig. 3 and Fig. 4 are inconsistent. Which one is used in evaluation?\n- Why is the success rate lower with NVS model added in Table. 2 (row 3 vs 5)? More explanation is needed.\n- How does this method compare with end-to-end approaches like SPOC [1]?\n\n[1] Ehsani, Kiana, et al. \"SPOC: Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The system design significantly reduces complexity of an open-world object-goal navigation pipeline, and better integrates sensor observation with large models." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes \"ImagineNav\", a mapless navigation framework for robots using vision-language models (VLMs) to perform object search in open environments. It replaces traditional perception + mapping and text-based LLM-based planning with a pipeline where future scene views are \"imagined\" using novel view synthesis and analyzed by a VLM to choose optimal next waypoint. The candidate waypoints are generated by the Where2Imagine module, which is learned from human-like navigation patterns. Tested on benchmarks like HM3D and HSSD, ImagineNav significantly outperformed baselines in terms of success rate and success rate weighted by path length." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are 3 fundamental limitations of this approach\n- First, there is no way to guarantee that the diffusion model used to synthesize the novel views understands object permanence. How often does it hallucinate non-existing object or leave out objects that should be there? How does the rest of the pipeline deal with this?\n- There is no way to ensure the waypoints generated by Where2Imagine is reachable, especially in out-of-distribution scenarios. The method still needs sine kind of local map for low-level path planning. Some tests on real robot or new environments will erase this concern.\n- The method does not deal with the ambiguity of object reference. For example, navigate to the chair (example used in Fig. 2) is very ambiguous as there might be many chairs in the environment." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "No extra problems." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper proposes a relatively novel navigation method: using novel view synthesis as a form of \"imagination\" for indoor navigation.\n2. Due to the inclusion of a VLM and diffusion model, this work achieves a higher success rate in open-ended tasks.\n3. The writing in this paper is clear, the illustrative figures are accurate, and the explanations of the proposed framework are well-articulated." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a framework called ImagineNav, designed to enable effective visual navigation for robots in a mapless, open-vocabulary setting. ImagineNav leverages Vision-Language Models (VLMs) with on-board RGB/RGB-D camera inputs, without complex mapping and localization procedures. Instead of traditional planning approaches, it translates the navigation task into a series of \"best-view\" image selection problems, where the VLM selects optimal viewpoints based on imagined future observations. 
This approach views navigation as a simplified selection problem and showcases the potential of imagination-driven guidance for robotic systems." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There are some concerns about the imagination module. The ImagineNav framework relies on diffusion models for novel view synthesis. However, these could make some mistakes, e.g., generate non-existing objects in the scene. This could lead to incorrect navigation decisions by the VLM and reduce the overall success rate.\n2. A small concern about this approach is its performance in multi-room or occluded scenarios. The use of human habits without interactive perception could cause the robot to become trapped in local optima, preventing it from locating the target.\n3. The framework seems to rely only on immediate views for navigation decisions, without fully utilizing historical information from the navigation process. This lack of memory may limit the robot's ability to explore effectively and plan global paths over long distances or in intricate environments.\n\nIn summary, the method is novel (at least to me; if other reviewers point to related work, I will defer to them). But the quality of the generated novel views (especially the emergence of non-existing objects) is a concern. If the authors can explain more about this, I would increase the score." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. While the provided failure case intuitively demonstrates a failure mode of the method, it would be valuable to include an approximate distribution of failure modes, such as how many failures are due to inaccurate imagined novel views." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This work introduces mapless navigation through the prediction of future 3D waypoints, generating possible image observations at these waypoints, and selecting the most promising next waypoint with VLM. The method is well-motivated and thoroughly described, making it feasible for the community to replicate the results.\n2. The paper includes an extensive experimental evaluation of the HM3D and HSSD benchmarks, with the proposed method achieving notable improvements over baselines on the challenging HM3D and HSSD benchmarks.\n3. A comprehensive ablation study is provided in Tables 2, 3, 4, and 5, highlighting the significance of different components within the proposed pipeline. Showing the effectiveness of using visual prompting and novel view thesis, as well as the waypoint prediction.\n5. Figure 5 also offers some failure cases to help readers understand the method’s limitations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work presents an innovative framework for mapless visual navigation that utilizes Vision-Language Models (VLMs) to streamline open-vocabulary navigation. 
Unlike conventional approaches that depend on mapping and text-based planning, this method relies solely on RGB/RGB-D inputs, redefining navigation as a task of selecting the best view image. Using the Where2Imagine module, it generates potential viewpoints based on human-like navigation tendencies, enabling the VLM to select the most suitable view to locate target objects. The NVS module then generates potential views of the next waypoint. This approach results in efficient paths without mapping and enhances success rates on standard benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper claims that object detection, segmentation, and mapping modules increase computational demand for robots. However, the proposed method introduces a computationally intensive novel view synthesis module to generate imagined observations. A comparison of computational load would strengthen this claim.\n\n2. The paper’s organization could be improved for clarity. The text references to Tables and Figures are sometimes distant from the actual tables or figures; repositioning these elements could improve the flow and clarity of the paper.\n\n3. Although GPT4o-mini is a robust VLM model, comparisons with recent open-source VLMs featuring 3D spatial understanding would enhance this work, such as:\n\n* Spatial VLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities\n\n* SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Why do you think that waypoints are the correct intermediate representation for navigation and object search?\nWhat is the point of having images from NVS as it limits performance and rather uses the embeddings to train a navigation policy?\nWhat do you think are the limitations of the VLM high-level planning?\nHow can you get the VLM to learn online from additional data?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is mostly clear and effective in conveying its points. There is an effective set of baselines indicating thorough experimental evidence but the addition of error bars would be helpful to determine significance. There are also effective sections discussing the failed and successful trajectories and clear ablations of the imagination module and the high-level planning module. The originality of having NVS methods apply to scene-level settings is interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces ImagineNav, a novel framework that leverages VLMs for open-vocabulary object navigation without relying on explicit mapping or localization. The framework acts on a discrete action space spanning different views, then predicts candidate relative poses from the current observation, then uses NVS to generate images of those poses, and lastly uses a VLM for planning. The problem is posed as a best-view image selection problem for the VLM. Empirical experiments on challenging open-vocabulary object navigation benchmarks demonstrate that ImagineNav outperforms existing methods." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I think the main novelty within the method is the where2imagine module because prior work has used VLMs for high-level planning and NVS methods. I think the underlying claim is that waypoint as an intermediate representation (to predict poses and images with NVS) is a better reasoning intermediate than other intermediate representations and better than learning an end-to-end model. I think further investigating this claim could further add novelty to the paper as it would tackle a more fundamental question of are modular systems better than end-to-end models for object search and navigation. I would also try to rewrite the conclusion and portions of the introduction for clarity (paragraph 2). Lastly, the discussion of successful and failed trajectories are useful but I would like to see how to address those failed trajectories within the framework." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a mapless visual navigation system by proposing a imagination-based visual prompting for pre-trained large vision-language models." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024imaginenav,\ntitle={ImagineNav: Prompting Vision-Language Models as Embodied Navigator through Scene Imagination},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vQFw9ryKyK},\nnote={under review}\n}" }, "abstract": { "value": "Visual navigation is an essential skill for home-assistance robots, providing the object-searching ability to accomplish long-horizon daily tasks. Many recent approaches use Large Language Models (LLMs) for commonsense inference to improve exploration efficiency. However, the planning process of LLMs is limited within texts and it is difficult to represent the spatial occupancy and geometry layout only by texts. 
Both are important for making rational navigation decisions. In this work, we seek to unleash the spatial perception and planning ability of Vision-Language Models (VLMs), and explore whether a VLM, with only on-board camera captured RGB/RGB-D stream inputs, can efficiently finish visual navigation tasks in a mapless manner. We achieve this by developing the imagination-powered navigation framework ImagineNav, which imagines future observation images at valuable robot views and translates the complex navigation planning process into a rather simple best-view image selection problem for the VLM. To generate appropriate candidate robot views for imagination, we introduce the Where2Imagine module, which is distilled to align with human navigation habits. Finally, to reach the VLM-preferred views, an off-the-shelf point-goal navigation policy is utilized. Empirical experiments on challenging open-vocabulary object navigation benchmarks demonstrate the superiority of our proposed system." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Robotics", "Visual Navigation", "Vision-Language Model", "Scene Imagination" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review."
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/bc6d453324e66d2a04ef61e3ad92ee80e9ccfda5.pdf" }, "presentation": null, "primary_area": { "value": "applications to robotics, autonomy, planning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/e7c72759e8817f0898747612b716af670f726138.pdf" }, "title": { "value": "ImagineNav: Prompting Vision-Language Models as Embodied Navigator through Scene Imagination" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vQIVbfTMzf
Adapting to both finite-sample and asymptotic regimes
main
Withdraw
algorithmic adaptivity;empirical risk minimization;finite-sample regime;asymptotic regime.
learning theory
Qiang Sun
~Qiang_Sun2
1;3;3;6
4;3;5;3
2;2;1;4
1;2;1;3
1;2;2;4
3.25
3.75
2.25
1.75
2.25
-0.46442
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. Can you comment (at the very least numerically) on the case where the noise has Pareto or Cauchy distribution?\n2. Would you be able to offer some numerical evidence that picking $a=0.5$ is indeed a good choice?\n3. Could you comment on the differences between your approach and [1]?\n\n\n\n[1] A. 
Owen 2006, \"A robust hybrid of lasso and ridge regression\"" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper is a nice exercise in basic statistics and probability. The proofs seem to be correct." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors study the empirical risk minimisation problem on a variant of the Huber loss for estimating a scalar signal corrupted by noise with mean zero and finite variance. In practice, this consists in optimising jointly for the signal we wish to recover and a \"robustification\" parameter." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I believe this paper requires some fundamental changes in order to be published in any venue.\n\nThe title is really not informative and should be changed. What is being adapted \"to both finite sample and asymptotic regimes\"? That is not clear until the whole paper has been read.\n\nThe paper is nearly impossible to read, as it is inconsistent in stating its results and the precise model under consideration.\n\nThe introduction is confusing. On one hand it contains irrelevant details (like the definition of sub-Gaussian variables) and on the other hand it is really generic: it states that in practice you have noisy data, but this doesn't automatically imply a heavy-tailed distribution. As a matter of fact, the paper simply studies sub-Gaussian noise with mean zero and finite variance.\n\nThe specific loss that you are using is redefined a number of times, until finally being defined in 2.5. Everything before that in section 2 is superfluous. Additionally, you fix $a=0.5$ by looking at the population loss. This to me is a fundamentally unjustified choice.\n\nThe significance of the contribution is also not clear to me.
If I were to look at the sklearn implementation of Huber regression, I would also find a modification of the Huber loss where one optimises for the estimator and a \"robustness\" parameter [1], which is in essence the same idea as in this paper.\n\n\nMinor points:\nThere are a number of inaccuracies and inconsistencies. I list some of the most glaring ones.\n\n1. Line 34: what assumed Gaussian shape?\n2. Line 36: I am inferring you mean that the mean is not finite. If so, it would be better to write it.\n3. Line 59: the role of the parameter $\\tau$ is not clear\n4. Line 74: $d$ was not defined before.\n5. Line 140: when referring to sub-Gaussian performance, it would be good to point to the relevant equation before\n6. Line 141: adding a new parameter $v$ serves no purpose as $v = \\sigma$\n\n\n[1] A. Owen 2006, \"A robust hybrid of lasso and ridge regression\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Reading the paper, I was curious about two points that I think the authors could clarify:\n\n1. How similar is their loss to the Ronchetti & Huber one? Given that they bring this point up, I think they should be clearer about the connections;\n\n2.
Though I see that the loss function is tailored specifically to the robustification of the quadratic loss (it is quadratic near the origin, linear away from it), do the authors think that their self-tuning approach could be extended to more general loss functions?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "The paper tackles a fundamental and ubiquitous problem in statistical learning, i.e., robust mean estimation. It does so by providing an optimization method that is shown to be efficient from an estimation point of view (both in finite samples and asymptotically), as well as from a computational point of view (the joint optimization of the mean and robustification parameter is more efficient than, for instance, cross-validation methods).\n\nThe paper is also extremely clear in its presentation of the results, their motivation, and the intuition behind them." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents an empirical risk minimization (ERM) approach incorporating concomitant scaling, designed to eliminate the need for tuning a robustification parameter when handling heavy-tailed data. The method introduces a novel loss function that simultaneously optimizes both the mean and robustification parameters. By jointly optimizing these parameters, the robustification parameter adapts automatically to the unknown variance in the data, making the method self-tuning. The authors highlight improvements over previous approaches in both computational and estimation efficiency. The method circumvents the need for cross-validation or Lepski’s method for tuning, while the estimator's variance meets the Cramer-Rao lower bound, signifying optimal asymptotic efficiency. 
This approach is described as algorithmically adaptive to both finite-sample and large-sample contexts, demonstrating consistent performance across these regimes. Numerical experiments further support the efficacy of the proposed methodology." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Here are a few minor weaknesses:\n\n1. Subsection 3.2 sounds a bit off compared to the rest of the paper (the language is sloppy and the presentation not as clear); I think it could be rewritten\n\n2. There's a typo in the captions of Figures 3 and 5 (\"distributution\")" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "-\tCan the authors clarify the conditions in Theorem 3.1 regarding the inequalities involving $v_0, V_0, \\sigma$? Providing concrete examples with classical distributions and details on when these conditions are met under relevant parameter settings would be helpful." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The paper aims to tackle a fundamental problem raised by Catoni’s seminal result: attaining optimal finite-sample and asymptotic performance guarantees."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper discusses a novel mean estimator such that it has the optimal finite-sample and asymptotical guarantees in some sense. Their method requires no parameter to tune and is computational efficient. They also validate their result with some other classical estimators in some simulation studies." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-\t**Missing Comparisons**: It is concerning that the authors have repeatedly avoided comparing their work to [Lee and Valiant 2021], which addresses optimal sub-Gaussian mean estimation without knowledge of the variance. Previous reviewers have highlighted this omission, but it remains unaddressed. The estimator in [Lee and Valiant 2021] achieves optimal finite-sample guarantees and asymptotic variance (“as accurate as the sample mean for the Gaussian of matching variance”) without requiring prior knowledge of the exact variance. The omission is significant, as [Lee and Valiant 2021] presents a concentration result with an optimal dependence factor of $\\sigma (1 + o(1))$ before the $\\sqrt{2\\log(1/\\delta)}$, while the current submission’s concentration result (e.g., Theorem 3.2) shows a gap and depends on some numerical constant $C$.\n-\t**Self-tuning property without knowing the variance**: Contrary to the paper’s claims, the proposed estimator does not fully exhibit the self-tuning property for unknown variance. It requires upper and lower bounds on the variance (as outlined in Section 3), yet the authors provide no practical guidance on selecting these bounds in the numerical experiments. By comparison, the results in [Lee and Valiant 2021] do not rely on such assumptions.\n-\t**Paper Scope**: The current title is too wide and could be made more precise. 
Adding specific terms like “mean estimator” would make the title better reflect the paper scope and contributions.\n-\t**Inconsistent Notation**: Notable inconsistencies appear throughout the main body, where $\hat\mu (\hat v) $ and $\hat\mu(\tau) $ are used interchangeably. Additionally, it would improve clarity if $\sigma_u$ were defined before $\sigma_{x^2}$, as the notation $\sigma_{x^2}$ is too specific.\n\n\n[Lee and Valiant 2021] Optimal Sub-Gaussian Mean Estimation in R. FOCS 2021." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Maybe the $\log(n)$ factor could be removed in Lemma F.1, and hence from the final bound, by using the chaining argument (e.g., Section 3 in the book of Pollard 1990 on Empirical Processes)." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "As the authors pointed out, mean estimation using the pseudo-Huber loss with an adaptive robustification parameter has been studied before (Ronchetti & Huber, 2009), and understanding its theoretical properties would be a significant contribution to the field."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper considers empirical risk minimization (ERM) using the pseudo-Huber loss so that it jointly optimizes the robustification parameter of the loss function during the ERM process for the task of estimating the mean of an scalar random variable from i.i.d. samples in the general case when the unknown distribution of the scalar random variable is assumed only to have finite variance. \nThe papers goal is to provide a self-tuning estimator which can achieve a sharp upper bound on the generalization error.\n\nThis is a resubmission with small changes, the paper has been reviewed for TMLR under the title \"Do we need to estimate the variance in robust mean estimation?\": https://openreview.net/forum?id=CIv8NfvxsX. Unfortunately, the paper does not address any of the main concerns which were raised by the rejection for its previous version." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I agree to the main concerns raised for the rejection of the previous version of this paper: https://openreview.net/forum?id=CIv8NfvxsX. \n\nThe generalization bound of Theorem 3.2 is far from optimal, and there is an other algorithm already out there which reaches the optimal performance and not even mentioned in this paper by Lee and Valiant (FOCS, 2021): https://doi.org/10.1109/FOCS52979.2021.00071. I believe this result is too relevant to be ignored here.\n\nThe algorithm of the paper is not completely self-tuning, it has two tuning parameters $v_0$ and $V_0$ which need to bracket the unknown variance. This issue should be discussed because it challenges the adaptivity of the presented algorithm.\n\nThe bound of Theorem 3.2 is scaled by an extra $\\sqrt{\\log(n)}$ factor compared to the optimal performance. 
Worse, the \"constant\" C is scaled by the standard deviation $\sigma$ as mentioned in line 1480 and the derivation above, and it seems to me that it can even grow arbitrarily large. So it is not clear what the real dependence of the bound of Theorem 3.2 on $\sigma$ is." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "An estimator that performs well in both finite-sample and large-sample regimes." }, "_bibtex": { "value": "@misc{\nsun2024adapting,\ntitle={Adapting to both finite-sample and asymptotic regimes},\nauthor={Qiang Sun},\nyear={2024},\nurl={https://openreview.net/forum?id=vQIVbfTMzf}\n}" }, "abstract": { "value": "This paper introduces an empirical risk minimization based approach with concomitant scaling, which eliminates the need for tuning a robustification parameter in the presence of heavy-tailed data. This method leverages a new loss function that concurrently optimizes both the mean and robustification parameters. Through this dual-parameter optimization, the robustification parameter automatically adjusts to the unknown data variance, rendering the method self-tuning. Our approach surpasses previous models in both computational and asymptotic efficiency. Notably, it avoids the reliance on cross-validation or Lepski's method for tuning the robustification parameter, and the variance of our estimator attains the Cram\'{e}r-Rao lower bound, demonstrating optimal efficiency. In essence, our approach demonstrates optimal performance across both finite-sample and large-sample scenarios, a feature we describe as \\textit{algorithmic adaptivity to both asymptotic and finite-sample regimes}. Numerical studies lend strong support to our methodology." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
}, "authorids": { "value": [ "~Qiang_Sun2" ] }, "authors": { "value": [ "Qiang Sun" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "algorithmic adaptivity", "empirical risk minimization", "finite-sample regime", "asymptotic regime." ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": { "value": "sun|adapting_to_both_finitesample_and_asymptotic_regimes" }, "pdf": { "value": "/pdf/f27feb4dfa0dc35807037b83a1c02f17c150de36.pdf" }, "presentation": null, "primary_area": { "value": "learning theory" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/5a33bb1dd51d66caac60c0f6e3aaf7f288ebef57.pdf" }, "title": { "value": "Adapting to both finite-sample and asymptotic regimes" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vQhn4wrQ6j
Layer Swapping for Zero-Shot Cross-Lingual Transfer in Large Language Models
main
Active
model souping;model merging;cross-lingual transfer;multilingual;math;mathematical reasoning;LLM;SFT
transfer learning, meta learning, and lifelong learning
5;6;8
4;4;4
4;3;4
2;3;3
3;3;4
6.333333
4
3.666667
2.666667
3.333333
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. You write that combining layers works because models are fine-tuned for just a little bit: why not tune for longer?\n2. What would the intuitive explanation be behind the findings of your preliminary analysis on which layers are changed by task-tuning and language-tuning, other examples of similar effects in related literature?\n3. If you were to tune for longer, would that affect the results of the preliminary analysis and change more layers, or different layers?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "* Interesting idea and its evaluation on 1 model and 4 languages, with additional experiments\n* Although the setup raises some questions (limited evaluation, why not freeze layers and avoid having to soup the transition layers, etc.), the expanded evaluation on Swahili and the limitations section address most of these\n* excellent writing, justification and presentation" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new approach to perform zero-shot cross-lingual transfer for solving tasks in a new language. The requirement is to have data for the task in a more resourced language (e.g. 
English) and non-task-specific data for other languages (4 languages in this paper). The idea is to fine-tune the model (Llama 3.1, 8B in the paper) separately to one new language and another copy to the task in English, then compose a new model of layers of these two models. A study is included of how much the models change during fine-tuning, which concludes that math tasks cause changes in the fine-tuned model closer to the middle layers, while tuning on a new language causes changes in the first and last layers. Based on that the composed model takes task-layers from the math-tuned model's middle, and language-layers from the language-tuned model's start and end. Transfer between the layers of two independently tuned models is done by \"souping\" intermediate layers, i.e. averaging the weights of layers of both models. Gains up to 10% are shown, in comparison to the expert models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* limited evaluation: only 1 model and one set of tasks (math)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How does a re-composed model in one language affect model performance in typologically similar languages? In my opinion, an analysis of this kind would highly benefit the work. \n\n2. Would a 2-stage SFT work better than Joint SFT on language and task?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. The proposed methodology is highly practical in scenarios where one might have publicly available task-specific data in a high-resource language and generic instruction data in the low-resource language. The model parameter adjustments being fully post-hoc eliminate any additional computational overhead apart from the initial fine-tuning required to create task and language experts.\n\n2. *layer swapping* with the best configuration consistently outperforms the individual SFT experts, the base LLM Llama 3.1 and the general model souping approach in three (Swahili, Telugu, Bengali) out of the four languages under this study. \n\n3. The paper is easy to follow. The authors also take the effort to acknowledge the possible limitations to the work, encouraging future exploration." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a variant of model merging addressing the challenge of adapting LLMs for tasks in low-resource languages. The methodology involves *layer swapping* (parameters being swapped) between a task expert and a language expert, both following the same underlying architecture. The resulting re-composed LLM is said to outperform both the individual experts without the requirement of a task-specific fine-tuning in the low-resource language. The experiments involve evaluation of the proposed methodology on MGSM (a multilingual benchmark of grade-school math problems) on 4 resource-scarce languages: Japanese, Telugu, Swahili and Bengali." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The methodology is evaluated only on Llama 3.1, using MGSM benchmark for 4 selective languages. 
In my opinion, evaluation of the method on a single model, single benchmark and limited languages makes the conclusion less generalizable. While the languages used in the study are diverse, incorporating more datasets and models (in terms of different architectures or pre-training) can strengthen the conclusion.\n\n2. The assumption of availability of generic instruction data for low-resource languages might not hold for all languages. Task-specific data and generic instruction data in a high-resource language are generally more accessible. An experiment where the language expert is fine-tuned using translated instructions would increase the practicality of the work." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Can the authors clarify if they have tried layer swapping on other tasks, such as translation, question-answering, or code generation? Evidence of generalizability beyond math would make the approach significantly stronger.\n2. Layer swapping is positioned as an alternative to methods like model souping, but a comparison to more recent modular fine-tuning techniques, such as adapters (Pfeiffer et al., 2020) or LoRA (Hu et al., 2022), could help contextualize its relative strengths. Can the authors either conduct these comparisons (e.g. computational efficiency, performance) or discuss the anticipated performance differences? \n3.
Given that this study uses an 8B (32-layer) LLM, how would the authors anticipate the method scaling to models with more layers or parameters? Could they provide guidance on applying layer swapping to larger models, especially in terms of choosing the number of layers to swap? Would the authors still anticipate the same performance gain with layer swapping on larger models? \n4. For Japanese, layer swapping results in lower average performance compared to the individual math experts. The authors mentioned that the Japanese experts were the weakest, as performance across BELEBELE, FLORES, MBPP, and MMLU was minimal. Could the authors share the results on these benchmarks (before and after SFT) to help better understand the case?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper introduces an efficient, innovative layer-swapping method for zero-shot cross-lingual transfer in LLMs, addressing the lack of task-specific data in low-resource languages with simplicity and strong empirical results.\n\n- This technique is particularly notable for its straightforward implementation, allowing effective merging of task and language expertise without complex adjustments, making it a practical alternative to standard methods like model souping. \n\n- Promising experimental gains on math reasoning benchmarks across multiple low-resource languages validate the method's effectiveness, showing that layer swapping successfully enhances cross-lingual transfer without in-language task data." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a novel layer-swapping methodology for zero-shot cross-lingual transfer in large language models (LLMs) aimed at improving task performance in non-English languages, particularly for mathematical reasoning.
The approach tackles the lack of task-specific data in low-resource languages by fine-tuning two separate \"experts\" of the same base model: one trained on English mathematical data and another trained on general instruction data in the target language. The proposed method then selectively replaces the top and bottom transformer layers of the math expert with those from the language expert, buffered by transition zones between these regions. This configuration shows promising performance gains on the MGSM math benchmark across four languages—Swahili, Telugu, Bengali, and Japanese—without any additional in-language math data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The method is tested only on math reasoning, leaving it unclear if layer swapping generalizes to other tasks. Additional evaluations on tasks like question-answering or translation would strengthen the claims of broad applicability.\n- While the paper mentions different layer-swapping configurations, it lacks in-depth analysis on which configurations work best and why. A more detailed study of these choices would help to better understand the method make it more robust. For example, provide ablation studies on the number of swapped layers or transition zone sizes, or to analyze how performance changes as these parameters are varied.\n- Comparisons to recent modular fine-tuning techniques, such as adapters or LoRA, are missing. Including these would clarify how layer swapping performs relative to other efficient, cross-lingual methods." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We transfer math skills to non-English languages simply by swapping in a few layers from a model fine-tuned on those languages into a model fine-tuned on math." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024layer,\ntitle={Layer Swapping for Zero-Shot Cross-Lingual Transfer in Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vQhn4wrQ6j},\nnote={under review}\n}" }, "abstract": { "value": "Model merging, such as model souping, is the practice of combining different models with the same architecture together without further training. In this work, we present a model merging methodology that addresses the difficulty of fine-tuning Large Language Models (LLMs) for target tasks in non-English languages, where task-specific data is often unavailable. We focus on mathematical reasoning and without in-language math data, facilitate cross-lingual transfer by composing language and math capabilities. Starting from the same pretrained model, we fine-tune separate \"experts\" on math instruction data in English and on generic instruction data in the target language. We then replace the top and bottom transformer layers of the math expert directly with layers from the language expert, which consequently enhances math performance in the target language. The resulting merged models outperform the individual experts and other merging methods on the math benchmark, MGSM, by 10% across four major languages where math instruction data is scarce. In addition, this layer swapping is simple, inexpensive, and intuitive, as it is based on an interpretative analysis of the most important parameter changes during the fine-tuning of each expert. The ability to successfully re-compose LLMs for cross-lingual transfer in this manner opens up future possibilities to combine model expertise, create modular solutions, and transfer reasoning capabilities across languages all post hoc." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "model souping", "model merging", "cross-lingual transfer", "multilingual", "math", "mathematical reasoning", "LLM", "SFT" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/f6a80efaaff839fc0cb9ca62d0904f54eaff92b4.pdf" }, "presentation": null, "primary_area": { "value": "transfer learning, meta learning, and lifelong learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Layer Swapping for Zero-Shot Cross-Lingual Transfer in Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vQxqcVGrhR
DisEnvisioner: Disentangled and Enriched Visual Prompt for Customized Image Generation
main
Active
Visual Disentanglement and Enrichment;Zero-shot Customization;Text-to-Image Generation
generative models
3;5;6;6
4;4;4;3
2;3;3;3
1;3;3;3
2;2;3;3
5
3.75
2.75
2.5
2.5
-0.471405
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- During the training of DisVisioner, how is disentanglement achieved? Given that the diffusion model’s objective imposes a loss on both the object and the background, it’s unclear how only subject information is learned for the subject token. Could the authors provide visualization examples or experiments of the learned disentangled tokens?\n- What types of augmentations are used during DisVisioner’s training?\n- Can this method be extended to use multiple images as conditioning to enhance reference image details?\n- Since DisVisioner’s training requires the class name from ImageNet for prior initialization, does this limit its generalization to classes outside ImageNet?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is well-written and easy to follow, with figures that clearly illustrate the concepts.\n- The visual results are impressive, with the original subject details well-preserved.\n- The topic is engaging and holds potential for practical application." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces DisEnvisioner, a method aimed at enhancing the customization capabilities of image diffusion models. 
The approach involves training two models: DisVisioner, which disentangles subject-specific features from subject-unrelated features, and EnVisioner, which enriches these disentangled features. Quantitative and qualitative experiments demonstrate that DisEnvisioner effectively extracts subject information while discarding unrelated details, significantly enhancing customization capabilities." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Missing Ablations:** The importance of CLIP prior initialization and augmentation in training DisVisioner has not been fully investigated.\n- **Unclear Mechanism for Ensuring Disentanglement:** While the concept of disentangling subject-specific and unrelated features is intriguing, the method section does not clearly explain how this disentanglement is achieved. It remains unclear what guarantees that only subject-related information is retained." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "Yes, Discrimination / bias / fairness concerns" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Could you please explain how subject features and irrelevant features are learned, and why they can be disentangled by transformer blocks without additional supervision?\n\n2. Please improve the quantitative comparison experiment regarding the EFFECT OF DISVISIONER (Ablation study).\n\n3. Please improve the quantitative comparison experiment regarding the
EFFECT OF ENVISIONER (Ablation study).\n\n4. Please improve the quantitative comparison experiment on the ablation of token numbers in DisVisioner, and supplement the comparison experiments for $n_s=1, n_i=0$ and $n_s=0, n_i=1$." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper introduces DisEnvisioner, a novel framework that addresses significant challenges in customized image generation by focusing on feature disentanglement and enrichment. The originality lies in the identification of the crucial role that subject-essential attributes play in the customization process, which is a new viewpoint that goes beyond existing methods reliant on either tuning or tuning-free approaches.\n\nThe clarity of the paper is commendable. The authors present their arguments logically, making it easy for readers to follow their reasoning and understand the significance of their contributions. Key terminologies and concepts like \"subject-essential attributes\" and \"feature disentanglement\" are well-defined." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces DisEnvisioner, a novel approach aimed at enhancing image customization from visual prompts while addressing the limitations of existing methods in interpreting subject-essential attributes. Empirical evidence demonstrates the superiority of DisEnvisioner over existing methods in various aspects, including instruction response (editability), ID consistency, inference speed, and overall image quality." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The advancement of the methodology has not been sufficiently demonstrated; some indicators (e.g.
image-alignment for ID-consistency (C-I, D-I) as shown in Table 1) are inferior to other similar methods (e.g., IP-Adapter, BLIP-Diffusion, etc.); you could provide more evidence to support your claim in the abstract: \"Experiments demonstrate the superiority of our approach over existing methods in instruction response (editability), ID consistency,\"\n\n2. The EFFECT OF DISVISIONER has not been adequately explained; there is a lack of comparative experiments with and without DISVISIONER to validate its effectiveness. You could compare results with and without the DisVisioner component while keeping other parts of the system constant.\n\n3. The EFFECT OF ENVISIONER has not been sufficiently substantiated; the paper's explanation regarding the EFFECT OF ENVISIONER only presents a few cases (as shown in Fig. 8), which lacks persuasiveness. You could provide a larger-scale comparison by human evaluation or image-alignment for ID-consistency (C-I, D-I) to measure the improvement in ID consistency or image quality.\n\n4. The ablation on token numbers in DisVisioner lacks QUANTITATIVE results. You might measure performance across various metrics (like those in Table 1) for different token number configurations." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Could the authors provide more theoretical insights into the disentanglement mechanism and why it works well for customized image generation? The current explanation is mostly empirical.\n\nHow does DisEnvisioner compare to other recent Transformer-based text-to-image generation methods? Adding these comparisons could help readers better appreciate the novelty of the approach." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Originality: The introduction of DisVisioner and EnVisioner for disentangled visual prompt processing is a novel approach, effectively addressing the challenge of maintaining subject identity while generating customized images.\n\nQuality: The experiments are well-conducted, demonstrating the effectiveness of the proposed method in preserving ID consistency and reducing subject-irrelevant features.\n\nSignificance: The approach enables high-quality, customized image generation in a tuning-free manner, which is practical and efficient for real-world applications.\n\nClarity: The overall structure of the paper is well-organized, and the experimental results effectively showcase the advantages of the proposed method over existing baselines." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper, titled \"DisEnvisioner: Disentangled and Enriched Visual Prompt for Customized Image Generation,\" presents a novel approach to generating customized images from visual prompts with additional textual descriptions. The main contributions are:\n\n1. 
Proposing DisEnvisioner, a tuning-free framework that effectively disentangles subject-essential attributes from irrelevant features, improving the overall quality of customized image generation.\n\n2. Introducing a two-stage architecture with DisVisioner to separate subject-essential and irrelevant features, and EnVisioner to enhance subject consistency.\n\n3. Demonstrating through experiments that the proposed approach outperforms existing methods in terms of editability, identity (ID) consistency, and inference speed while requiring minimal fine-tuning or reference images." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Limited Theoretical Justification: The paper lacks sufficient theoretical grounding for the proposed disentangling mechanism. For example, the method for separating subject-essential and irrelevant features relies primarily on empirical observations without rigorous theoretical backing.\n\nComparison with Existing Methods: The paper does not provide extensive comparisons with some of the most recent advancements in text-to-image generation (e.g., Transformer-based approaches). Including a wider range of baselines would help position this work within the current state of the field." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Question: Could the authors elaborate on the specific factors that led DisEnvisioner to underperform IP-Adapter in terms of instruction accuracy (C-I) and ID consistency (D-I)?\n\nSuggestion: Analyzing and explaining these performance gaps would provide useful insights into DisEnvisioner's design choices. This could also help clarify if the approach is inherently less suited to certain customization aspects or if there are areas where further tuning could close the performance gap.\n\nQuestion: Given the limited dataset, how confident are the authors in DisEnvisioner’s generalizability to a broader range of tasks and more diverse visual prompts?\n\nSuggestion: Evaluating DisEnvisioner on additional datasets with varied visual content and textual instructions could better assess its robustness. Additional results on a larger dataset could substantiate the claims of effectiveness and versatility.\n\nQuestion: How does the feature enrichment step improve ID consistency, and could the authors provide more details on its implementation?\n\nSuggestion: Adding a more in-depth explanation of the enrichment process and its effect on ID consistency would be beneficial. Additionally, showing comparisons before and after this step could better demonstrate its impact on maintaining subject integrity." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "DisEnvisioner demonstrates several strengths over competing methods in the areas of customization quality, editability, identity consistency, and efficiency, as shown in the provided results:\n\nCustomization Quality (C-T): With a score of 0.315, DisEnvisioner shows the highest performance in customization (C-T), indicating that it excels at incorporating subject-specific details while staying true to the given instructions.\n\nID Consistency (D-I): DisEnvisioner scores 0.802 for ID consistency (D-I), surpassing most other methods except for IP-Adapter. This score reflects its ability to maintain subject identity throughout image generation, reducing unwanted attribute drift.\n\nInstruction Response (C-I): DisEnvisioner scores 0.828, slightly lower than some methods like IP-Adapter and DreamBooth but still within a strong range. This indicates good responsiveness to textual instructions while retaining visual prompt characteristics.\n\nInference Speed (IV): DisEnvisioner achieves a lower inference value of 0.026, indicating faster inference compared to methods like DreamBooth and DisenBooth, making it efficient for real-time or rapid customization needs.\n\nRuntime (T): DisEnvisioner has a runtime of 1.96 seconds, placing it on par with IP-Adapter, which is one of the fastest. This efficient runtime makes it more practical for applications needing quick processing.\n\nMean Rank (mRank): With an mRank of 2.0, DisEnvisioner achieves the highest overall ranking among the methods tested, suggesting that it consistently performs well across different evaluation metrics."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces DisEnvisioner, a novel tuning-free method for generating customized images from a single visual prompt enriched with additional textual instructions. Existing image generation methods, both tuning-based and tuning-free, often struggle to isolate the essential attributes of a subject in the visual prompt, leading to unwanted, subject-irrelevant features that compromise customization quality, editability, and identity (ID) preservation. DisEnvisioner addresses this by disentangling the subject-essential features from irrelevant details, separating them into distinct visual tokens. This separation improves customization precision and allows for enhanced ID consistency. By further refining these disentangled features, DisEnvisioner creates a more granular representation of the subject, which bolsters the model's ability to maintain ID consistency across generations. Experimental results demonstrate that DisEnvisioner outperforms current methods in editability, ID consistency, inference speed, and overall image quality, establishing its effectiveness and efficiency for personalized image generation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "DisEnvisioner, while showcasing significant advancements, has some limitations that temper its overall impact:\n\nLack of State-of-the-Art Results Across All Metrics: DisEnvisioner does not outperform all baseline models in every metric. For instance, its performance in instruction response (C-I) and identity consistency (D-I) does not reach the top scores achieved by IP-Adapter and DreamBooth. This mixed performance limits DisEnvisioner’s claim to outright superiority across all customization aspects.\n\nLimited Test Dataset: The dataset used for evaluating DisEnvisioner is relatively constrained, potentially affecting the generalizability of the results. 
A more extensive and varied dataset would provide a clearer picture of the model's adaptability and robustness across diverse tasks and use cases.\n\nRoom for Improved Analysis on Underperformance Against IP-Adapter: Although DisEnvisioner demonstrates strengths in disentangling features, it would benefit from further analysis on why it lags behind IP-Adapter in specific tasks like instruction accuracy and ID consistency. Understanding these discrepancies could inform targeted improvements to make DisEnvisioner more competitive in these areas.\n\nThese aspects suggest that while DisEnvisioner makes notable contributions, there is room for further development to enhance its consistency and broaden its applicability across a wider range of tasks." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "DisEnvisioner effectively identifies and enhances the subject-essential feature while filtering out other irrelevant information, enabling exceptional image customization in a tuning-free manner and using only a single image." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024disenvisioner,\ntitle={DisEnvisioner: Disentangled and Enriched Visual Prompt for Customized Image Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vQxqcVGrhR},\nnote={under review}\n}" }, "abstract": { "value": "In the realm of image generation, creating customized images from visual prompt with additional textual instruction emerges as a promising endeavor. However, existing methods, both tuning-based and tuning-free, struggle with interpreting the subject-essential attributes from the visual prompt. This leads to subject-irrelevant attributes infiltrating the generation process, ultimately compromising the personalization quality in both editability and ID preservation. 
In this paper, we present $\\textbf{DisEnvisioner}$, a novel approach for effectively extracting and enriching the subject-essential features while filtering out subject-irrelevant information, enabling exceptional customization performance, in a $\\textbf{tuning-free}$ manner and using only $\\textbf{a single image}$. Specifically, the features of the subject and other irrelevant components are effectively separated into distinctive visual tokens, enabling a much more accurate customization. Aiming to further improve the ID consistency, we enrich the disentangled features, sculpting them into a more granular representation. Experiments demonstrate the superiority of our approach over existing methods in instruction response (editability), ID consistency, inference speed, and the overall image quality, highlighting the effectiveness and efficiency of DisEnvisioner." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Visual Disentanglement and Enrichment", "Zero-shot Customization", "Text-to-Image Generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review."
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/7cc8af99c7c961519ca74603e6a174ec141d53fb.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/38c6ab83f62e8cb5017150a237a2861595666ec2.pdf" }, "title": { "value": "DisEnvisioner: Disentangled and Enriched Visual Prompt for Customized Image Generation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vR2MWaZ3MG
Matchmaker: Schema Matching with self-improving compositional LLM programs
main
Active
schema matching;data-centric AI;Large Language Models;healthcare
other topics in machine learning (i.e., none of the above)
3;3;5;8
4;4;4;4
2;3;3;4
2;2;2;3
1;4;4;4
4.75
4
3
2.25
3.25
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Why haven't you submitted this to a database conference? It seems to me that the reception and impact there would be much higher." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Very well written \n- Addresses an important problem that has been recently identified as a target for the ML community\n- Provides a thorough experimental section, including a (very nice) ablation study to understand the impact of different strategies for candidate generation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper deals with schema matching, an old but very important problem in databases. The idea is that, given one starting (relational) schema and one target schema, to be able to match which attributes in the starting schema correspond to attributes in the target schema. The proposal in this paper is Matchmaker. This system uses a mix of retrieval using multi-vector representation and LLM-based reasoning to produce candidates for the matching, and then applies a final LLM-driven step to refine these candidates. Notably, the program also is built so that it can optimize the last step by providing examples from the databases. 
Altogether, the system shows quite an advantage over previous proposals." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While the paper is well written and scientifically sound, the algorithm itself (Matchmaker) is not groundbreaking. Matchmaker essentially relies on building appropriate chain-of-thought prompts, as well as applying semantic similarity techniques. As such, I see this mostly as a paper describing a particular, LLM-based proposal to address this problem. \n- There seems to be a lack of LLM-driven alternatives to compare with, which is both good for the paper (because authors are the first to apply them in this context), but it also raises the question of whether any other similar approach would produce similar results. \n- The problem itself (schema matching, or more generally data harmonization/interoperability) is not a core topic of ICLR. I would imagine this paper would be more suited to a database conference." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Q1: How does your approach work in terms of precision when compared with the baseline method?\n\nQ2: How confident are you when ranking the LLM-generated candidates with LLM-based scores? 
How much does this ranking contribute to the results?\n\nQ3: Could you provide the details of how the vector retrieval works in Sec 4.1?\nIf I understand well, you retrieve the top-k matching target schema attributes based on MaxSim between query embeddings and target schema embeddings. However, the granularity of the query embeddings and the target schema embeddings is different: the query embedding is an attribute-level embedding while the target schema embedding is a table-level embedding." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "S1: The idea of leveraging multi-stage LLMs for schema matching is novel.\n\nS2: The authors do a great job of demonstrating the challenges of schema matching in real-world scenarios that they are trying to address and the methodology they presented. \n\nS3: Several experiments on MIMIC-OMOP and Synthea-OMOP datasets are conducted to empirically investigate and demonstrate the performance of the presented method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a zero-shot schema matching approach by leveraging multi-stage calls of LLMs to generate, refine, and score the matchings. Specifically, it introduces synthetic examples to guide the reasoning of LLMs for improving and optimizing the results of schema matching. The experiments on medical schema matching benchmarks demonstrate that the proposed approach outperforms the selected baseline methods on accuracy. \n\nThe paper does a great job of demonstrating the problem that they are solving and the methodology they presented. 
\nThe main contribution is decomposing the schema matching task into multi-stage sub-tasks that are completed by multiple calls of LLMs, with retrieval from contextual reasoning and prompt optimization based on in-context examples. However, the contribution of this work is limited, as they only introduce multi-stage schema matching by extending the calling of LLMs from single to multiple." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1: The contribution of this work is limited, as they only introduce multi-stage schema matching by extending the calling of LLMs from single to multiple. \n\nW2: Accuracy@k is the only metric reported in the experimental results; results on precision are missing. \n\nW3: The prompts are provided in the appendix, but the source code is not provided for reproducibility.\n\nW4: GPT-4 (0613) is the only backbone model; results of using Llama as the backbone are not reported." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "General comments\n\nThe schema matching problem is fundamental to generating large,\nintegrated, and interoperable datasets. In this sense, this paper\nmakes a contribution by proposing a new method for this problem that\ntakes advantage of certain capabilities of LLMs. 
Besides, the\nexperimental evaluation provides evidence that the proposed method\noutperforms other schema matching approaches in terms of\naccuracy. However, I have two serious concerns about the contribution\nof this paper:\n\n- The schema matching problem is not properly formalized. The authors\n provide a formal definition of this problem where they indicate that\n mapping function f \"correctly\" assigns each attribute of the source\n schema to an attribute of the target schema. How is the\n \"correctness\" of f defined? What are the properties that a correct\n mapping function f should satisfy? Does the algorithm proposed in\n this paper compute a function f that satisfies such properties? None\n of these questions is answered in the paper.\n\n- The authors disregard the large body of work on schema matching that\n has been developed in the database area. I am not going to mention\n here the relevant literature on schema matching, which is an\n established area within databases, but I would like to note that one\n can already find surveys on this topic from more than 20 years ago:\n\n Erhard Rahm, Philip A. Bernstein: A survey of approaches to\n automatic schema matching. VLDB J. 10(4): 334-350 (2001)\n\n Notice that the problem considered in this paper is schema matching\n for relational databases, which is exactly the scenario discussed in\n this survey.\n\n A first obvious step in considering the work on schema matching done\n in databases is to compare the method proposed in this paper with\n methods from the database field. But this is just the tip of the\n iceberg and probably not the most fruitful way. The method proposed\n in this paper can be improved by considering the more classical work\n in schema matching. 
For example, the authors could leverage this\n work in the generation of candidates, and they can address issues\n with the formalization of the schema matching problem by reusing its\n formalization from the database field.\n\n\nSpecific questions\n\nAll these questions refer to the definition of the schema matching\nproblem:\n\n- How is the notion of correctness of a mapping function f defined?\n\n- What are the properties that a correct mapping function f should\n satisfy?\n\n- Does the algorithm proposed in this paper compute a function f that\n satisfies such properties?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "S1) The schema matching problem is fundamental to generating large,\n integrated, and interoperable datasets. In this sense, this paper\n addresses a relevant and interesting problem.\n\nS2) The experimental evaluation shows that the proposed approach\n outperforms other schema matching approaches in terms of accuracy." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors considered the schema matching problem for\ntabular structured data. In particular, a schema is defined as a set\nof tables {T_1, ..., T_m}, where each table T_i has a set of\nattributes {A_{i,1}, ..., A_{i,n_i}}. Additionally, it is assumed that\neach table T_i is associated with some metadata describing its purpose\nand content, and each attribute A_{i,j} is associated with some\nmetadata describing its type and relational context. 
Then, given a\nsource schema S including a set of attributes A_S and a target schema\nT including a set of attributes A_T, the goal of schema matching is to\nfind a partial function f : A_S -> A_T such that if f(A) = B for\nattributes A in A_S and B in A_T, then B is the corresponding target\nattribute to the source attribute A.\n\nThe algorithm proposed in the paper for schema matching works as\nfollows. Given a source attribute A, the algorithm uses embeddings of\nA and the target attributes to retrieve the top-k matching target\nattributes. The generated set of target attributes is called the\nsemantic retrieval candidates for A. Then the algorithm generates a\nset of reasoning-based candidates using a reasoning LLM. The union of\nsemantic retrieval candidates with reasoning-based candidates\nconstitutes the set of candidates for A. Then the algorithm uses a\nrefiner LLM to reduce the number of candidates, and it finally ranks\nthe resulting candidates and filters out the non-suitable ones. If the\nresulting set of target attributes is not empty, then the top-scored\nattribute can be considered as the corresponding target attribute for\nA." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1) The approach proposed in the paper essentially disregards the\n large body of work on schema matching that has been developed for\n decades in the database field.\n\nW2) The schema matching problem is not properly formalized." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. How does the performance of Matchmaker scale with very large schemas (>1000 attributes)?\n2. Could the approach be extended to handle many-to-many mappings between schemas?\n3. How sensitive is the performance to the quality of attribute descriptions in the schemas?\n4. What strategies could be employed to reduce the number of LLM calls while maintaining performance?\n5. How might the system handle schemas in languages other than English?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. Addresses a critical practical problem in data integration that has significant implications for ML development\n2. Novel technical approach combining retrieval and LLM reasoning in a compositional program\n3. Zero-shot learning capability through synthetic in-context examples, eliminating the need for labeled training data\n4. Comprehensive empirical evaluation against multiple baselines\n5. Practical considerations for deployment, including human-in-the-loop integration and uncertainty handling\n6. Strong quantitative results showing 15-20% improvement over baselines\n7. Well-documented implementation details and ablation studies" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Matchmaker, a novel approach to schema matching using a self-improving compositional language model program. Schema matching is crucial for creating interoperable ML-ready data by finding correspondences between attributes across different databases. 
Matchmaker operates through three main components: multi-vector document creation, candidate generation (using both semantic retrieval and LLM-based reasoning), and confidence scoring. A key innovation is its ability to self-improve without labeled data through synthetic in-context examples. The method significantly outperforms existing approaches on real-world healthcare schema matching benchmarks (MIMIC-OMOP and Synthea-OMOP)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Limited evaluation to two healthcare domain datasets, though the approach is claimed to be general\n2. The synthetic in-context example generation process could be explained more clearly\n3. The paper shows strong performance metrics but doesn't provide detailed analysis of where and why Matchmaker fails" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "schema matching across heterogenous data sources using compositional language model programs" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024matchmaker,\ntitle={Matchmaker: Schema Matching with self-improving compositional {LLM} programs},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vR2MWaZ3MG},\nnote={under review}\n}" }, "abstract": { "value": "Schema matching -- the task of finding matches between attributes across disparate data sources with different tables and hierarchies -- is critical for creating interoperable machine learning (ML)-ready data. Addressing this fundamental data-centric problem has wide implications, especially in domains like healthcare, finance and e-commerce --- but also has the potential to benefit ML models more generally, by increasing the data available for ML model training. However, schema matching is a challenging ML task due to structural/hierarchical and semantic heterogeneity between different schemas. 
Previous ML approaches to automate schema matching have either required significant labeled data for model training, which is often unrealistic, or suffer from poor zero-shot performance. To this end, we propose Matchmaker - a compositional language model program for schema matching, comprised of candidate generation, refinement and confidence scoring. Matchmaker also self-improves in a zero-shot manner without the need for labeled demonstrations via a novel optimization approach, which constructs synthetic in-context demonstrations to guide the language model's reasoning process. Empirically, we demonstrate on real-world medical schema matching benchmarks that Matchmaker outperforms previous ML-based approaches, highlighting its potential to accelerate data integration and interoperability of ML-ready data." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "schema matching", "data-centric AI", "Large Language Models", "healthcare" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e20dceeb29cc326e1dad0d3ea61fefbeb5c6feb3.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/89913888d9ce7fe7e0959dc6e2e76ee44de7c364.pdf" }, "title": { "value": "Matchmaker: Schema Matching with self-improving compositional LLM programs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vRvVVb0NAz
When is Task Vector Provably Effective for Model Editing? A Generalization Analysis of Nonlinear Transformers
main
Active
Task arithmetic;generalization;nonlinear Transformers;deep learning theory;machine unlearning
learning theory
5;6;6;8
3;2;2;3
3;3;3;4
3;3;3;4
2;3;3;4
6.25
2.5
3.25
3.25
3
0.229416
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I don't have questions." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is very well-written and easy to follow.\n- It provides a guideline for when and why task arithmetic works in multi-task learning, machine unlearning, and generalization to new tasks.\n- The discussion of low-rank approximations and magnitude-based pruning of task vectors supports the use of efficient approximation techniques in task arithmetic fine-tuning.\n- This is the first known theoretical generalization analysis of task vector arithmetic in nonlinear Transformer-based models, filling a notable gap in the literature.\n- The theoretical claims are validated through empirical experiments on the Phi-1.5 language model and Colored-MNIST image classification, adding practical credibility to the proposed framework." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores the theoretical aspects of task vector arithmetic as a model editing technique for multi-task learning, unlearning, and out-of-domain generalization. 
The authors provide a theoretical analysis to justify why and when task vector methods are effective in nonlinear Transformer models, especially for binary classification tasks. They prove that task addition facilitates multi-task learning for aligned or irrelevant tasks, while task negation can effectively unlearn contradictory or irrelevant tasks. Additionally, they offer generalization guarantees for out-of-domain tasks and theoretical justification for task vector approximations. These findings are empirically validated through various experiments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The theoretical analysis relies on a single-head, one-layer Transformer model, which may limit the applicability of the results to more complex multi-layer Transformer architectures.\n- While the empirical validation includes a large language model and a basic image classification task, the study could benefit from a broader set of tasks, including more complex or structured tasks beyond binary classification.\n- Although the theoretical framework outlines conditions for selecting arithmetic coefficients, more practical guidelines or analyses for tuning these coefficients in real-world applications would be beneficial.\n\ntypos:\n- line 288, \"fine-turning\" -> \"fine-tuning\"\n- line 388, \"are are\" -> \"are\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* Line 236-238, the conventional attention expression is $softmax(W_QXX^TW_K^T)W_VX$, why is it written as $W_VXsoftmax(X^TW_K^TW_QX)$ in Formula 4?\n* Line 236-238, what is the meaning of $X^n$?\n* Line 242, Why is $x_i$ used here, while $X$ is used in Formula 4?\n* Line 261, Since $\\mu_T$ and $v_j$ are orthogonal, what is the meaning of tokens corresponding to $\\mu_T$?\n* How to quantify the relevance of different language generation tasks? Are semantically similar and task-related equivalent?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* Very comprehensive mathematical analysis and theoretical proofs\n* Discussion on the task vector is extensive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work focuses on the task vector in the context of task arithmetic, demonstrating its role in learning and unlearning tasks through experiments. It’s an interesting topic that needs more exploration. The authors find that task correlation affects the performance of the task vector and conduct theoretical justification on the low-rank approximation of the task vector and its ability to adapt to out-of-domain tasks."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* There are some issues with the paper's writing (Formula 4 and Definition 2 in Section 3.2 are confusing).\n* In the language generation task, only a model with 1.5B parameters is used, and the experimental results do not fully meet expectations (there is also a noticeable performance loss on the so-called irrelevant task)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I found this paper highly engaging and believe it would attract even greater interest with an exploration of the theory across a broader set of tasks, particularly generative tasks. For example, could the proposed theoretical framework be extended to multiclass classification as an initial step? A discussion on how these insights might be applied to a wider range of tasks would substantially enhance the paper's appeal. I would be willing to increase my score if the authors could provide even preliminary ideas on these extensions."
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- This paper is the first to theoretically examine the generalization of task vectors, filling a significant gap in current research.\n- The writing is clear and easy to follow.\n- The theoretical contributions are well-supported by experiments, effectively bridging theory and empirical validation.\n- The theoretical insights align well with intuitive expectations regarding the effects of task vectors across aligned, irrelevant, and contradictory tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper provides a theoretical analysis of the effectiveness of task vector methods for model editing in transformers. The authors investigate the conditions under which task addition (for multi-task learning) and task negation (for unlearning) are effective, proving that task correlation plays a crucial role. They also establish conditions for successful out-of-domain generalization using task vectors. Experiments with both synthetic and real-world data validate the key concepts of the proposed theory." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Although insightful, the data model may be overly simplistic for capturing the complexities of real-world data. For example, even for simple yes/no questions, a negation word in a sentence may flip the relevance of certain words in the sentence, which cannot be captured by the proposed data model. I wonder if some theoretical aspects can be generalized independently of this data model.\n- The analysis is restricted to a one-layer transformer with limited nonlinearity, despite claims in the title and introduction regarding the challenges of analyzing nonlinearity in task vectors."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Definition 2 -- what is the dimension of $\\mu_{\\tau}$? is it the same as $v$?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- Very exciting theoretical result that includes very practical aspects (e.g., hyperparameters of the fine-tuning, how to set alpha for each task vector, etc)\n- Novel characterization of generalization conditions on 1 layer transformer, building up on previous work that uses NTK assumptions\n- Practical scenarios in terms of the relation between tasks (aligned vs irrelevant vs contradictory)\n- Nicely written and relatively accessible to non-theory people. I particularly like the remark after each theorem that explains what the theory is and what it implies\n- Nice setup on CMNIST that reflects the aligned vs. irrelevant vs. contradictory condition, followed by a nice non-toy experiment." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper produces the first theoretical result on the ability of task vectors to generalize to new tasks and ability to unlearn a task. The study is conducted under the lens of \"aligned\" vs \"irrelevant\" vs \"contradictory\" tasks.
The authors study the necessary fine-tuning hyperparameters and the task vector coefficients ($\\alpha$) that enable generalization and unlearning.\n\nAdditionally, the authors also verify the theory result with experiments on the toy CMNIST dataset and on a next-token prediction task." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "N/A -- good paper overall :)" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We provide the first theoretical characterization of the generalization guarantees of task vector methods on nonlinear Transformers." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024when,\ntitle={When is Task Vector Provably Effective for Model Editing? A Generalization Analysis of Nonlinear Transformers},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vRvVVb0NAz},\nnote={under review}\n}" }, "abstract": { "value": "weighted sum of task vectors, each of which is the weight update from the pre-trained model to fine-tuned models for certain tasks. This approach recently gained attention as a computationally efficient inference method for model editing, e.g., multi-task learning, forgetting, and out-of-domain generalization capabilities. However, the theoretical understanding of why task vectors can execute various conceptual operations remains limited, due to the high non-convexity of training Transformer-based models. To the best of our knowledge, this paper provides the first theoretical characterization of the generalization guarantees of task vector methods on nonlinear Transformers. We consider a conceptual learning setting, where each task is a binary classification problem based on a discriminative pattern.
We theoretically prove the effectiveness of task addition in simultaneously learning a set of irrelevant or aligned tasks, as well as the success of task negation in unlearning one task from irrelevant or contradictory tasks. Moreover, we prove the proper selection of linear coefficients for task arithmetic to achieve guaranteed generalization to out-of-domain tasks. All of our theoretical results hold for both dense-weight parameters and their low-rank approximations. Although established in a conceptual setting, our theoretical findings were validated on a practical machine unlearning task using the large language model Phi-1.5 (1.3B)." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Task arithmetic", "generalization", "nonlinear Transformers", "deep learning theory", "machine unlearning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/4676f567d52b245ecec66005bba08c30ae56a95e.pdf" }, "presentation": null, "primary_area": { "value": "learning theory" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. 
To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "When is Task Vector Provably Effective for Model Editing? A Generalization Analysis of Nonlinear Transformers" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
vSrBzCzg4G
Efficient Training of Sparse Autoencoders for Large Language Models via Layer Clustering
main
Active
Sparse Autoencoders (SAEs);Meta Learning;Mechanistic Interpretability;Large Language Models (LLMs)
interpretability and explainable AI
3;3;3;3;3
4;4;3;3;5
1;3;1;2;2
3;1;1;2;1
1;2;3;2;3
3
3.8
1.8
1.6
2.2
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Improving LLM interpretability is an important topic. The proposed method of speeding up SAEs is straightforward and easy to understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes to reduce the computational overhead of SAEs: instead of training a separate SAE for each layer, it groups the layers into several groups of adjacent layers and learns an SAE for each group." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Overall I feel that the results presented in this work are quite obvious and expected, and I do not see a large contribution to the community. \n1. The novelty of this work is limited. It seems to be an obvious choice for one to learn an SAE for each group of adjacent layers. \n2. Based on the experimental results (on a 12 layer 160M model), the speed up provided by the method is limited. The speed up also always comes with a drop in the quality of the model. Based on Figure 3 and Figure 4, the drop seems to be almost linear in k.
This is quite expected with any type of \"simple\" speed-up such as downsampling and grouping (as this paper suggests)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1) Please consider introducing dimensions of various vectors and matrices in Sections 2.1 and 2.2. Also, what is the relationship between $d$ and $dmodel$ (Line 112-113)? It appears that $d = dmodel$?\n\n2) Please formally define the term \"residual stream\".\n\n3) In Section 3.1, what is the justification/motivation for using *JumpReLU*? \n\n4) As for the hierarchical clustering strategy described in Lines 212 - 220, is it clear that one will only put consecutive layers in one group? Can non-consecutive layers be clustered into a single cluster? If yes, is this desirable?\n\n5) Figure 3 presents multiple metrics, CE loss, $R^2$, $L_2$, $L_1$. Out of these, which one is more important? Also, looking at some of the figures, the difference in the metric value for different $k$ is very small. What is the significance of this small difference?\n\n6) In Figure 7, Human interpretability scores appear to be *non-monotonic* with respect to $k$. Could authors comment on this?"
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1) The paper focuses on important and timely questions related to the interpretability of LLMs. \n2) The proposed method successfully improves the training efficiency of SAEs for LLMs by grouping similar layers. \n3) Empirical evaluation based on both reconstruction error and downstream performance showcases the utility of the proposed approach." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper builds on the recent line of work that relies on sparse autoencoders (SAEs) to address the interpretability of large language models (LLMs). In particular, SAEs aim to decompose LLM activations in a layer as a sparse combination of a large number of (interpretable) features. However, prior works require training one SAE per LLM layer (component), resulting in a large number of parameters and prohibitively high compute cost needed to obtain good quality SAEs to understand the inner workings of the LLM.\n\nThis paper leverages similarities among consecutive layers in an LLM to reduce the training cost for SAEs. The paper proposes to cluster LLM layers in $k$ groups and then train one SAE for each group of layers. Based on the reconstruction error of original representations; downstream performance on tasks focused on indirect object identification, greater than relationship, and subject-verb agreement; and human evaluations, the paper argues that the proposed approach results in good quality SAEs for Pythia 160M LLM." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) The main weakness of the paper is its limited technical novelty and contributions. 
The reviewer believes that the proposed approach of grouping multiple similar layers and training one SAE per group does not constitute a significant contribution to the field. Furthermore, the empirical evaluation in the paper is restricted to a small language model (Pythia 160M) and focuses on very simplistic tasks. This does not provide strong evidence of the value of the proposed method for realistic settings involving LLMs.\n\n2) There is significant scope for improving the presentation of the paper. Many design choices in the paper are not well justified (see the Questions section below).\n\n3) The authors build on many recent prior works. The reviewer believes that the authors can provide a more comprehensive background of some of these works to make the paper self-contained. It would be helpful for the reader to know how SAEs can be utilized for a particular application while studying LLMs." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please answer whether my weakness is correct."
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "* The paper provides comprehensive analysis on the end artifact of their work, such as detailed circuit analysis evals, interpretability studies and accuracy metrics.\n\n* The idea is easy to understand and the execution competently done.\n\n* The results on the circuit analysis evals look strong, as there's barely any performance hit to using the strategy according to that eval." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method that will make suites of Sparse Autoencoders (SAEs) easier to use (as they will require fewer SAEs) and easier to train (large compute saving). The method is to train SAEs on the activations from contiguous blocks of layers in the model." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper claims that there is a $(L-1) / k$ efficiency saving through using their method. But unless I misunderstand, since there are a fixed number of tokens used $T$ (1B in this case), and there will always be $LT$ total activations which all SAEs are trained on, the number of FLOPs used to train the SAEs will be **the same** using this method or not. Since language model activation saving can be amortized (e.g. https://arxiv.org/abs/2408.05147 or https://github.com/EleutherAI/sae) there is no theoretical benefit to LLM activation saving either.\n\nThe paper is titled \"Efficient Training of Sparse Autoencoders...\" and hence unless I misunderstand some method, this paper does not achieve its goal and I cannot recommend it."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- How does the circuit analysis work with the multiple layer SAEs? Can features in the “same” layer be connected? If not, might this be an unfair comparison to baselines, because there are fewer features overall to choose from?\n- Why do you think the L2 of the lower Ks is higher?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The clustering of layers to find the best groups for training a shared SAE on is interesting\n- The evaluations of the interpretability and downstream performance of the SAEs are strong\n- The problem is mostly well motivated: methods to reduce the computational bottlenecks of training large SAEs are important." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a more efficient method of training SAEs that groups similar layers together and trains a single SAE on each group. The grouping is done using agglomerative clustering between layers, where the layer-to-layer similarity is defined by the average angular distance between layer activations across 5M tokens.
The authors compare their method with 5 different values of k (number of groups) against baseline sparse autoencoders on standard SAE metrics (L0, R^2, CE Loss score, and L0), and find that it is worse on these metrics. They also compare against circuit faithfulness and completeness metrics, where their method slightly improves on baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The largest weakness of this work is that it is unclear how the proposed method works. Does it concatenate the layers? Does it train on an equivalent fraction of activations from each layer? Does it take in layer i and predict all of the other layers? \n- It is also unclear what this method actually improves on or tells us about, besides simply reducing the total number of SAEs trained (L0s, losses, and interpretability are all significantly worse). Does it actually use fewer FLOPs (since it's plausible that training on more layers requires more FLOPs)? Does it tell us something about how many features are shared across layers, and which layers share features? \n- Because of the lack of experimental details, it is very unclear how this differs from prior work in this area: Residual Stream Analysis with Multi-Layer SAEs, https://arxiv.org/pdf/2409.04185\n- The paper contains many typos and rushed writing.\n- All bar plots should have a number showing the actual value of the bar, and error bars where they make sense.\n- This work only examines residual layers, which is in some sense the “easiest” setting for this idea."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "# Major Comments\n*The following are things that, if addressed, might increase my score*\n- My overall assessment here is that it's quite plausible that this technique works, just on priors, and would be valuable to people training suites of SAEs on every layer's residual stream like Gemma Scope. I like the spirit of this paper, and think it has the potential to be a solid paper, but that right now there's significant room for improvement and I am not currently convinced of the core thesis based on the evidence in the paper. **If rebuttals go well, I'm open to increasing my score to a 6 or possibly 8.**\n- There are several crucial missing details in the paper that significantly alter how the results are to be interpreted. I wouldn't be comfortable accepting the paper until these are clarified positively, or follow-up experiments are done which clarify, especially re the amount of compute used. (Note: I tried to check the paper fairly hard for these details, but I apologise if I missed them somewhere)\n - Is a layer group SAE trained on the same number of tokens as a normal SAE? (and so on significantly more activations, since there's one per layer in the group) If so, the claim of a speedup is false; this should take the same amount of compute.
There are minor benefits, like having fewer features to care about for eg probing, and fewer SAE parameters, but these are comparatively minor I think. To get a speedup, you need to run it on num_tokens / num_layers, for the same amount of training compute as a normal SAE (and less LLM running compute). It needs to be a fair comparison, as SAEs trained for longer generally perform better on evals, so the current evals can't be trusted.\n - What does \"96 features\" mean in the human interpretability study? Is it per SAE, per SAE family (ie a set of SAEs covering all 12 layers, either grouped or baseline), or 96 total? 96 isn't that much, so this significantly changes how much to trust the study. \n - With the circuit finding evaluations, when you do mean ablations to find completeness or faithfulness, do you do mean ablations to each layer one at a time, with a separate forwards pass per layer? Or to all layers at once?\n - If it's one pass per layer, then is it just averaged to form the faithfulness and completeness graphs?\n - If all layers are done at once, does N features mean per layer, or total?\n - Are you including error terms (ie the original act - reconstruction, as done in Marks et al), or not?\n - Including error terms makes comparisons harder - if an SAE is bad, its error terms are more informative, boosting performance\n - Meanwhile, if you don't include it and take ablations at every layer, I'm very skeptical that you could get the completeness results you do. In my experience, applying SAEs at every layer at once tanks model performance to the point of near randomness\n- You say you use JumpReLU activations, but also that you use an L1 loss, unlike the JumpReLU paper's L0 loss + straight-through estimators. The threshold goes into a discrete function and so always has gradient zero and will never update unless straight-through estimators are used, suggesting that in this setup the thresholds are always zero? 
This is OK, it's just a ReLU SAE, but is misleading\n- There are various things that make it messier to apply: it's not clear how many groups to make (especially on larger models!), I'm not convinced there's not a big performance degradation when done in a compute matched way, it's not clear how well these results generalise (in particular to the larger models where this is expensive enough to matter), etc. I know these are not realistic to fully address in the rebuttal, but here are some experiments that would strengthen the work:\n - Just doing the cosine sim analysis and layer grouping across several more models, including larger ones, and seeing how many groups are needed to get max angular distance below some threshold (eg whatever it took for 4 groups here). This should be fairly cheap, I think\n - Replicating these results on a larger model, eg Gemma 2 2B. Training a full suite would of course be fairly prohibitive, but eg finding an early pair of layers with low angular distance, and showing a compute matched grouped SAE performs comparably with an SAE per layer, would be compelling and should not be too expensive, especially with a low expansion factor, low number of tokens, and stopping execution of the LLM after the target layer.\n\n\n# Minor Comments\n*The following are unlikely to change my score, but are comments and suggestions that I hope will improve the paper, and I leave it up to the authors whether to implement them, either during rebuttals or after. No need to reply to all of them in the rebuttal*\n- The distribution of residual streams in each layer is going to be different, in particular the norm and the mean will vary. I expect you could get notably better performance by pre-computing and subtracting the mean and dividing by the average norm (after mean-centering), to make layers comparable. This mean and scaling factor could even be made learnable parameters. 
I'd be curious to see this tried.\n- Similarly, residual streams typically have significant non-zero mean (though I haven't investigated Pythia specifically) which makes cosine sim/angular distance harder to interpret, I'd be curious to see Figure 2 with mean-centered cosine sim (I don't expect major changes, but should be cleaner)\n- I hypothesise that a layer grouped SAE trained on 1B tokens and acts from each layer may perform comparably to one trained on 1B tokens and a randomly chosen layer's act per token, since the acts are likely to be highly similar. This would also be a big SAE training compute reduction!\n- The up to 6x speedup claim seems like an overclaim, it seems pretty clear to me that the K=1 setting performs badly enough to probably not be worth it. I'd say K=3 is the smallest that seems reasonable, so a 3x speedup (if compute matched)\n- In Figure 3, I think it's pretty confusing to average your stats across all layers, especially things like L2 which are *very* different across layers. I would recommend either normalising (eg dividing by the baseline value), or ideally providing the per-layer info compactly. For example, a bar chart with layer on the x axis, and a group of 6 bars at each place (one for each K, and baseline). Or a line chart where the x axis is layer, and there's a line for each K and baseline. I appreciated these being included in the appendix, but this could be mentioned prominently in the main text\n - I also recommend displaying the raw change in cross-entropy loss, not the normalised CE score. 
Ablating the residual stream is extremely damaging, so normalised scores are always very high, making it hard to interpret\n- Line 193: You say JumpReLU is z * ReLU(z-theta), but it's actually z * H(z-theta), where H is the Heaviside function (1 if positive, 0 if negative)\n- Figure 2: It would be good to clarify in the caption or line 209 that the key thing to look at is the main diagonal (and that it's about each layer to the adjacent one, not to itself!), that lower means closer, and that 0.5 means \"totally perpendicular\". I figured this out eventually, but it would help to clarify it\n- I didn't find the discussion of MMCS on page 6 to be too informative. Without a baseline of comparing to another trained SAE on that layer, it's hard to really interpret what it should look like. I'd be fine with this being cut or moved to an appendix if you need the space\n- Line 337: It would be good to clarify that you do integrated gradients by intervening and linearly interpolating the *activations* not *input tokens*. [Marks et al](https://arxiv.org/abs/2403.19647) does it your way, [Hanna et al](https://arxiv.org/abs/2403.17806) does it on input tokens (and input tokens is the standard method, though IMO less principled than yours)" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- A fairly comprehensive set of evaluations is used, more than is common in such papers. 
I particularly liked the circuit based eval, modulo the concerns below\n- It's a fairly simple, elegant idea that I hadn't seen done before, and which could be a simple drop-in replacement to significantly save the cost of training suites of residual SAEs\n- Covered some key limitations clearly in the limitations section" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "- When training sparse autoencoders on the residual stream, there is often one per layer. This is likely redundant, as the residual streams at two adjacent layers are often fairly similar\n- The authors propose instead grouping layers by similarity of residual stream, using average angular distance, and training a single SAE for each group of layers. \n- The authors claim that grouping is a substantial speedup. However, the text implies (but does not explicitly state) that all SAEs are trained on 1B tokens. This means that an SAE for a group of eg 3 layers trains on 3e9 activations, while training 3 SAEs, one for each layer, trains on 1e9 activations, for the same total compute. This means it is not a speedup. If the grouped SAE was instead trained on 0.33B tokens, or even randomly sampled one layer's residual for each token, this would be a speedup. This is a crucial detail and needs to be clarified\n- The grouped SAEs are evaluated fairly carefully against the baseline of identically training an SAE per layer. The authors use a range of evaluations:\n - Standard metrics like L0, L2, CE Loss score\n - A circuit finding eval on several previously studied circuits. Authors use attribution patching to identify key SAE latents and calculate completeness and faithfulness. It does not seem to be stated whether there is a separate forward pass when ablating at each layer, or if all layers are ablated at at once.\n - A human interpretability study with 96 features. 
It is not specified whether this is 96 features per SAE, per SAE family, or total.\n- The overall conclusion is that quality and performance were preserved, which seems reasonable for K=4 or K=5 at least." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Numerous missing details, as discussed in the summary, which make it impossible to evaluate how impressive the results are\n- Only studies a single model\n- Others discussed below" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024efficient,\ntitle={Efficient Training of Sparse Autoencoders for Large Language Models via Layer Clustering},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=vSrBzCzg4G},\nnote={under review}\n}" }, "abstract": { "value": "Sparse Autoencoders (SAEs) have recently been employed as an unsupervised approach for understanding the inner workings of Large Language Models (LLMs). They reconstruct the model’s activations with a sparse linear combination of interpretable features. However, training SAEs is computationally intensive, especially as models grow in size and complexity. To address this challenge, we propose a novel training strategy that reduces the number of trained SAEs from one per layer to one for a given group of contiguous layers. Our experimental results on Pythia 160M highlight a 6x speedup without compromising the reconstruction quality and performance on downstream tasks. Therefore, layer clustering presents an efficient approach to train SAEs in modern LLMs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Sparse Autoencoders (SAEs)", "Meta Learning", "Mechanistic Interpretability", "Large Language Models (LLMs)" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/ec5f7ec17dfe2b0e60d5f54560426675c10428df.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Efficient Training of Sparse Autoencoders for Large Language Models via Layer Clustering" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]